Jan 28 00:58:28.037911 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026 Jan 28 00:58:28.037948 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:28.037962 kernel: BIOS-provided physical RAM map: Jan 28 00:58:28.037978 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 28 00:58:28.037988 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 28 00:58:28.037998 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 28 00:58:28.038010 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jan 28 00:58:28.038021 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jan 28 00:58:28.038032 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 28 00:58:28.038043 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 28 00:58:28.038054 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 28 00:58:28.038064 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 28 00:58:28.038080 kernel: NX (Execute Disable) protection: active Jan 28 00:58:28.038091 kernel: APIC: Static calls initialized Jan 28 00:58:28.038104 kernel: SMBIOS 2.8 present. Jan 28 00:58:28.038116 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jan 28 00:58:28.038128 kernel: Hypervisor detected: KVM Jan 28 00:58:28.038144 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 28 00:58:28.038156 kernel: kvm-clock: using sched offset of 4501326787 cycles Jan 28 00:58:28.038169 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 28 00:58:28.038181 kernel: tsc: Detected 2499.998 MHz processor Jan 28 00:58:28.038193 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 28 00:58:28.038205 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 28 00:58:28.038216 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jan 28 00:58:28.038228 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 28 00:58:28.038240 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 28 00:58:28.038256 kernel: Using GB pages for direct mapping Jan 28 00:58:28.038268 kernel: ACPI: Early table checksum verification disabled Jan 28 00:58:28.038642 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jan 28 00:58:28.038659 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038671 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038683 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038694 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jan 28 00:58:28.038706 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038718 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Jan 28 00:58:28.038738 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038750 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 28 00:58:28.038762 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jan 28 00:58:28.038774 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jan 28 00:58:28.038786 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jan 28 00:58:28.038804 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jan 28 00:58:28.038816 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jan 28 00:58:28.038833 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jan 28 00:58:28.038846 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jan 28 00:58:28.038858 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 28 00:58:28.041307 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 28 00:58:28.041334 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 28 00:58:28.041348 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jan 28 00:58:28.041361 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 28 00:58:28.041381 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jan 28 00:58:28.041394 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 28 00:58:28.041406 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jan 28 00:58:28.041418 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 28 00:58:28.041431 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jan 28 00:58:28.041456 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 28 00:58:28.041469 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jan 28 00:58:28.041481 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 28 00:58:28.041493 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jan 28 00:58:28.041505 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 28 00:58:28.041523 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jan 28 00:58:28.041535 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 28 00:58:28.041548 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 28 00:58:28.041560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jan 28 00:58:28.041573 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jan 28 00:58:28.041586 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jan 28 00:58:28.041598 kernel: Zone ranges: Jan 28 00:58:28.041611 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 28 00:58:28.041623 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jan 28 00:58:28.041640 kernel: Normal empty Jan 28 00:58:28.041653 kernel: Movable zone start for each node Jan 28 00:58:28.041665 kernel: Early memory node ranges Jan 28 00:58:28.041677 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 28 00:58:28.041690 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jan 28 00:58:28.041702 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jan 28 00:58:28.041714 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 28 00:58:28.041727 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 28 00:58:28.041739 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jan 28 00:58:28.041751 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 28 00:58:28.041769 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 28 00:58:28.041781 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Jan 28 00:58:28.041793 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 28 00:58:28.041806 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 28 00:58:28.041818 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 28 00:58:28.041830 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 28 00:58:28.041842 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 28 00:58:28.041855 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 28 00:58:28.041867 kernel: TSC deadline timer available Jan 28 00:58:28.041884 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jan 28 00:58:28.041896 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 28 00:58:28.041909 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 28 00:58:28.041921 kernel: Booting paravirtualized kernel on KVM Jan 28 00:58:28.041933 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 28 00:58:28.041946 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jan 28 00:58:28.041959 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144 Jan 28 00:58:28.041971 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152 Jan 28 00:58:28.041983 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jan 28 00:58:28.042000 kernel: kvm-guest: PV spinlocks enabled Jan 28 00:58:28.042013 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 28 00:58:28.042027 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:28.042040 kernel: random: crng init done Jan 28 00:58:28.042052 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 28 00:58:28.042065 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 28 00:58:28.042077 kernel: Fallback order for Node 0: 0 Jan 28 00:58:28.042089 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jan 28 00:58:28.042107 kernel: Policy zone: DMA32 Jan 28 00:58:28.042120 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 28 00:58:28.042132 kernel: software IO TLB: area num 16. Jan 28 00:58:28.042145 kernel: Memory: 1901592K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194764K reserved, 0K cma-reserved) Jan 28 00:58:28.042158 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jan 28 00:58:28.042170 kernel: Kernel/User page tables isolation: enabled Jan 28 00:58:28.042183 kernel: ftrace: allocating 37989 entries in 149 pages Jan 28 00:58:28.042195 kernel: ftrace: allocated 149 pages with 4 groups Jan 28 00:58:28.042208 kernel: Dynamic Preempt: voluntary Jan 28 00:58:28.042225 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 28 00:58:28.042242 kernel: rcu: RCU event tracing is enabled. Jan 28 00:58:28.042257 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jan 28 00:58:28.042269 kernel: Trampoline variant of Tasks RCU enabled. 
Jan 28 00:58:28.042338 kernel: Rude variant of Tasks RCU enabled. Jan 28 00:58:28.042376 kernel: Tracing variant of Tasks RCU enabled. Jan 28 00:58:28.042396 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 28 00:58:28.042409 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jan 28 00:58:28.042422 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jan 28 00:58:28.042448 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 28 00:58:28.042462 kernel: Console: colour VGA+ 80x25 Jan 28 00:58:28.042475 kernel: printk: console [tty0] enabled Jan 28 00:58:28.042494 kernel: printk: console [ttyS0] enabled Jan 28 00:58:28.042507 kernel: ACPI: Core revision 20230628 Jan 28 00:58:28.042520 kernel: APIC: Switch to symmetric I/O mode setup Jan 28 00:58:28.042533 kernel: x2apic enabled Jan 28 00:58:28.042546 kernel: APIC: Switched APIC routing to: physical x2apic Jan 28 00:58:28.042564 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 28 00:58:28.042578 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jan 28 00:58:28.042591 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 28 00:58:28.042604 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 28 00:58:28.042617 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 28 00:58:28.042630 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 28 00:58:28.042643 kernel: Spectre V2 : Mitigation: Retpolines Jan 28 00:58:28.042656 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 28 00:58:28.042669 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 28 00:58:28.042683 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 28 00:58:28.042700 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 28 00:58:28.042714 kernel: MDS: Mitigation: Clear CPU buffers Jan 28 00:58:28.042726 kernel: MMIO Stale Data: Unknown: No mitigations Jan 28 00:58:28.042739 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 28 00:58:28.042752 kernel: active return thunk: its_return_thunk Jan 28 00:58:28.042764 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 28 00:58:28.042777 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 28 00:58:28.042790 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 28 00:58:28.042803 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 28 00:58:28.042816 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 28 00:58:28.042833 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 28 00:58:28.042847 kernel: Freeing SMP alternatives memory: 32K Jan 28 00:58:28.042859 kernel: pid_max: default: 32768 minimum: 301 Jan 28 00:58:28.042873 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 28 00:58:28.042886 kernel: landlock: Up and running. Jan 28 00:58:28.042899 kernel: SELinux: Initializing. 
Jan 28 00:58:28.042911 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 00:58:28.042924 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 28 00:58:28.042937 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jan 28 00:58:28.042951 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 00:58:28.042964 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 00:58:28.042981 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jan 28 00:58:28.042995 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jan 28 00:58:28.043008 kernel: signal: max sigframe size: 1776 Jan 28 00:58:28.043021 kernel: rcu: Hierarchical SRCU implementation. Jan 28 00:58:28.043035 kernel: rcu: Max phase no-delay instances is 400. Jan 28 00:58:28.043048 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 28 00:58:28.043061 kernel: smp: Bringing up secondary CPUs ... Jan 28 00:58:28.043074 kernel: smpboot: x86: Booting SMP configuration: Jan 28 00:58:28.043087 kernel: .... node #0, CPUs: #1 Jan 28 00:58:28.043105 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 28 00:58:28.043118 kernel: smp: Brought up 1 node, 2 CPUs Jan 28 00:58:28.043131 kernel: smpboot: Max logical packages: 16 Jan 28 00:58:28.043144 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jan 28 00:58:28.043157 kernel: devtmpfs: initialized Jan 28 00:58:28.043171 kernel: x86/mm: Memory block size: 128MB Jan 28 00:58:28.043184 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 28 00:58:28.043197 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jan 28 00:58:28.043210 kernel: pinctrl core: initialized pinctrl subsystem Jan 28 00:58:28.043228 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 28 00:58:28.043241 kernel: audit: initializing netlink subsys (disabled) Jan 28 00:58:28.043254 kernel: audit: type=2000 audit(1769561906.155:1): state=initialized audit_enabled=0 res=1 Jan 28 00:58:28.043267 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 28 00:58:28.045311 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 28 00:58:28.045333 kernel: cpuidle: using governor menu Jan 28 00:58:28.045346 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 28 00:58:28.045360 kernel: dca service started, version 1.12.1 Jan 28 00:58:28.045373 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 28 00:58:28.045394 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 28 00:58:28.045408 kernel: PCI: Using configuration type 1 for base access Jan 28 00:58:28.045421 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 28 00:58:28.045445 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 28 00:58:28.045460 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 28 00:58:28.045473 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 28 00:58:28.045486 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 28 00:58:28.045499 kernel: ACPI: Added _OSI(Module Device) Jan 28 00:58:28.045512 kernel: ACPI: Added _OSI(Processor Device) Jan 28 00:58:28.045531 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 28 00:58:28.045544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 28 00:58:28.045557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 28 00:58:28.045570 kernel: ACPI: Interpreter enabled Jan 28 00:58:28.045583 kernel: ACPI: PM: (supports S0 S5) Jan 28 00:58:28.045596 kernel: ACPI: Using IOAPIC for interrupt routing Jan 28 00:58:28.045609 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 28 00:58:28.045623 kernel: PCI: Using E820 reservations for host bridge windows Jan 28 00:58:28.045636 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 28 00:58:28.045654 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 28 00:58:28.045946 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 28 00:58:28.046133 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 28 00:58:28.046337 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 28 00:58:28.046358 kernel: PCI host bridge to bus 0000:00 Jan 28 00:58:28.046597 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 28 00:58:28.046761 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 28 00:58:28.046928 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 28 00:58:28.047083 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jan 28 00:58:28.047239 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 28 00:58:28.048486 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jan 28 00:58:28.048645 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 28 00:58:28.048850 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 28 00:58:28.049055 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jan 28 00:58:28.049230 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jan 28 00:58:28.049428 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jan 28 00:58:28.049608 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jan 28 00:58:28.049773 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 28 00:58:28.049973 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.050143 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jan 28 00:58:28.052255 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.052467 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jan 28 00:58:28.052664 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.052838 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jan 28 00:58:28.053041 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.053213 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Jan 28 00:58:28.053540 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.053715 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jan 28 00:58:28.055305 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.055518 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jan 28 00:58:28.055708 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.055883 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jan 28 00:58:28.056077 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jan 28 00:58:28.056246 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jan 28 00:58:28.057508 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 28 00:58:28.057688 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 28 00:58:28.057859 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jan 28 00:58:28.058028 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jan 28 00:58:28.058195 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jan 28 00:58:28.059448 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jan 28 00:58:28.059633 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jan 28 00:58:28.059807 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jan 28 00:58:28.059979 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jan 28 00:58:28.060161 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 28 00:58:28.060358 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 28 00:58:28.060551 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 28 00:58:28.060729 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jan 28 00:58:28.060896 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jan 28 00:58:28.061085 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 28 00:58:28.061253 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 28 00:58:28.063506 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jan 28 00:58:28.063694 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jan 28 00:58:28.063884 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jan 28 00:58:28.064058 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 00:58:28.064228 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 00:58:28.065473 kernel: pci_bus 0000:02: extended config space not accessible Jan 28 00:58:28.065670 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jan 28 00:58:28.065851 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jan 28 00:58:28.066036 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 00:58:28.066208 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 00:58:28.067732 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jan 28 00:58:28.067910 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jan 28 00:58:28.068082 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 00:58:28.068248 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 00:58:28.068460 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 00:58:28.068648 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jan 28 00:58:28.068835 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Jan 28 00:58:28.069006 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 00:58:28.069173 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 00:58:28.071397 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 00:58:28.071586 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 00:58:28.071754 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 00:58:28.071923 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 00:58:28.072101 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 00:58:28.072266 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 00:58:28.073492 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 00:58:28.073665 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 00:58:28.073830 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 00:58:28.073995 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 00:58:28.074165 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 00:58:28.076358 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 00:58:28.076551 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 00:58:28.076721 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 00:58:28.076887 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 00:58:28.077052 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 00:58:28.077073 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 28 00:58:28.077086 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 28 00:58:28.077100 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 28 00:58:28.077113 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 28 00:58:28.077130 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 28 00:58:28.077165 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 28 00:58:28.077188 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 28 00:58:28.077211 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 28 00:58:28.077234 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 28 00:58:28.077257 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 28 00:58:28.077306 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 28 00:58:28.077336 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 28 00:58:28.077361 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 28 00:58:28.077384 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 28 00:58:28.077420 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 28 00:58:28.077455 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 28 00:58:28.077480 kernel: iommu: Default domain type: Translated Jan 28 00:58:28.077505 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 28 00:58:28.077528 kernel: PCI: Using ACPI for IRQ routing Jan 28 00:58:28.077552 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 28 00:58:28.077575 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 28 00:58:28.077598 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jan 28 00:58:28.077788 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 28 00:58:28.078010 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Jan 28 00:58:28.078190 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 28 00:58:28.078210 kernel: vgaarb: loaded Jan 28 00:58:28.078224 kernel: clocksource: Switched to clocksource kvm-clock Jan 28 00:58:28.078237 kernel: VFS: Disk quotas dquot_6.6.0 Jan 28 00:58:28.078251 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 28 00:58:28.078264 kernel: pnp: PnP ACPI init Jan 28 00:58:28.080552 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 28 00:58:28.080584 kernel: pnp: PnP ACPI: found 5 devices Jan 28 00:58:28.080598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 28 00:58:28.080611 kernel: NET: Registered PF_INET protocol family Jan 28 00:58:28.080625 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 28 00:58:28.080638 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 28 00:58:28.080652 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 28 00:58:28.080665 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 28 00:58:28.080679 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 28 00:58:28.080697 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 28 00:58:28.080711 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 00:58:28.080724 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 28 00:58:28.080737 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 28 00:58:28.080751 kernel: NET: Registered PF_XDP protocol family Jan 28 00:58:28.080924 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jan 28 00:58:28.081098 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 28 00:58:28.081269 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 28 00:58:28.081499 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 28 00:58:28.081673 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 28 00:58:28.081843 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 28 00:58:28.082014 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 28 00:58:28.082184 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 28 00:58:28.083471 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jan 28 00:58:28.083674 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jan 28 00:58:28.083849 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jan 28 00:58:28.084018 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jan 28 00:58:28.084187 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jan 28 00:58:28.084385 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jan 28 00:58:28.084567 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jan 28 00:58:28.084735 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jan 28 00:58:28.084908 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jan 28 00:58:28.085111 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jan 28 00:58:28.085295 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Jan 28 00:58:28.085479 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jan 28 00:58:28.085647 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jan 28 00:58:28.085815 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 00:58:28.085999 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jan 28 00:58:28.086178 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jan 28 00:58:28.086377 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jan 28 00:58:28.086563 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 00:58:28.086739 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jan 28 00:58:28.086904 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jan 28 00:58:28.087081 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jan 28 00:58:28.087269 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 00:58:28.087520 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jan 28 00:58:28.087689 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jan 28 00:58:28.087867 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jan 28 00:58:28.088050 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 00:58:28.088230 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jan 28 00:58:28.088463 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jan 28 00:58:28.088632 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jan 28 00:58:28.088797 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 00:58:28.088962 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jan 28 00:58:28.089128 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jan 28 00:58:28.089348 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jan 28 00:58:28.089530 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 00:58:28.089695 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jan 28 00:58:28.089858 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jan 28 00:58:28.090022 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jan 28 00:58:28.090196 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 00:58:28.090393 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jan 28 00:58:28.090571 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jan 28 00:58:28.090736 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jan 28 00:58:28.090900 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 00:58:28.091056 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 28 00:58:28.091206 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 28 00:58:28.091373 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 28 00:58:28.091537 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jan 28 00:58:28.091696 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 28 00:58:28.091846 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jan 28 00:58:28.092016 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 28 00:58:28.092173 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jan 28 00:58:28.092353 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jan 28 
00:58:28.092546 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jan 28 00:58:28.092716 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jan 28 00:58:28.092883 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Jan 28 00:58:28.093054 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jan 28 00:58:28.093245 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jan 28 00:58:28.093470 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jan 28 00:58:28.093630 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jan 28 00:58:28.093794 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jan 28 00:58:28.093959 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jan 28 00:58:28.094115 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jan 28 00:58:28.094354 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jan 28 00:58:28.094531 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jan 28 00:58:28.094687 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jan 28 00:58:28.094850 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jan 28 00:58:28.095006 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jan 28 00:58:28.095170 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jan 28 00:58:28.095350 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jan 28 00:58:28.095521 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jan 28 00:58:28.095676 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jan 28 00:58:28.095839 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jan 28 00:58:28.095994 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jan 28 00:58:28.096148 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jan 28 00:58:28.096176 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 28 00:58:28.096191 kernel: PCI: CLS 0 bytes, default 64 Jan 28 00:58:28.096206 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 28 00:58:28.096220 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jan 28 00:58:28.096234 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 28 00:58:28.096248 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jan 28 00:58:28.096262 kernel: Initialise system trusted keyrings Jan 28 00:58:28.096276 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 28 00:58:28.096329 kernel: Key type asymmetric registered Jan 28 00:58:28.096351 kernel: Asymmetric key parser 'x509' registered Jan 28 00:58:28.096364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 28 00:58:28.096378 kernel: io scheduler mq-deadline registered Jan 28 00:58:28.096392 kernel: io scheduler kyber registered Jan 28 00:58:28.096406 kernel: io scheduler bfq registered Jan 28 00:58:28.096589 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jan 28 00:58:28.096759 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jan 28 00:58:28.096928 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.097104 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jan 28 00:58:28.097271 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jan 
28 00:58:28.097468 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.097637 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jan 28 00:58:28.097804 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jan 28 00:58:28.097968 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.098148 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jan 28 00:58:28.098368 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jan 28 00:58:28.098552 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.098719 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jan 28 00:58:28.098885 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jan 28 00:58:28.099051 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.099227 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jan 28 00:58:28.099409 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jan 28 00:58:28.099589 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.099759 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jan 28 00:58:28.099926 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jan 28 00:58:28.100093 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.100272 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jan 28 00:58:28.100502 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jan 28 00:58:28.100671 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 28 00:58:28.100692 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 28 00:58:28.100707 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 28 00:58:28.100721 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 28 00:58:28.100735 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 28 00:58:28.100757 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 28 00:58:28.100771 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 28 00:58:28.100786 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 28 00:58:28.100805 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 28 00:58:28.100819 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 28 00:58:28.100989 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 28 00:58:28.101147 kernel: rtc_cmos 00:03: registered as rtc0 Jan 28 00:58:28.101317 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T00:58:27 UTC (1769561907) Jan 28 00:58:28.101539 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 28 00:58:28.101561 kernel: intel_pstate: CPU model not supported Jan 28 00:58:28.101576 kernel: NET: Registered PF_INET6 protocol family Jan 28 00:58:28.101589 kernel: Segment Routing with IPv6 Jan 28 00:58:28.101603 kernel: In-situ OAM (IOAM) with IPv6 Jan 28 00:58:28.101617 kernel: NET: Registered 
PF_PACKET protocol family Jan 28 00:58:28.101631 kernel: Key type dns_resolver registered Jan 28 00:58:28.101644 kernel: IPI shorthand broadcast: enabled Jan 28 00:58:28.101658 kernel: sched_clock: Marking stable (1283003750, 231710321)->(1641023033, -126308962) Jan 28 00:58:28.101680 kernel: registered taskstats version 1 Jan 28 00:58:28.101694 kernel: Loading compiled-in X.509 certificates Jan 28 00:58:28.101708 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d' Jan 28 00:58:28.101722 kernel: Key type .fscrypt registered Jan 28 00:58:28.101735 kernel: Key type fscrypt-provisioning registered Jan 28 00:58:28.101748 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 28 00:58:28.101762 kernel: ima: Allocated hash algorithm: sha1 Jan 28 00:58:28.101776 kernel: ima: No architecture policies found Jan 28 00:58:28.101790 kernel: clk: Disabling unused clocks Jan 28 00:58:28.101809 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 28 00:58:28.101823 kernel: Write protecting the kernel read-only data: 36864k Jan 28 00:58:28.101837 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 28 00:58:28.101851 kernel: Run /init as init process Jan 28 00:58:28.101865 kernel: with arguments: Jan 28 00:58:28.101878 kernel: /init Jan 28 00:58:28.101892 kernel: with environment: Jan 28 00:58:28.101905 kernel: HOME=/ Jan 28 00:58:28.101918 kernel: TERM=linux Jan 28 00:58:28.101941 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:58:28.101958 systemd[1]: Detected virtualization kvm. Jan 28 00:58:28.101973 systemd[1]: Detected architecture x86-64. Jan 28 00:58:28.101987 systemd[1]: Running in initrd. Jan 28 00:58:28.102002 systemd[1]: No hostname configured, using default hostname. Jan 28 00:58:28.102016 systemd[1]: Hostname set to . Jan 28 00:58:28.102031 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:58:28.102051 systemd[1]: Queued start job for default target initrd.target. Jan 28 00:58:28.102066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:58:28.102081 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:58:28.102097 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 28 00:58:28.102112 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:58:28.102126 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 28 00:58:28.102141 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 28 00:58:28.102166 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 28 00:58:28.102182 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 28 00:58:28.102197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:58:28.102212 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 28 00:58:28.102227 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:58:28.102241 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:58:28.102256 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:58:28.102271 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:58:28.102338 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:58:28.102355 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:58:28.102371 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 00:58:28.102386 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 00:58:28.102401 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:58:28.102421 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:58:28.102448 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:58:28.102464 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:58:28.102479 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 28 00:58:28.102500 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:58:28.102533 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 28 00:58:28.102548 systemd[1]: Starting systemd-fsck-usr.service... Jan 28 00:58:28.102563 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:58:28.102578 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:58:28.102592 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:58:28.102653 systemd-journald[203]: Collecting audit messages is disabled. Jan 28 00:58:28.102694 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 28 00:58:28.102709 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:58:28.102724 systemd[1]: Finished systemd-fsck-usr.service. Jan 28 00:58:28.102745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 00:58:28.102761 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 28 00:58:28.102775 kernel: Bridge firewalling registered Jan 28 00:58:28.102790 systemd-journald[203]: Journal started Jan 28 00:58:28.102823 systemd-journald[203]: Runtime Journal (/run/log/journal/3e024abe83cf4d31ae5ff3483f1ddb4a) is 4.7M, max 38.0M, 33.2M free. Jan 28 00:58:28.034233 systemd-modules-load[204]: Inserted module 'overlay' Jan 28 00:58:28.080543 systemd-modules-load[204]: Inserted module 'br_netfilter' Jan 28 00:58:28.156304 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:58:28.156129 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:58:28.157114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:28.162232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 00:58:28.168479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:28.181576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 28 00:58:28.183715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:58:28.192065 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:58:28.203996 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:58:28.216505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:58:28.221390 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:58:28.222610 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:28.229527 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 28 00:58:28.233536 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:58:28.250779 dracut-cmdline[239]: dracut-dracut-053 Jan 28 00:58:28.259690 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2 Jan 28 00:58:28.288061 systemd-resolved[240]: Positive Trust Anchors: Jan 28 00:58:28.288082 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:58:28.288133 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:58:28.294128 systemd-resolved[240]: Defaulting to hostname 'linux'. Jan 28 00:58:28.295857 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 00:58:28.297660 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:58:28.368374 kernel: SCSI subsystem initialized Jan 28 00:58:28.379312 kernel: Loading iSCSI transport class v2.0-870. Jan 28 00:58:28.392314 kernel: iscsi: registered transport (tcp) Jan 28 00:58:28.418939 kernel: iscsi: registered transport (qla4xxx) Jan 28 00:58:28.419027 kernel: QLogic iSCSI HBA Driver Jan 28 00:58:28.474873 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 28 00:58:28.480522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 28 00:58:28.521040 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 28 00:58:28.521127 kernel: device-mapper: uevent: version 1.0.3 Jan 28 00:58:28.521168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 28 00:58:28.572323 kernel: raid6: sse2x4 gen() 13861 MB/s Jan 28 00:58:28.590555 kernel: raid6: sse2x2 gen() 9750 MB/s Jan 28 00:58:28.608956 kernel: raid6: sse2x1 gen() 10318 MB/s Jan 28 00:58:28.609043 kernel: raid6: using algorithm sse2x4 gen() 13861 MB/s Jan 28 00:58:28.627989 kernel: raid6: .... xor() 7766 MB/s, rmw enabled Jan 28 00:58:28.628074 kernel: raid6: using ssse3x2 recovery algorithm Jan 28 00:58:28.654317 kernel: xor: automatically using best checksumming function avx Jan 28 00:58:28.847320 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 28 00:58:28.862039 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:58:28.870694 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:58:28.890584 systemd-udevd[423]: Using default interface naming scheme 'v255'. Jan 28 00:58:28.897644 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:58:28.909495 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 28 00:58:28.930240 dracut-pre-trigger[433]: rd.md=0: removing MD RAID activation Jan 28 00:58:28.971097 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:58:28.976475 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:58:29.112480 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:58:29.122523 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 28 00:58:29.158117 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 28 00:58:29.162270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:58:29.166002 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:58:29.167497 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:58:29.178537 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 28 00:58:29.195613 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:58:29.263309 kernel: cryptd: max_cpu_qlen set to 1000 Jan 28 00:58:29.272309 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jan 28 00:58:29.291797 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:58:29.292009 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:29.295292 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:29.296337 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:58:29.296560 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:29.298118 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 28 00:58:29.324474 kernel: ACPI: bus type USB registered Jan 28 00:58:29.324546 kernel: usbcore: registered new interface driver usbfs Jan 28 00:58:29.324584 kernel: usbcore: registered new interface driver hub Jan 28 00:58:29.324619 kernel: usbcore: registered new device driver usb Jan 28 00:58:29.324653 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 28 00:58:29.317514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:58:29.333147 kernel: AVX version of gcm_enc/dec engaged. Jan 28 00:58:29.333198 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 00:58:29.334302 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 28 00:58:29.336652 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 00:58:29.345302 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 28 00:58:29.345336 kernel: GPT:17805311 != 125829119 Jan 28 00:58:29.345356 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 28 00:58:29.345383 kernel: GPT:17805311 != 125829119 Jan 28 00:58:29.345402 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 00:58:29.345446 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:29.354456 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 00:58:29.354743 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 28 00:58:29.357619 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 28 00:58:29.357842 kernel: hub 1-0:1.0: USB hub found Jan 28 00:58:29.358086 kernel: hub 1-0:1.0: 4 ports detected Jan 28 00:58:29.358342 kernel: AES CTR mode by8 optimization enabled Jan 28 00:58:29.358365 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 28 00:58:29.362057 kernel: hub 2-0:1.0: USB hub found Jan 28 00:58:29.362323 kernel: hub 2-0:1.0: 4 ports detected Jan 28 00:58:29.378442 kernel: libata version 3.00 loaded. Jan 28 00:58:29.437346 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Jan 28 00:58:29.443341 kernel: ahci 0000:00:1f.2: version 3.0 Jan 28 00:58:29.443643 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 28 00:58:29.443883 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Jan 28 00:58:29.520401 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 28 00:58:29.520802 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 28 00:58:29.521020 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (481) Jan 28 00:58:29.521043 kernel: scsi host0: ahci Jan 28 00:58:29.521322 kernel: scsi host1: ahci Jan 28 00:58:29.521545 kernel: scsi host2: ahci Jan 28 00:58:29.521775 kernel: scsi host3: ahci Jan 28 00:58:29.521996 kernel: scsi host4: ahci Jan 28 00:58:29.522210 kernel: scsi host5: ahci Jan 28 00:58:29.522512 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jan 28 00:58:29.522534 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jan 28 00:58:29.522562 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jan 28 00:58:29.522598 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jan 28 00:58:29.522617 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jan 28 00:58:29.522636 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jan 28 00:58:29.521679 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:29.535136 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:58:29.547572 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 28 00:58:29.559082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 28 00:58:29.560095 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 28 00:58:29.567563 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 28 00:58:29.576787 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 28 00:58:29.585260 disk-uuid[567]: Primary Header is updated. Jan 28 00:58:29.585260 disk-uuid[567]: Secondary Entries is updated. Jan 28 00:58:29.585260 disk-uuid[567]: Secondary Header is updated. Jan 28 00:58:29.595323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:29.595402 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 00:58:29.605176 kernel: GPT:disk_guids don't match. Jan 28 00:58:29.605261 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 28 00:58:29.605296 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:29.607229 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 00:58:29.614340 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:29.774325 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 00:58:29.778307 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.778349 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.779705 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.781415 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.784237 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.787101 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 28 00:58:29.802342 kernel: usbcore: registered new interface driver usbhid Jan 28 00:58:29.802419 kernel: usbhid: USB HID core driver Jan 28 00:58:29.826316 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 28 00:58:29.832449 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 28 00:58:30.612343 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 00:58:30.614319 disk-uuid[568]: The operation has completed successfully. Jan 28 00:58:30.667461 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 00:58:30.667620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 00:58:30.686508 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 00:58:30.692583 sh[589]: Success Jan 28 00:58:30.711337 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 28 00:58:30.774992 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 00:58:30.788460 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 00:58:30.791423 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 28 00:58:30.813828 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 Jan 28 00:58:30.813926 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:58:30.816007 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 00:58:30.819366 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 00:58:30.819420 kernel: BTRFS info (device dm-0): using free space tree Jan 28 00:58:30.831305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 00:58:30.833448 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 00:58:30.839515 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 00:58:30.841324 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 00:58:30.864666 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:58:30.865144 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:58:30.865171 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:58:30.870373 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:58:30.885308 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:58:30.885175 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 00:58:30.894111 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 28 00:58:30.899481 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 00:58:31.003574 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:58:31.015477 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:58:31.048407 ignition[687]: Ignition 2.19.0 Jan 28 00:58:31.048433 ignition[687]: Stage: fetch-offline Jan 28 00:58:31.048540 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:31.048559 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:31.048772 ignition[687]: parsed url from cmdline: "" Jan 28 00:58:31.048785 ignition[687]: no config URL provided Jan 28 00:58:31.048795 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:58:31.053716 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:58:31.048811 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:58:31.057523 systemd-networkd[773]: lo: Link UP Jan 28 00:58:31.048821 ignition[687]: failed to fetch config: resource requires networking Jan 28 00:58:31.057529 systemd-networkd[773]: lo: Gained carrier Jan 28 00:58:31.049266 ignition[687]: Ignition finished successfully Jan 28 00:58:31.060016 systemd-networkd[773]: Enumeration completed Jan 28 00:58:31.060132 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:58:31.060682 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:58:31.060688 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:58:31.063116 systemd[1]: Reached target network.target - Network. Jan 28 00:58:31.063246 systemd-networkd[773]: eth0: Link UP Jan 28 00:58:31.063252 systemd-networkd[773]: eth0: Gained carrier Jan 28 00:58:31.063264 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:58:31.070647 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 00:58:31.093497 ignition[780]: Ignition 2.19.0 Jan 28 00:58:31.093517 ignition[780]: Stage: fetch Jan 28 00:58:31.093819 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:31.093840 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:31.097367 systemd-networkd[773]: eth0: DHCPv4 address 10.244.8.18/30, gateway 10.244.8.17 acquired from 10.244.8.17 Jan 28 00:58:31.093991 ignition[780]: parsed url from cmdline: "" Jan 28 00:58:31.093998 ignition[780]: no config URL provided Jan 28 00:58:31.094008 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 00:58:31.094025 ignition[780]: no config at "/usr/lib/ignition/user.ign" Jan 28 00:58:31.094166 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 00:58:31.094210 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
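[Editor's note] The fetch-offline messages above walk a fixed precedence: a config URL from the kernel command line, then the baked-in /usr/lib/ignition/user.ign, and only then the platform's networked source, which is why this stage ends with "resource requires networking" and hands off to the fetch stage. A rough Python sketch of that precedence, using only the paths and messages named in the log (not Ignition's actual code):

import os

USER_IGN = "/usr/lib/ignition/user.ign"

def locate_config(cmdline_url=""):
    """Sketch of the config-source precedence the fetch-offline stage logs above:
    command-line URL first, then the system user.ign, then a networked fallback."""
    if cmdline_url:
        return ("url", cmdline_url)
    print("no config URL provided")
    print(f'reading system config file "{USER_IGN}"')
    if os.path.exists(USER_IGN):
        with open(USER_IGN, "rb") as f:
            return ("file", f.read())
    print(f'no config at "{USER_IGN}"')
    # At this point networking is required; in the initrd the fetch-offline stage
    # gives up here and the networked fetch stage takes over.
    raise RuntimeError("failed to fetch config: resource requires networking")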
Jan 28 00:58:31.094255 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 00:58:31.097308 ignition[780]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 28 00:58:31.297526 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Jan 28 00:58:31.313151 ignition[780]: GET result: OK Jan 28 00:58:31.314111 ignition[780]: parsing config with SHA512: 51ae1bb42a8909c6db76e8e8d0203a036944607b30fc4e3b9e3ff4777e01b9c5a7377b1d193a889013a86892ac793453cbd87c457f1cf125004dacc35e17e035 Jan 28 00:58:31.319946 unknown[780]: fetched base config from "system" Jan 28 00:58:31.321113 unknown[780]: fetched base config from "system" Jan 28 00:58:31.321574 ignition[780]: fetch: fetch complete Jan 28 00:58:31.321128 unknown[780]: fetched user config from "openstack" Jan 28 00:58:31.321585 ignition[780]: fetch: fetch passed Jan 28 00:58:31.325765 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 00:58:31.321654 ignition[780]: Ignition finished successfully Jan 28 00:58:31.348523 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 00:58:31.367403 ignition[787]: Ignition 2.19.0 Jan 28 00:58:31.367426 ignition[787]: Stage: kargs Jan 28 00:58:31.367684 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:31.367706 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:31.370944 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 00:58:31.369054 ignition[787]: kargs: kargs passed Jan 28 00:58:31.369129 ignition[787]: Ignition finished successfully Jan 28 00:58:31.383985 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 00:58:31.401719 ignition[793]: Ignition 2.19.0 Jan 28 00:58:31.401741 ignition[793]: Stage: disks Jan 28 00:58:31.401985 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:31.405359 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 28 00:58:31.402006 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:31.406798 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 00:58:31.403307 ignition[793]: disks: disks passed Jan 28 00:58:31.407875 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 00:58:31.403391 ignition[793]: Ignition finished successfully Jan 28 00:58:31.409622 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:58:31.411113 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:58:31.412438 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:58:31.422519 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 00:58:31.443032 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 28 00:58:31.448345 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 00:58:31.458456 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 00:58:31.581308 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 00:58:31.582129 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 00:58:31.583649 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 00:58:31.590409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
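[Editor's note] The fetch stage above retries the OpenStack metadata endpoint until the link is up (attempt #1 fails with "network is unreachable", attempt #2 succeeds) and then logs a SHA512 of the retrieved config before parsing it. A small Python sketch of that fetch-with-retry-and-digest flow, using the URL shown in the log; the retry count and backoff values here are arbitrary:

import hashlib
import time
import urllib.error
import urllib.request

USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_user_data(retries=5, delay=0.2):
    """Fetch the OpenStack user_data with simple retries, then log its SHA512,
    mirroring the attempt #1 / attempt #2 / 'parsing config with SHA512' lines above."""
    for attempt in range(1, retries + 1):
        print(f"GET {USER_DATA_URL}: attempt #{attempt}")
        try:
            with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
                data = resp.read()
            print("GET result: OK")
            print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
            return data
        except (urllib.error.URLError, OSError) as err:
            print(f"GET error: {err}")
            time.sleep(delay * attempt)   # back off a little before retrying
    raise RuntimeError("could not reach the metadata service")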
Jan 28 00:58:31.601546 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 00:58:31.603790 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 00:58:31.607537 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 28 00:58:31.608696 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 00:58:31.608740 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:58:31.620303 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809) Jan 28 00:58:31.619960 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 00:58:31.635219 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:58:31.635254 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:58:31.635310 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:58:31.644301 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:58:31.641724 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 00:58:31.650064 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:58:31.717024 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 00:58:31.725509 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory Jan 28 00:58:31.733635 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 00:58:31.740299 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 00:58:31.862113 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 00:58:31.867412 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 00:58:31.873612 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 00:58:31.885014 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 00:58:31.887675 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:58:31.910693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 28 00:58:31.929009 ignition[926]: INFO : Ignition 2.19.0 Jan 28 00:58:31.929009 ignition[926]: INFO : Stage: mount Jan 28 00:58:31.931690 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:31.931690 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:31.931690 ignition[926]: INFO : mount: mount passed Jan 28 00:58:31.931690 ignition[926]: INFO : Ignition finished successfully Jan 28 00:58:31.931972 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 00:58:32.736754 systemd-networkd[773]: eth0: Gained IPv6LL Jan 28 00:58:34.247464 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:204:24:19ff:fef4:812/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:204:24:19ff:fef4:812/64 assigned by NDisc. Jan 28 00:58:34.247482 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 28 00:58:38.788750 coreos-metadata[811]: Jan 28 00:58:38.788 WARN failed to locate config-drive, using the metadata service API instead Jan 28 00:58:38.813435 coreos-metadata[811]: Jan 28 00:58:38.813 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 00:58:38.827984 coreos-metadata[811]: Jan 28 00:58:38.827 INFO Fetch successful Jan 28 00:58:38.830210 coreos-metadata[811]: Jan 28 00:58:38.830 INFO wrote hostname srv-8h12l.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 00:58:38.832832 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 28 00:58:38.833028 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 00:58:38.839400 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 00:58:38.877623 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 00:58:38.907075 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942) Jan 28 00:58:38.911608 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 00:58:38.911650 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 00:58:38.913524 kernel: BTRFS info (device vda6): using free space tree Jan 28 00:58:38.919315 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 00:58:38.922331 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 00:58:38.952610 ignition[960]: INFO : Ignition 2.19.0 Jan 28 00:58:38.952610 ignition[960]: INFO : Stage: files Jan 28 00:58:38.952610 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:38.952610 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:38.952610 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Jan 28 00:58:38.958362 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 00:58:38.958362 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 00:58:38.960446 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 00:58:38.961624 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 00:58:38.963061 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 00:58:38.962878 unknown[960]: wrote ssh authorized keys file for user: core Jan 28 00:58:38.965347 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 00:58:38.966704 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 28 00:58:39.199067 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): 
[started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:58:39.496350 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 00:58:39.512581 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 28 00:58:39.991761 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 28 00:58:43.036582 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 28 00:58:43.036582 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 00:58:43.041536 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:58:43.041536 ignition[960]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 00:58:43.041536 ignition[960]: INFO : files: files passed Jan 28 00:58:43.041536 ignition[960]: INFO : Ignition finished successfully Jan 28 00:58:43.045228 systemd[1]: Finished ignition-files.service - 
Ignition (files). Jan 28 00:58:43.061790 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 00:58:43.071621 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 00:58:43.077702 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 00:58:43.077934 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 28 00:58:43.091807 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:58:43.091807 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:58:43.094628 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 00:58:43.096795 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:58:43.098803 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 00:58:43.107698 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 00:58:43.140334 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 00:58:43.140531 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 00:58:43.142741 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 00:58:43.143862 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 00:58:43.145607 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 00:58:43.153529 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 00:58:43.172202 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:58:43.176523 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 00:58:43.194407 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:58:43.196219 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:58:43.197108 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 00:58:43.197862 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 00:58:43.198041 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 00:58:43.199716 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 00:58:43.200642 systemd[1]: Stopped target basic.target - Basic System. Jan 28 00:58:43.201979 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 00:58:43.203584 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 00:58:43.205180 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 00:58:43.206711 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 00:58:43.208096 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 00:58:43.209757 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 00:58:43.211398 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 00:58:43.212845 systemd[1]: Stopped target swap.target - Swaps. Jan 28 00:58:43.214365 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 28 00:58:43.214551 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 00:58:43.216636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:58:43.217670 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:58:43.219056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 00:58:43.221342 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:58:43.223149 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 00:58:43.223418 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 00:58:43.225235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 00:58:43.225430 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 00:58:43.226565 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 00:58:43.226762 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 00:58:43.234510 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 00:58:43.235329 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 00:58:43.235574 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:58:43.244383 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 00:58:43.245169 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 00:58:43.246441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:58:43.250928 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 00:58:43.251105 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 00:58:43.268954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 00:58:43.269134 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 00:58:43.273434 ignition[1012]: INFO : Ignition 2.19.0 Jan 28 00:58:43.273434 ignition[1012]: INFO : Stage: umount Jan 28 00:58:43.273434 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 00:58:43.273434 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 00:58:43.279095 ignition[1012]: INFO : umount: umount passed Jan 28 00:58:43.279095 ignition[1012]: INFO : Ignition finished successfully Jan 28 00:58:43.277957 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 00:58:43.278144 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 00:58:43.280053 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 00:58:43.280130 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 00:58:43.282341 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 00:58:43.282440 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 00:58:43.284765 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 00:58:43.284852 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 00:58:43.286236 systemd[1]: Stopped target network.target - Network. Jan 28 00:58:43.292238 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 00:58:43.292353 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 00:58:43.293827 systemd[1]: Stopped target paths.target - Path Units. 
Jan 28 00:58:43.295404 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 00:58:43.299365 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:58:43.300141 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 00:58:43.301736 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 00:58:43.303147 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 00:58:43.303232 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 28 00:58:43.304705 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 00:58:43.304769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 00:58:43.306405 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 00:58:43.306483 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 00:58:43.307859 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 00:58:43.307927 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 00:58:43.309573 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 00:58:43.310818 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 00:58:43.314770 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 00:58:43.315662 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 00:58:43.315842 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 00:58:43.316542 systemd-networkd[773]: eth0: DHCPv6 lease lost Jan 28 00:58:43.317874 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 00:58:43.318022 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 00:58:43.320620 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 00:58:43.320820 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 00:58:43.323809 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 00:58:43.324882 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 00:58:43.327198 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 00:58:43.327692 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 00:58:43.334445 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 00:58:43.335987 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 00:58:43.336063 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 00:58:43.337997 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 00:58:43.338065 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:58:43.338826 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 00:58:43.338893 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 00:58:43.339663 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 00:58:43.339728 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:58:43.341727 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:58:43.349878 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 00:58:43.351376 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 28 00:58:43.354517 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 00:58:43.354608 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 00:58:43.356202 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 00:58:43.356271 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:58:43.357705 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 00:58:43.357775 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 00:58:43.361982 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 00:58:43.362057 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 00:58:43.363433 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 00:58:43.363508 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 00:58:43.371488 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 00:58:43.373400 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 00:58:43.374351 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:58:43.376185 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 00:58:43.376256 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:43.377785 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 00:58:43.380549 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 00:58:43.381937 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 00:58:43.382090 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 00:58:43.384150 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 00:58:43.391501 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 00:58:43.402841 systemd[1]: Switching root. Jan 28 00:58:43.438497 systemd-journald[203]: Journal stopped Jan 28 00:58:44.944170 systemd-journald[203]: Received SIGTERM from PID 1 (systemd). Jan 28 00:58:44.949779 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 00:58:44.949844 kernel: SELinux: policy capability open_perms=1 Jan 28 00:58:44.949884 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 00:58:44.949914 kernel: SELinux: policy capability always_check_network=0 Jan 28 00:58:44.949944 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 00:58:44.949975 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 00:58:44.950003 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 00:58:44.950040 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 00:58:44.950080 kernel: audit: type=1403 audit(1769561923.675:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 00:58:44.950118 systemd[1]: Successfully loaded SELinux policy in 59.834ms. Jan 28 00:58:44.950177 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.691ms. Jan 28 00:58:44.950208 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 00:58:44.950237 systemd[1]: Detected virtualization kvm. 
Jan 28 00:58:44.950272 systemd[1]: Detected architecture x86-64. Jan 28 00:58:44.951795 systemd[1]: Detected first boot. Jan 28 00:58:44.951851 systemd[1]: Hostname set to <srv-8h12l.gb1.brightbox.com>. Jan 28 00:58:44.951876 systemd[1]: Initializing machine ID from VM UUID. Jan 28 00:58:44.951907 zram_generator::config[1059]: No configuration found. Jan 28 00:58:44.951939 systemd[1]: Populated /etc with preset unit settings. Jan 28 00:58:44.951970 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 28 00:58:44.951998 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 28 00:58:44.952030 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 28 00:58:44.952054 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 00:58:44.952082 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 00:58:44.952117 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 00:58:44.952154 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 00:58:44.952176 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 00:58:44.952198 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 00:58:44.952226 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 00:58:44.952254 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 00:58:44.961315 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 00:58:44.961389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 00:58:44.961415 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 00:58:44.961459 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 00:58:44.961482 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 00:58:44.961511 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 00:58:44.961540 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 00:58:44.961571 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 00:58:44.961593 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 28 00:58:44.961615 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 28 00:58:44.961649 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 28 00:58:44.961671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 00:58:44.961706 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 00:58:44.961737 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 00:58:44.961765 systemd[1]: Reached target slices.target - Slice Units. Jan 28 00:58:44.961788 systemd[1]: Reached target swap.target - Swaps. Jan 28 00:58:44.961809 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 00:58:44.961830 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 00:58:44.961865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
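[Editor's note] "Initializing machine ID from VM UUID" above refers to deriving /etc/machine-id on first boot from the hypervisor-provided DMI product UUID. A hedged Python sketch of that derivation, assuming the UUID only needs to be lowercased and stripped of dashes (systemd's real handling covers more edge cases):

import pathlib
import re

def machine_id_from_vm_uuid(dmi_path="/sys/class/dmi/id/product_uuid"):
    """Turn the DMI product UUID into a 32-hex-digit machine-id string, in the
    spirit of the 'Initializing machine ID from VM UUID' message above.
    Assumption: lowercasing and dropping dashes is all the normalization needed."""
    uuid = pathlib.Path(dmi_path).read_text().strip().lower()
    machine_id = re.sub(r"[^0-9a-f]", "", uuid)
    if len(machine_id) != 32:
        raise ValueError(f"unexpected UUID format: {uuid!r}")
    return machine_id

# e.g. print(machine_id_from_vm_uuid())  # reading product_uuid may require root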
Jan 28 00:58:44.961917 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 00:58:44.961947 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 00:58:44.961970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 00:58:44.961997 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 00:58:44.962019 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 00:58:44.962053 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 00:58:44.962077 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:44.962098 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 00:58:44.962119 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 00:58:44.962160 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 00:58:44.962184 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 00:58:44.962212 systemd[1]: Reached target machines.target - Containers. Jan 28 00:58:44.962234 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 00:58:44.962269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:58:44.967319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 00:58:44.967363 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 00:58:44.967387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:58:44.967409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:58:44.967440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:58:44.967462 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 00:58:44.967491 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:58:44.967528 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 00:58:44.967552 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 28 00:58:44.967573 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 28 00:58:44.967608 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 28 00:58:44.967630 systemd[1]: Stopped systemd-fsck-usr.service. Jan 28 00:58:44.967651 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 00:58:44.967672 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 00:58:44.967693 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 00:58:44.967722 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 00:58:44.967757 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 00:58:44.967786 systemd[1]: verity-setup.service: Deactivated successfully. Jan 28 00:58:44.967809 systemd[1]: Stopped verity-setup.service. 
Jan 28 00:58:44.967830 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:44.967851 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 00:58:44.967880 kernel: loop: module loaded Jan 28 00:58:44.967903 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 00:58:44.967931 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 00:58:44.967959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 00:58:44.967995 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 00:58:44.968023 kernel: fuse: init (API version 7.39) Jan 28 00:58:44.968044 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 00:58:44.968065 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 00:58:44.968086 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 28 00:58:44.968120 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 00:58:44.968158 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:58:44.968181 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:58:44.968202 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:58:44.968223 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:58:44.968270 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 00:58:44.969435 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 00:58:44.969464 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:58:44.969486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:58:44.969508 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 00:58:44.969551 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 00:58:44.969584 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 00:58:44.969606 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 00:58:44.969682 systemd-journald[1141]: Collecting audit messages is disabled. Jan 28 00:58:44.969750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 00:58:44.969775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 00:58:44.969797 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 00:58:44.969818 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 00:58:44.969846 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 00:58:44.969869 systemd-journald[1141]: Journal started Jan 28 00:58:44.969917 systemd-journald[1141]: Runtime Journal (/run/log/journal/3e024abe83cf4d31ae5ff3483f1ddb4a) is 4.7M, max 38.0M, 33.2M free. Jan 28 00:58:44.976339 kernel: ACPI: bus type drm_connector registered Jan 28 00:58:44.976403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 00:58:44.497087 systemd[1]: Queued start job for default target multi-user.target. Jan 28 00:58:44.520273 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Jan 28 00:58:44.521033 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 28 00:58:44.988989 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 00:58:44.989065 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:58:45.000307 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 00:58:45.006454 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:58:45.021143 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 00:58:45.027833 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:58:45.032620 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 00:58:45.043303 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 00:58:45.051710 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 00:58:45.052360 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 00:58:45.054727 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:58:45.054983 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:58:45.056557 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 28 00:58:45.058590 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 00:58:45.060339 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 00:58:45.095163 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 00:58:45.104385 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 00:58:45.105303 kernel: loop0: detected capacity change from 0 to 8 Jan 28 00:58:45.118757 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 00:58:45.125603 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 00:58:45.127193 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 00:58:45.131494 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 00:58:45.160243 systemd-journald[1141]: Time spent on flushing to /var/log/journal/3e024abe83cf4d31ae5ff3483f1ddb4a is 122.011ms for 1146 entries. Jan 28 00:58:45.160243 systemd-journald[1141]: System Journal (/var/log/journal/3e024abe83cf4d31ae5ff3483f1ddb4a) is 8.0M, max 584.8M, 576.8M free. Jan 28 00:58:45.300753 systemd-journald[1141]: Received client request to flush runtime journal. Jan 28 00:58:45.300821 kernel: loop1: detected capacity change from 0 to 140768 Jan 28 00:58:45.300850 kernel: loop2: detected capacity change from 0 to 142488 Jan 28 00:58:45.191724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 00:58:45.227261 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 00:58:45.229242 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 00:58:45.240780 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 00:58:45.252525 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Jan 28 00:58:45.301089 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 00:58:45.320515 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 00:58:45.322605 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 00:58:45.332853 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 28 00:58:45.354348 kernel: loop3: detected capacity change from 0 to 219144 Jan 28 00:58:45.396326 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 28 00:58:45.396905 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jan 28 00:58:45.404302 kernel: loop4: detected capacity change from 0 to 8 Jan 28 00:58:45.412300 kernel: loop5: detected capacity change from 0 to 140768 Jan 28 00:58:45.416549 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 00:58:45.440480 kernel: loop6: detected capacity change from 0 to 142488 Jan 28 00:58:45.460268 kernel: loop7: detected capacity change from 0 to 219144 Jan 28 00:58:45.481178 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 28 00:58:45.484089 (sd-merge)[1212]: Merged extensions into '/usr'. Jan 28 00:58:45.492775 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 00:58:45.492925 systemd[1]: Reloading... Jan 28 00:58:45.650837 zram_generator::config[1248]: No configuration found. Jan 28 00:58:45.858268 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 00:58:45.973355 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:58:46.042593 systemd[1]: Reloading finished in 548 ms. Jan 28 00:58:46.075563 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 00:58:46.082081 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 00:58:46.097521 systemd[1]: Starting ensure-sysext.service... Jan 28 00:58:46.112635 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 00:58:46.141173 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 00:58:46.141847 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 00:58:46.143390 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 00:58:46.143790 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 28 00:58:46.143900 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 28 00:58:46.148611 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 00:58:46.148630 systemd-tmpfiles[1296]: Skipping /boot Jan 28 00:58:46.148943 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)... Jan 28 00:58:46.148977 systemd[1]: Reloading... Jan 28 00:58:46.164126 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 28 00:58:46.164145 systemd-tmpfiles[1296]: Skipping /boot Jan 28 00:58:46.263320 zram_generator::config[1329]: No configuration found. Jan 28 00:58:46.428484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:58:46.496031 systemd[1]: Reloading finished in 346 ms. Jan 28 00:58:46.521088 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 00:58:46.526985 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 00:58:46.540515 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:58:46.546490 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 00:58:46.555554 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 00:58:46.561540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 00:58:46.566488 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 00:58:46.577256 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 00:58:46.584293 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.584572 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:58:46.594214 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 00:58:46.601619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 00:58:46.607619 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 00:58:46.609555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:58:46.609727 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.613315 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.613586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:58:46.613817 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:58:46.613948 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.620858 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.621204 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 00:58:46.628619 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 00:58:46.630503 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 00:58:46.638684 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 28 00:58:46.639497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 00:58:46.642082 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 00:58:46.652623 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 00:58:46.655446 systemd[1]: Finished ensure-sysext.service. Jan 28 00:58:46.656821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 00:58:46.657349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 00:58:46.679389 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 28 00:58:46.692529 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 00:58:46.696921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 00:58:46.712452 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 00:58:46.734897 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 00:58:46.766839 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 00:58:46.767469 augenrules[1418]: No rules Jan 28 00:58:46.767672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 00:58:46.771348 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:58:46.772610 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 00:58:46.773357 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 00:58:46.775751 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 00:58:46.775972 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 00:58:46.785007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 00:58:46.785212 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 00:58:46.791371 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 00:58:46.812470 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 28 00:58:46.834566 systemd-resolved[1389]: Positive Trust Anchors: Jan 28 00:58:46.835045 systemd-resolved[1389]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 00:58:46.835234 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 00:58:46.842529 systemd-resolved[1389]: Using system hostname 'srv-8h12l.gb1.brightbox.com'. Jan 28 00:58:46.845391 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 28 00:58:46.846640 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 00:58:46.859137 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 00:58:46.869526 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 00:58:46.880177 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 00:58:46.882146 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 00:58:47.004937 systemd-networkd[1431]: lo: Link UP Jan 28 00:58:47.004952 systemd-networkd[1431]: lo: Gained carrier Jan 28 00:58:47.007534 systemd-networkd[1431]: Enumeration completed Jan 28 00:58:47.008127 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:58:47.008133 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 00:58:47.008428 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 00:58:47.009361 systemd[1]: Reached target network.target - Network. Jan 28 00:58:47.016381 systemd-networkd[1431]: eth0: Link UP Jan 28 00:58:47.016395 systemd-networkd[1431]: eth0: Gained carrier Jan 28 00:58:47.016419 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:58:47.020542 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 00:58:47.022578 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 28 00:58:47.039378 systemd-networkd[1431]: eth0: DHCPv4 address 10.244.8.18/30, gateway 10.244.8.17 acquired from 10.244.8.17 Jan 28 00:58:47.042195 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 28 00:58:47.071377 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 00:58:47.104356 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 00:58:47.117313 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1443) Jan 28 00:58:47.122350 kernel: ACPI: button: Power Button [PWRF] Jan 28 00:58:47.153316 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 00:58:47.216307 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 00:58:47.220664 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 00:58:47.220951 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 00:58:47.234306 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 28 00:58:47.258154 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 00:58:47.274746 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 00:58:47.324070 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 00:58:47.367749 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 00:58:47.506597 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 00:58:47.530798 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jan 28 00:58:47.537508 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 00:58:47.565260 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:58:47.601866 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 00:58:47.603066 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 00:58:47.603868 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 00:58:47.604777 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 00:58:47.605795 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 00:58:47.606954 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 00:58:47.607876 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 00:58:47.608702 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 00:58:47.609504 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 00:58:47.609553 systemd[1]: Reached target paths.target - Path Units. Jan 28 00:58:47.610212 systemd[1]: Reached target timers.target - Timer Units. Jan 28 00:58:47.611921 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 00:58:47.614720 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 00:58:47.621571 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 00:58:47.624190 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 00:58:47.625584 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 00:58:47.626432 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 00:58:47.627104 systemd[1]: Reached target basic.target - Basic System. Jan 28 00:58:47.627826 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:58:47.627874 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 00:58:47.631475 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 00:58:47.639571 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 00:58:47.643460 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 00:58:47.649488 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 00:58:47.651678 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 00:58:47.657502 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 00:58:47.658266 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 00:58:47.668478 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 00:58:47.670817 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 00:58:47.682516 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 00:58:47.686663 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 28 00:58:47.698492 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 00:58:47.700088 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 00:58:47.701888 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 00:58:47.704909 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 00:58:47.708582 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 00:58:47.712350 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 00:58:47.719483 jq[1476]: false Jan 28 00:58:47.721110 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 00:58:47.722861 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 00:58:47.749677 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 00:58:47.749964 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 00:58:47.759430 jq[1486]: true Jan 28 00:58:47.786534 extend-filesystems[1477]: Found loop4 Jan 28 00:58:47.786534 extend-filesystems[1477]: Found loop5 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found loop6 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found loop7 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda1 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda2 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda3 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found usr Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda4 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda6 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda7 Jan 28 00:58:47.789390 extend-filesystems[1477]: Found vda9 Jan 28 00:58:47.789390 extend-filesystems[1477]: Checking size of /dev/vda9 Jan 28 00:58:47.813400 tar[1488]: linux-amd64/LICENSE Jan 28 00:58:47.813400 tar[1488]: linux-amd64/helm Jan 28 00:58:47.806235 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 00:58:47.806352 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 00:58:47.807241 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 00:58:47.816034 update_engine[1485]: I20260128 00:58:47.815477 1485 main.cc:92] Flatcar Update Engine starting Jan 28 00:58:47.829943 dbus-daemon[1475]: [system] SELinux support is enabled Jan 28 00:58:47.839776 jq[1506]: true Jan 28 00:58:47.830247 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 00:58:47.834913 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 00:58:47.834956 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 00:58:47.837728 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 00:58:47.837756 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 28 00:58:47.852658 dbus-daemon[1475]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1431 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 00:58:47.856445 update_engine[1485]: I20260128 00:58:47.856195 1485 update_check_scheduler.cc:74] Next update check in 7m7s Jan 28 00:58:47.864304 extend-filesystems[1477]: Resized partition /dev/vda9 Jan 28 00:58:47.862690 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 00:58:47.858317 systemd[1]: Started update-engine.service - Update Engine. Jan 28 00:58:47.873003 extend-filesystems[1518]: resize2fs 1.47.1 (20-May-2024) Jan 28 00:58:47.885400 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 28 00:58:47.868948 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 00:58:47.881511 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 28 00:58:47.975440 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1439) Jan 28 00:58:48.103863 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 00:58:48.103975 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 00:58:48.108567 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:58:48.108159 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 00:58:48.109362 systemd-logind[1484]: New seat seat0. Jan 28 00:58:48.109980 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 00:58:48.123371 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1520 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 00:58:48.131393 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 00:58:48.135814 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 00:58:48.141873 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 00:58:48.158557 systemd[1]: Starting polkit.service - Authorization Manager... Jan 28 00:58:48.173236 systemd[1]: Starting sshkeys.service... Jan 28 00:58:48.195548 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 00:58:48.203718 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 00:58:48.230835 polkitd[1540]: Started polkitd version 121 Jan 28 00:58:48.258214 polkitd[1540]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 00:58:48.272473 polkitd[1540]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 00:58:48.281828 polkitd[1540]: Finished loading, compiling and executing 2 rules Jan 28 00:58:48.283328 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 00:58:48.283614 systemd[1]: Started polkit.service - Authorization Manager. 
Jan 28 00:58:48.287378 polkitd[1540]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 00:58:48.318439 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 28 00:58:48.323855 systemd-hostnamed[1520]: Hostname set to (static) Jan 28 00:58:48.355368 extend-filesystems[1518]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 00:58:48.355368 extend-filesystems[1518]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 28 00:58:48.355368 extend-filesystems[1518]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 28 00:58:48.368149 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Jan 28 00:58:48.357173 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 00:58:48.358386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 28 00:58:48.417834 systemd-networkd[1431]: eth0: Gained IPv6LL Jan 28 00:58:48.423193 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 00:58:48.425982 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 00:58:48.426778 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 28 00:58:48.438786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:58:48.447825 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 00:58:48.462039 sshd_keygen[1500]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 00:58:48.475863 containerd[1501]: time="2026-01-28T00:58:48.475728059Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 00:58:48.524051 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 00:58:48.563727 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 00:58:48.574809 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 00:58:48.578296 containerd[1501]: time="2026-01-28T00:58:48.577974252Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.580418 containerd[1501]: time="2026-01-28T00:58:48.580375745Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:58:48.580529 containerd[1501]: time="2026-01-28T00:58:48.580504550Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 00:58:48.580629 containerd[1501]: time="2026-01-28T00:58:48.580605076Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.580937318Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.580979238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581109228Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581133910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581436354Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581461689Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581483445Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581501727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582005 containerd[1501]: time="2026-01-28T00:58:48.581626434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582502 containerd[1501]: time="2026-01-28T00:58:48.582474998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582724 containerd[1501]: time="2026-01-28T00:58:48.582693784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 00:58:48.582822 containerd[1501]: time="2026-01-28T00:58:48.582798529Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 00:58:48.583038 containerd[1501]: time="2026-01-28T00:58:48.583011890Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 00:58:48.583233 containerd[1501]: time="2026-01-28T00:58:48.583207596Z" level=info msg="metadata content store policy set" policy=shared Jan 28 00:58:48.595691 containerd[1501]: time="2026-01-28T00:58:48.595550291Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 00:58:48.596052 containerd[1501]: time="2026-01-28T00:58:48.595840312Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 00:58:48.596052 containerd[1501]: time="2026-01-28T00:58:48.595878446Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 00:58:48.596169 containerd[1501]: time="2026-01-28T00:58:48.595925411Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 00:58:48.597014 containerd[1501]: time="2026-01-28T00:58:48.596981124Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 00:58:48.598295 containerd[1501]: time="2026-01-28T00:58:48.598251622Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.598776344Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599009043Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599039846Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599074861Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599100349Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599121509Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599141498Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599165030Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599187179Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599209913Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599230754Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599260238Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599321143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599400 containerd[1501]: time="2026-01-28T00:58:48.599350216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599371224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599393913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599413052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599432469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599459983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599481299Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599501483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599525003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599543071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599571736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599593833Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599616834Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599655918Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599682262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.599878 containerd[1501]: time="2026-01-28T00:58:48.599701212Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599767388Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599806468Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599826659Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599848951Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599866534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599885485Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599916363Z" level=info msg="NRI interface is disabled by configuration." Jan 28 00:58:48.600440 containerd[1501]: time="2026-01-28T00:58:48.599942346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 00:58:48.606095 containerd[1501]: time="2026-01-28T00:58:48.603145178Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 00:58:48.606095 containerd[1501]: time="2026-01-28T00:58:48.604530834Z" level=info msg="Connect containerd service" Jan 28 00:58:48.606095 containerd[1501]: time="2026-01-28T00:58:48.604608352Z" level=info msg="using legacy CRI server" Jan 28 00:58:48.606095 containerd[1501]: time="2026-01-28T00:58:48.604627787Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 00:58:48.606095 containerd[1501]: time="2026-01-28T00:58:48.604800375Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 00:58:48.608146 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 00:58:48.608446 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Jan 28 00:58:48.615700 containerd[1501]: time="2026-01-28T00:58:48.613864322Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 00:58:48.616150 containerd[1501]: time="2026-01-28T00:58:48.615824384Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 00:58:48.616150 containerd[1501]: time="2026-01-28T00:58:48.615932139Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 00:58:48.616150 containerd[1501]: time="2026-01-28T00:58:48.615996553Z" level=info msg="Start subscribing containerd event" Jan 28 00:58:48.616150 containerd[1501]: time="2026-01-28T00:58:48.616072464Z" level=info msg="Start recovering state" Jan 28 00:58:48.616338 containerd[1501]: time="2026-01-28T00:58:48.616170820Z" level=info msg="Start event monitor" Jan 28 00:58:48.616338 containerd[1501]: time="2026-01-28T00:58:48.616198861Z" level=info msg="Start snapshots syncer" Jan 28 00:58:48.616338 containerd[1501]: time="2026-01-28T00:58:48.616221998Z" level=info msg="Start cni network conf syncer for default" Jan 28 00:58:48.616338 containerd[1501]: time="2026-01-28T00:58:48.616241026Z" level=info msg="Start streaming server" Jan 28 00:58:48.616457 containerd[1501]: time="2026-01-28T00:58:48.616377740Z" level=info msg="containerd successfully booted in 0.153041s" Jan 28 00:58:48.619794 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 00:58:48.620905 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 00:58:48.661868 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 00:58:48.671139 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 00:58:48.679857 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 00:58:48.681987 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 00:58:48.788890 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 00:58:48.796859 systemd[1]: Started sshd@0-10.244.8.18:22-68.220.241.50:34020.service - OpenSSH per-connection server daemon (68.220.241.50:34020). Jan 28 00:58:49.008598 tar[1488]: linux-amd64/README.md Jan 28 00:58:49.026243 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 00:58:49.399040 sshd[1592]: Accepted publickey for core from 68.220.241.50 port 34020 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:58:49.400580 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:49.419356 systemd-logind[1484]: New session 1 of user core. Jan 28 00:58:49.422321 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 28 00:58:49.432941 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 28 00:58:49.434418 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 00:58:49.435088 systemd-networkd[1431]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:204:24:19ff:fef4:812/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:204:24:19ff:fef4:812/64 assigned by NDisc. Jan 28 00:58:49.435104 systemd-networkd[1431]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Jan 28 00:58:49.458909 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 00:58:49.478989 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 00:58:49.492900 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 00:58:49.580532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:58:49.581729 (kubelet)[1612]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:58:49.642776 systemd[1601]: Queued start job for default target default.target. Jan 28 00:58:49.653001 systemd[1601]: Created slice app.slice - User Application Slice. Jan 28 00:58:49.653060 systemd[1601]: Reached target paths.target - Paths. Jan 28 00:58:49.653088 systemd[1601]: Reached target timers.target - Timers. Jan 28 00:58:49.657459 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 00:58:49.680244 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 00:58:49.681304 systemd[1601]: Reached target sockets.target - Sockets. Jan 28 00:58:49.681342 systemd[1601]: Reached target basic.target - Basic System. Jan 28 00:58:49.681419 systemd[1601]: Reached target default.target - Main User Target. Jan 28 00:58:49.681485 systemd[1601]: Startup finished in 178ms. Jan 28 00:58:49.682258 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 00:58:49.693744 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 00:58:50.121479 systemd[1]: Started sshd@1-10.244.8.18:22-68.220.241.50:34026.service - OpenSSH per-connection server daemon (68.220.241.50:34026). Jan 28 00:58:50.155244 kubelet[1612]: E0128 00:58:50.155118 1612 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:58:50.158256 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:58:50.158694 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:58:50.657885 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 28 00:58:50.678037 sshd[1622]: Accepted publickey for core from 68.220.241.50 port 34026 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:58:50.680532 sshd[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:50.688266 systemd-logind[1484]: New session 2 of user core. Jan 28 00:58:50.697713 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 28 00:58:51.084703 sshd[1622]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:51.091466 systemd[1]: sshd@1-10.244.8.18:22-68.220.241.50:34026.service: Deactivated successfully. Jan 28 00:58:51.093846 systemd[1]: session-2.scope: Deactivated successfully. Jan 28 00:58:51.094972 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. Jan 28 00:58:51.096843 systemd-logind[1484]: Removed session 2. Jan 28 00:58:51.192847 systemd[1]: Started sshd@2-10.244.8.18:22-68.220.241.50:34042.service - OpenSSH per-connection server daemon (68.220.241.50:34042). 
Jan 28 00:58:51.757822 sshd[1632]: Accepted publickey for core from 68.220.241.50 port 34042 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:58:51.760334 sshd[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:58:51.767678 systemd-logind[1484]: New session 3 of user core. Jan 28 00:58:51.776686 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 00:58:52.163642 sshd[1632]: pam_unix(sshd:session): session closed for user core Jan 28 00:58:52.168535 systemd[1]: sshd@2-10.244.8.18:22-68.220.241.50:34042.service: Deactivated successfully. Jan 28 00:58:52.170793 systemd[1]: session-3.scope: Deactivated successfully. Jan 28 00:58:52.171981 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit. Jan 28 00:58:52.173728 systemd-logind[1484]: Removed session 3. Jan 28 00:58:53.741821 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 00:58:53.742742 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 00:58:53.751579 systemd-logind[1484]: New session 5 of user core. Jan 28 00:58:53.762625 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 28 00:58:53.767376 systemd-logind[1484]: New session 4 of user core. Jan 28 00:58:53.774598 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 00:58:54.763958 coreos-metadata[1474]: Jan 28 00:58:54.763 WARN failed to locate config-drive, using the metadata service API instead Jan 28 00:58:54.790001 coreos-metadata[1474]: Jan 28 00:58:54.789 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 28 00:58:54.797164 coreos-metadata[1474]: Jan 28 00:58:54.797 INFO Fetch failed with 404: resource not found Jan 28 00:58:54.797301 coreos-metadata[1474]: Jan 28 00:58:54.797 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 00:58:54.797787 coreos-metadata[1474]: Jan 28 00:58:54.797 INFO Fetch successful Jan 28 00:58:54.797942 coreos-metadata[1474]: Jan 28 00:58:54.797 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 28 00:58:54.810202 coreos-metadata[1474]: Jan 28 00:58:54.810 INFO Fetch successful Jan 28 00:58:54.810534 coreos-metadata[1474]: Jan 28 00:58:54.810 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 28 00:58:54.825016 coreos-metadata[1474]: Jan 28 00:58:54.824 INFO Fetch successful Jan 28 00:58:54.825367 coreos-metadata[1474]: Jan 28 00:58:54.825 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 28 00:58:54.851522 coreos-metadata[1474]: Jan 28 00:58:54.851 INFO Fetch successful Jan 28 00:58:54.851522 coreos-metadata[1474]: Jan 28 00:58:54.851 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 28 00:58:54.971519 coreos-metadata[1474]: Jan 28 00:58:54.971 INFO Fetch successful Jan 28 00:58:55.017853 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 00:58:55.020479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jan 28 00:58:55.386998 coreos-metadata[1545]: Jan 28 00:58:55.386 WARN failed to locate config-drive, using the metadata service API instead Jan 28 00:58:55.409967 coreos-metadata[1545]: Jan 28 00:58:55.409 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 28 00:58:55.433319 coreos-metadata[1545]: Jan 28 00:58:55.433 INFO Fetch successful Jan 28 00:58:55.433725 coreos-metadata[1545]: Jan 28 00:58:55.433 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 00:58:55.462607 coreos-metadata[1545]: Jan 28 00:58:55.461 INFO Fetch successful Jan 28 00:58:55.466175 unknown[1545]: wrote ssh authorized keys file for user: core Jan 28 00:58:55.491664 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Jan 28 00:58:55.492422 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 00:58:55.495197 systemd[1]: Finished sshkeys.service. Jan 28 00:58:55.498200 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 00:58:55.498796 systemd[1]: Startup finished in 1.458s (kernel) + 15.924s (initrd) + 11.873s (userspace) = 29.256s. Jan 28 00:59:00.171101 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 00:59:00.182599 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:00.381507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:00.384767 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:00.465216 kubelet[1686]: E0128 00:59:00.464427 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:00.471897 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:00.472191 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:02.267642 systemd[1]: Started sshd@3-10.244.8.18:22-68.220.241.50:52968.service - OpenSSH per-connection server daemon (68.220.241.50:52968). Jan 28 00:59:02.862520 sshd[1694]: Accepted publickey for core from 68.220.241.50 port 52968 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:02.865525 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:02.873591 systemd-logind[1484]: New session 6 of user core. Jan 28 00:59:02.880679 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 00:59:03.270498 sshd[1694]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:03.275849 systemd[1]: sshd@3-10.244.8.18:22-68.220.241.50:52968.service: Deactivated successfully. Jan 28 00:59:03.278135 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 00:59:03.279102 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Jan 28 00:59:03.280775 systemd-logind[1484]: Removed session 6. Jan 28 00:59:03.373741 systemd[1]: Started sshd@4-10.244.8.18:22-68.220.241.50:39476.service - OpenSSH per-connection server daemon (68.220.241.50:39476). 
Jan 28 00:59:03.954067 sshd[1701]: Accepted publickey for core from 68.220.241.50 port 39476 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:03.956416 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:03.965209 systemd-logind[1484]: New session 7 of user core. Jan 28 00:59:03.976622 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 28 00:59:04.354232 sshd[1701]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:04.360567 systemd[1]: sshd@4-10.244.8.18:22-68.220.241.50:39476.service: Deactivated successfully. Jan 28 00:59:04.362853 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 00:59:04.363929 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Jan 28 00:59:04.365426 systemd-logind[1484]: Removed session 7. Jan 28 00:59:04.458694 systemd[1]: Started sshd@5-10.244.8.18:22-68.220.241.50:39488.service - OpenSSH per-connection server daemon (68.220.241.50:39488). Jan 28 00:59:05.032881 sshd[1708]: Accepted publickey for core from 68.220.241.50 port 39488 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:05.035170 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:05.044423 systemd-logind[1484]: New session 8 of user core. Jan 28 00:59:05.047518 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 00:59:05.440004 sshd[1708]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:05.445052 systemd[1]: sshd@5-10.244.8.18:22-68.220.241.50:39488.service: Deactivated successfully. Jan 28 00:59:05.447715 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 00:59:05.449886 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Jan 28 00:59:05.451463 systemd-logind[1484]: Removed session 8. Jan 28 00:59:05.550987 systemd[1]: Started sshd@6-10.244.8.18:22-68.220.241.50:39504.service - OpenSSH per-connection server daemon (68.220.241.50:39504). Jan 28 00:59:06.115593 sshd[1715]: Accepted publickey for core from 68.220.241.50 port 39504 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:06.118351 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:06.128180 systemd-logind[1484]: New session 9 of user core. Jan 28 00:59:06.137548 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 00:59:06.442878 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 00:59:06.443422 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:06.467549 sudo[1718]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:06.557748 sshd[1715]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:06.562797 systemd[1]: sshd@6-10.244.8.18:22-68.220.241.50:39504.service: Deactivated successfully. Jan 28 00:59:06.565529 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 00:59:06.567346 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. Jan 28 00:59:06.568974 systemd-logind[1484]: Removed session 9. Jan 28 00:59:06.658417 systemd[1]: Started sshd@7-10.244.8.18:22-68.220.241.50:39508.service - OpenSSH per-connection server daemon (68.220.241.50:39508). 
Jan 28 00:59:07.233926 sshd[1723]: Accepted publickey for core from 68.220.241.50 port 39508 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:07.236592 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:07.244156 systemd-logind[1484]: New session 10 of user core. Jan 28 00:59:07.255624 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 28 00:59:07.552212 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 00:59:07.552733 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:07.558815 sudo[1727]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:07.567634 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 00:59:07.568108 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:07.591094 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 00:59:07.593733 auditctl[1730]: No rules Jan 28 00:59:07.595633 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 00:59:07.596013 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 00:59:07.602713 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 00:59:07.651500 augenrules[1748]: No rules Jan 28 00:59:07.652493 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 00:59:07.654057 sudo[1726]: pam_unix(sudo:session): session closed for user root Jan 28 00:59:07.744510 sshd[1723]: pam_unix(sshd:session): session closed for user core Jan 28 00:59:07.750116 systemd[1]: sshd@7-10.244.8.18:22-68.220.241.50:39508.service: Deactivated successfully. Jan 28 00:59:07.752576 systemd[1]: session-10.scope: Deactivated successfully. Jan 28 00:59:07.753701 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Jan 28 00:59:07.755076 systemd-logind[1484]: Removed session 10. Jan 28 00:59:07.852678 systemd[1]: Started sshd@8-10.244.8.18:22-68.220.241.50:39524.service - OpenSSH per-connection server daemon (68.220.241.50:39524). Jan 28 00:59:08.437806 sshd[1756]: Accepted publickey for core from 68.220.241.50 port 39524 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 00:59:08.440099 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 00:59:08.447600 systemd-logind[1484]: New session 11 of user core. Jan 28 00:59:08.454539 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 28 00:59:08.760709 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 00:59:08.761189 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 00:59:09.228179 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 00:59:09.228622 (dockerd)[1774]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 00:59:09.689316 dockerd[1774]: time="2026-01-28T00:59:09.689150752Z" level=info msg="Starting up" Jan 28 00:59:09.853081 dockerd[1774]: time="2026-01-28T00:59:09.852165812Z" level=info msg="Loading containers: start." 
Jan 28 00:59:10.008811 kernel: Initializing XFRM netlink socket Jan 28 00:59:10.043003 systemd-timesyncd[1406]: Network configuration changed, trying to establish connection. Jan 28 00:59:10.112610 systemd-networkd[1431]: docker0: Link UP Jan 28 00:59:10.132330 dockerd[1774]: time="2026-01-28T00:59:10.132192895Z" level=info msg="Loading containers: done." Jan 28 00:59:10.152982 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4189912612-merged.mount: Deactivated successfully. Jan 28 00:59:10.153796 dockerd[1774]: time="2026-01-28T00:59:10.153596664Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 00:59:10.153890 dockerd[1774]: time="2026-01-28T00:59:10.153822329Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 00:59:10.154499 dockerd[1774]: time="2026-01-28T00:59:10.153995677Z" level=info msg="Daemon has completed initialization" Jan 28 00:59:10.192602 dockerd[1774]: time="2026-01-28T00:59:10.192514049Z" level=info msg="API listen on /run/docker.sock" Jan 28 00:59:10.192851 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 00:59:10.670989 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 00:59:10.681604 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:10.867550 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:10.870407 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:10.944381 kubelet[1922]: E0128 00:59:10.944107 1922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:10.947991 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:10.948217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:10.994448 systemd-timesyncd[1406]: Contacted time server [2a03:b0c0:1:d0::1f9:f001]:123 (2.flatcar.pool.ntp.org). Jan 28 00:59:10.994563 systemd-timesyncd[1406]: Initial clock synchronization to Wed 2026-01-28 00:59:11.176140 UTC. Jan 28 00:59:11.574811 containerd[1501]: time="2026-01-28T00:59:11.574650479Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 28 00:59:12.561497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount709389858.mount: Deactivated successfully. 
Jan 28 00:59:14.430267 containerd[1501]: time="2026-01-28T00:59:14.429536749Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:14.430267 containerd[1501]: time="2026-01-28T00:59:14.431390091Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Jan 28 00:59:14.436325 containerd[1501]: time="2026-01-28T00:59:14.435681753Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:14.438815 containerd[1501]: time="2026-01-28T00:59:14.437755732Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.862965063s" Jan 28 00:59:14.438815 containerd[1501]: time="2026-01-28T00:59:14.437854675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 28 00:59:14.440874 containerd[1501]: time="2026-01-28T00:59:14.440837879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:14.442092 containerd[1501]: time="2026-01-28T00:59:14.442054662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 28 00:59:16.592218 containerd[1501]: time="2026-01-28T00:59:16.590489550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:16.592218 containerd[1501]: time="2026-01-28T00:59:16.592160604Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448" Jan 28 00:59:16.593094 containerd[1501]: time="2026-01-28T00:59:16.593056419Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:16.596885 containerd[1501]: time="2026-01-28T00:59:16.596833400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:16.598759 containerd[1501]: time="2026-01-28T00:59:16.598720030Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 2.156615832s" Jan 28 00:59:16.598963 containerd[1501]: time="2026-01-28T00:59:16.598932112Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 28 00:59:16.604793 
containerd[1501]: time="2026-01-28T00:59:16.604721096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 28 00:59:18.179598 containerd[1501]: time="2026-01-28T00:59:18.179302113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:18.182136 containerd[1501]: time="2026-01-28T00:59:18.181766229Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 28 00:59:18.183486 containerd[1501]: time="2026-01-28T00:59:18.183112767Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:18.189207 containerd[1501]: time="2026-01-28T00:59:18.189149676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:18.190803 containerd[1501]: time="2026-01-28T00:59:18.190762104Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.585973756s" Jan 28 00:59:18.190947 containerd[1501]: time="2026-01-28T00:59:18.190918816Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 28 00:59:18.193041 containerd[1501]: time="2026-01-28T00:59:18.192976564Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 28 00:59:19.454572 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 00:59:19.868831 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3552400098.mount: Deactivated successfully. 
Jan 28 00:59:20.366644 containerd[1501]: time="2026-01-28T00:59:20.366555333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:20.367879 containerd[1501]: time="2026-01-28T00:59:20.367810507Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 28 00:59:20.369156 containerd[1501]: time="2026-01-28T00:59:20.368741693Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:20.371596 containerd[1501]: time="2026-01-28T00:59:20.371551398Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:20.372803 containerd[1501]: time="2026-01-28T00:59:20.372750381Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 2.179714729s" Jan 28 00:59:20.372908 containerd[1501]: time="2026-01-28T00:59:20.372838728Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 28 00:59:20.374561 containerd[1501]: time="2026-01-28T00:59:20.374518766Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 28 00:59:20.986327 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 00:59:20.996606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:21.016279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3147447495.mount: Deactivated successfully. Jan 28 00:59:21.290185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:21.305912 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:21.414207 kubelet[2023]: E0128 00:59:21.414045 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:21.418260 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:21.418588 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 00:59:22.850187 containerd[1501]: time="2026-01-28T00:59:22.850031271Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:22.853084 containerd[1501]: time="2026-01-28T00:59:22.852645698Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 28 00:59:22.853084 containerd[1501]: time="2026-01-28T00:59:22.853015642Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:22.858108 containerd[1501]: time="2026-01-28T00:59:22.858022777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:22.860315 containerd[1501]: time="2026-01-28T00:59:22.859869744Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.485120667s" Jan 28 00:59:22.860315 containerd[1501]: time="2026-01-28T00:59:22.859930927Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 28 00:59:22.862945 containerd[1501]: time="2026-01-28T00:59:22.862914584Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 28 00:59:23.394795 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050850192.mount: Deactivated successfully. 
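
The transient mounts containerd creates under /var/lib/containerd/tmpmounts show up with \x2d in their unit names because systemd escapes paths when deriving unit names: the leading "/" is dropped, the remaining "/" separators become "-", and any byte that is not alphanumeric (or ":", "_", ".") is hex-escaped, which turns the literal "-" in "containerd-mountNNN" into \x2d. A simplified sketch of that convention (see systemd-escape --path for the authoritative behaviour; leading-dot handling and other corner cases are ignored here):

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified sketch of systemd's path-to-unit-name escaping.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z', c >= '0' && c <= '9',
			c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the unit name deactivated in the entry above.
	fmt.Println(escapePath("/var/lib/containerd/tmpmounts/containerd-mount2050850192") + ".mount")
}
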
Jan 28 00:59:23.401695 containerd[1501]: time="2026-01-28T00:59:23.401559721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:23.403943 containerd[1501]: time="2026-01-28T00:59:23.403845072Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 28 00:59:23.405160 containerd[1501]: time="2026-01-28T00:59:23.405077236Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:23.408081 containerd[1501]: time="2026-01-28T00:59:23.407998389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:23.410097 containerd[1501]: time="2026-01-28T00:59:23.409156897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 545.967521ms" Jan 28 00:59:23.410097 containerd[1501]: time="2026-01-28T00:59:23.409212096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 28 00:59:23.411726 containerd[1501]: time="2026-01-28T00:59:23.411423126Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 28 00:59:24.036620 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2839672204.mount: Deactivated successfully. Jan 28 00:59:28.644487 containerd[1501]: time="2026-01-28T00:59:28.644243096Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:28.647449 containerd[1501]: time="2026-01-28T00:59:28.647339509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 28 00:59:28.650308 containerd[1501]: time="2026-01-28T00:59:28.648993533Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:28.655370 containerd[1501]: time="2026-01-28T00:59:28.655321796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:28.656567 containerd[1501]: time="2026-01-28T00:59:28.656528990Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.245051342s" Jan 28 00:59:28.657426 containerd[1501]: time="2026-01-28T00:59:28.657379695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 28 00:59:31.422353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
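
systemd is now on its fourth attempt to start the kubelet. The spacing between successive "Scheduled restart job" entries can be read straight off the timestamps; a small Go computation over the three occurrences above, using only the logged times:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps of the "kubelet.service: Scheduled restart job" entries
	// for restart counters 2, 3 and 4, copied from the log.
	stamps := []string{"00:59:10.670989", "00:59:20.986327", "00:59:31.422353"}

	var prev time.Time
	for i, s := range stamps {
		t, err := time.Parse("15:04:05", s) // fractional seconds are accepted when parsing
		if err != nil {
			panic(err)
		}
		if i > 0 {
			fmt.Printf("restart %d -> %d: %v apart\n", i+1, i+2, t.Sub(prev).Round(time.Millisecond))
		}
		prev = t
	}
	// Both gaps come out a little over 10s: the unit's RestartSec (not shown
	// in this log, so an assumption) plus the fraction of a second the
	// failing kubelet runs before exiting.
}
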
Jan 28 00:59:31.434644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:31.618546 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:31.630159 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 00:59:31.749453 kubelet[2162]: E0128 00:59:31.748039 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 00:59:31.751655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 00:59:31.752531 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 00:59:33.214365 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:33.224668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:33.257153 update_engine[1485]: I20260128 00:59:33.256867 1485 update_attempter.cc:509] Updating boot flags... Jan 28 00:59:33.268515 systemd[1]: Reloading requested from client PID 2176 ('systemctl') (unit session-11.scope)... Jan 28 00:59:33.268561 systemd[1]: Reloading... Jan 28 00:59:33.449355 zram_generator::config[2222]: No configuration found. Jan 28 00:59:33.525320 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2252) Jan 28 00:59:33.635325 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2257) Jan 28 00:59:33.677462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:59:33.710344 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2257) Jan 28 00:59:33.802086 systemd[1]: Reloading finished in 532 ms. Jan 28 00:59:33.883463 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:59:33.885011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:33.949514 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:33.958386 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:59:33.959229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:33.971325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:34.140438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:34.152062 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:59:34.260715 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 00:59:34.260715 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 28 00:59:34.265354 kubelet[2305]: I0128 00:59:34.260528 2305 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:59:35.047667 kubelet[2305]: I0128 00:59:35.047595 2305 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:59:35.047667 kubelet[2305]: I0128 00:59:35.047648 2305 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:59:35.047904 kubelet[2305]: I0128 00:59:35.047707 2305 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:59:35.047904 kubelet[2305]: I0128 00:59:35.047724 2305 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:59:35.048128 kubelet[2305]: I0128 00:59:35.048093 2305 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:59:35.072788 kubelet[2305]: E0128 00:59:35.072700 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.8.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 00:59:35.074488 kubelet[2305]: I0128 00:59:35.074212 2305 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:59:35.090999 kubelet[2305]: E0128 00:59:35.090929 2305 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:59:35.091168 kubelet[2305]: I0128 00:59:35.091026 2305 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 00:59:35.104525 kubelet[2305]: I0128 00:59:35.104471 2305 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 00:59:35.105610 kubelet[2305]: I0128 00:59:35.105548 2305 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:59:35.107222 kubelet[2305]: I0128 00:59:35.105597 2305 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-8h12l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:59:35.107222 kubelet[2305]: I0128 00:59:35.107215 2305 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:59:35.107588 kubelet[2305]: I0128 00:59:35.107233 2305 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:59:35.107588 kubelet[2305]: I0128 00:59:35.107410 2305 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:59:35.110686 kubelet[2305]: I0128 00:59:35.110347 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:35.112076 kubelet[2305]: I0128 00:59:35.112052 2305 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:59:35.112250 kubelet[2305]: I0128 00:59:35.112228 2305 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:59:35.112443 kubelet[2305]: I0128 00:59:35.112422 2305 kubelet.go:387] "Adding apiserver pod source" Jan 28 00:59:35.112583 kubelet[2305]: I0128 00:59:35.112564 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:59:35.115353 kubelet[2305]: E0128 00:59:35.115318 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.8.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8h12l.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:59:35.115634 kubelet[2305]: E0128 00:59:35.115603 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.244.8.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:59:35.116502 kubelet[2305]: I0128 00:59:35.116305 2305 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:59:35.119424 kubelet[2305]: I0128 00:59:35.119397 2305 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:59:35.119569 kubelet[2305]: I0128 00:59:35.119549 2305 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 00:59:35.122453 kubelet[2305]: W0128 00:59:35.122428 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 00:59:35.132314 kubelet[2305]: I0128 00:59:35.131342 2305 server.go:1262] "Started kubelet" Jan 28 00:59:35.137225 kubelet[2305]: E0128 00:59:35.135694 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.8.18:6443/api/v1/namespaces/default/events\": dial tcp 10.244.8.18:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-8h12l.gb1.brightbox.com.188ebf3da7bbfb0e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-8h12l.gb1.brightbox.com,UID:srv-8h12l.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-8h12l.gb1.brightbox.com,},FirstTimestamp:2026-01-28 00:59:35.131208462 +0000 UTC m=+0.969966474,LastTimestamp:2026-01-28 00:59:35.131208462 +0000 UTC m=+0.969966474,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-8h12l.gb1.brightbox.com,}" Jan 28 00:59:35.138599 kubelet[2305]: I0128 00:59:35.138547 2305 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:59:35.138683 kubelet[2305]: I0128 00:59:35.138632 2305 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:59:35.139108 kubelet[2305]: I0128 00:59:35.139083 2305 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:59:35.139413 kubelet[2305]: I0128 00:59:35.139391 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:59:35.142427 kubelet[2305]: I0128 00:59:35.141325 2305 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:59:35.151312 kubelet[2305]: I0128 00:59:35.150679 2305 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:59:35.153624 kubelet[2305]: I0128 00:59:35.152574 2305 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:59:35.153624 kubelet[2305]: I0128 00:59:35.152912 2305 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:59:35.153624 kubelet[2305]: E0128 00:59:35.153202 2305 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-8h12l.gb1.brightbox.com\" not found" Jan 28 
00:59:35.158147 kubelet[2305]: I0128 00:59:35.158061 2305 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:59:35.158305 kubelet[2305]: I0128 00:59:35.158218 2305 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:59:35.160311 kubelet[2305]: I0128 00:59:35.160269 2305 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:59:35.160523 kubelet[2305]: I0128 00:59:35.160503 2305 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:59:35.163390 kubelet[2305]: E0128 00:59:35.163361 2305 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:59:35.163756 kubelet[2305]: E0128 00:59:35.163725 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.8.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:59:35.164020 kubelet[2305]: E0128 00:59:35.163982 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8h12l.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.18:6443: connect: connection refused" interval="200ms" Jan 28 00:59:35.166670 kubelet[2305]: I0128 00:59:35.166629 2305 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:59:35.191776 kubelet[2305]: I0128 00:59:35.191744 2305 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:59:35.192060 kubelet[2305]: I0128 00:59:35.191833 2305 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:59:35.192060 kubelet[2305]: I0128 00:59:35.191872 2305 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:35.195058 kubelet[2305]: I0128 00:59:35.194649 2305 policy_none.go:49] "None policy: Start" Jan 28 00:59:35.195058 kubelet[2305]: I0128 00:59:35.194688 2305 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:59:35.195058 kubelet[2305]: I0128 00:59:35.194714 2305 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:59:35.197330 kubelet[2305]: I0128 00:59:35.196466 2305 policy_none.go:47] "Start" Jan 28 00:59:35.200786 kubelet[2305]: I0128 00:59:35.200756 2305 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:59:35.203137 kubelet[2305]: I0128 00:59:35.203112 2305 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:59:35.203326 kubelet[2305]: I0128 00:59:35.203274 2305 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:59:35.206409 kubelet[2305]: I0128 00:59:35.206386 2305 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:59:35.206617 kubelet[2305]: E0128 00:59:35.206578 2305 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:59:35.209550 kubelet[2305]: E0128 00:59:35.209520 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.8.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:59:35.213680 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 28 00:59:35.232001 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 28 00:59:35.243561 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 28 00:59:35.245899 kubelet[2305]: E0128 00:59:35.245726 2305 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:59:35.246333 kubelet[2305]: I0128 00:59:35.246271 2305 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:59:35.246420 kubelet[2305]: I0128 00:59:35.246316 2305 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:59:35.249748 kubelet[2305]: E0128 00:59:35.249713 2305 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 28 00:59:35.249843 kubelet[2305]: E0128 00:59:35.249799 2305 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-8h12l.gb1.brightbox.com\" not found" Jan 28 00:59:35.250111 kubelet[2305]: I0128 00:59:35.249996 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:59:35.328772 systemd[1]: Created slice kubepods-burstable-podb5f3743e28222cbefc17b449981b037f.slice - libcontainer container kubepods-burstable-podb5f3743e28222cbefc17b449981b037f.slice. Jan 28 00:59:35.343050 kubelet[2305]: E0128 00:59:35.342996 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.348852 systemd[1]: Created slice kubepods-burstable-podb54587e432e96a8f7012ebeb7819fdb8.slice - libcontainer container kubepods-burstable-podb54587e432e96a8f7012ebeb7819fdb8.slice. 
Jan 28 00:59:35.352798 kubelet[2305]: E0128 00:59:35.352763 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.353564 kubelet[2305]: I0128 00:59:35.353533 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.354372 kubelet[2305]: E0128 00:59:35.354151 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.18:6443/api/v1/nodes\": dial tcp 10.244.8.18:6443: connect: connection refused" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.358022 systemd[1]: Created slice kubepods-burstable-podb1725913ba562f5c72b4a471c16d6707.slice - libcontainer container kubepods-burstable-podb1725913ba562f5c72b4a471c16d6707.slice. Jan 28 00:59:35.362597 kubelet[2305]: E0128 00:59:35.362565 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.365033 kubelet[2305]: E0128 00:59:35.364976 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8h12l.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.18:6443: connect: connection refused" interval="400ms" Jan 28 00:59:35.461826 kubelet[2305]: I0128 00:59:35.461770 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1725913ba562f5c72b4a471c16d6707-kubeconfig\") pod \"kube-scheduler-srv-8h12l.gb1.brightbox.com\" (UID: \"b1725913ba562f5c72b4a471c16d6707\") " pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462492 kubelet[2305]: I0128 00:59:35.461950 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-k8s-certs\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: \"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462492 kubelet[2305]: I0128 00:59:35.461998 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: \"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462492 kubelet[2305]: I0128 00:59:35.462091 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462492 kubelet[2305]: I0128 00:59:35.462135 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-ca-certs\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: 
\"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462492 kubelet[2305]: I0128 00:59:35.462167 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-ca-certs\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462901 kubelet[2305]: I0128 00:59:35.462193 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-flexvolume-dir\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462901 kubelet[2305]: I0128 00:59:35.462231 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-k8s-certs\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.462901 kubelet[2305]: I0128 00:59:35.462259 2305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-kubeconfig\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.557860 kubelet[2305]: I0128 00:59:35.557819 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.558272 kubelet[2305]: E0128 00:59:35.558232 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.18:6443/api/v1/nodes\": dial tcp 10.244.8.18:6443: connect: connection refused" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.647039 containerd[1501]: time="2026-01-28T00:59:35.646857864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-8h12l.gb1.brightbox.com,Uid:b5f3743e28222cbefc17b449981b037f,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:35.666111 containerd[1501]: time="2026-01-28T00:59:35.665996515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-8h12l.gb1.brightbox.com,Uid:b54587e432e96a8f7012ebeb7819fdb8,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:35.668391 containerd[1501]: time="2026-01-28T00:59:35.668052138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-8h12l.gb1.brightbox.com,Uid:b1725913ba562f5c72b4a471c16d6707,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:35.766545 kubelet[2305]: E0128 00:59:35.766472 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8h12l.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.18:6443: connect: connection refused" interval="800ms" Jan 28 00:59:35.937487 kubelet[2305]: E0128 00:59:35.937367 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.244.8.18:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 28 00:59:35.961560 kubelet[2305]: I0128 00:59:35.961156 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:35.961696 kubelet[2305]: E0128 00:59:35.961598 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.18:6443/api/v1/nodes\": dial tcp 10.244.8.18:6443: connect: connection refused" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:36.184129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1971047618.mount: Deactivated successfully. Jan 28 00:59:36.195236 containerd[1501]: time="2026-01-28T00:59:36.195044311Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:59:36.196769 containerd[1501]: time="2026-01-28T00:59:36.196708494Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:59:36.198020 containerd[1501]: time="2026-01-28T00:59:36.197955736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:59:36.198682 containerd[1501]: time="2026-01-28T00:59:36.198631061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 28 00:59:36.201302 containerd[1501]: time="2026-01-28T00:59:36.199808946Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:59:36.201302 containerd[1501]: time="2026-01-28T00:59:36.201197585Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 00:59:36.203562 containerd[1501]: time="2026-01-28T00:59:36.203529443Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:59:36.206838 containerd[1501]: time="2026-01-28T00:59:36.206793607Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 559.727749ms" Jan 28 00:59:36.208087 containerd[1501]: time="2026-01-28T00:59:36.208011937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 00:59:36.211592 containerd[1501]: time="2026-01-28T00:59:36.211526785Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 545.066938ms" Jan 28 00:59:36.212530 containerd[1501]: time="2026-01-28T00:59:36.212483222Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.319093ms" Jan 28 00:59:36.288839 kubelet[2305]: E0128 00:59:36.288203 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.244.8.18:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 28 00:59:36.325339 kubelet[2305]: E0128 00:59:36.325260 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.244.8.18:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-8h12l.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 28 00:59:36.434965 containerd[1501]: time="2026-01-28T00:59:36.434826761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:36.436676 containerd[1501]: time="2026-01-28T00:59:36.433375710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:36.436676 containerd[1501]: time="2026-01-28T00:59:36.436403686Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:36.436676 containerd[1501]: time="2026-01-28T00:59:36.436425327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.436676 containerd[1501]: time="2026-01-28T00:59:36.436579504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.436676 containerd[1501]: time="2026-01-28T00:59:36.435760697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:36.439971 containerd[1501]: time="2026-01-28T00:59:36.439711929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.439971 containerd[1501]: time="2026-01-28T00:59:36.439830555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.442587 containerd[1501]: time="2026-01-28T00:59:36.442497342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:36.442758 containerd[1501]: time="2026-01-28T00:59:36.442567735Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:36.443003 containerd[1501]: time="2026-01-28T00:59:36.442864396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.447072 containerd[1501]: time="2026-01-28T00:59:36.444349807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:36.485531 systemd[1]: Started cri-containerd-20fe7c40beefa685246225acc3588bc9973cc57ab2f9079c62594688a1a83cf0.scope - libcontainer container 20fe7c40beefa685246225acc3588bc9973cc57ab2f9079c62594688a1a83cf0. Jan 28 00:59:36.499520 systemd[1]: Started cri-containerd-77abbbcaf56a8bc4c2907a7113d3130ebbd989d543cd65a60ac02451caa94e4e.scope - libcontainer container 77abbbcaf56a8bc4c2907a7113d3130ebbd989d543cd65a60ac02451caa94e4e. Jan 28 00:59:36.502520 systemd[1]: Started cri-containerd-a1665226c2af4c2220b2bbaa840dbc9d89b7f9272cc7ee6613ee831bed0d6f9c.scope - libcontainer container a1665226c2af4c2220b2bbaa840dbc9d89b7f9272cc7ee6613ee831bed0d6f9c. Jan 28 00:59:36.568205 kubelet[2305]: E0128 00:59:36.567991 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.8.18:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-8h12l.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.8.18:6443: connect: connection refused" interval="1.6s" Jan 28 00:59:36.596320 containerd[1501]: time="2026-01-28T00:59:36.595607765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-8h12l.gb1.brightbox.com,Uid:b54587e432e96a8f7012ebeb7819fdb8,Namespace:kube-system,Attempt:0,} returns sandbox id \"20fe7c40beefa685246225acc3588bc9973cc57ab2f9079c62594688a1a83cf0\"" Jan 28 00:59:36.614209 kubelet[2305]: E0128 00:59:36.614139 2305 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.244.8.18:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 28 00:59:36.615876 containerd[1501]: time="2026-01-28T00:59:36.615832275Z" level=info msg="CreateContainer within sandbox \"20fe7c40beefa685246225acc3588bc9973cc57ab2f9079c62594688a1a83cf0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 00:59:36.618763 containerd[1501]: time="2026-01-28T00:59:36.618712074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-8h12l.gb1.brightbox.com,Uid:b5f3743e28222cbefc17b449981b037f,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1665226c2af4c2220b2bbaa840dbc9d89b7f9272cc7ee6613ee831bed0d6f9c\"" Jan 28 00:59:36.627157 containerd[1501]: time="2026-01-28T00:59:36.627008206Z" level=info msg="CreateContainer within sandbox \"a1665226c2af4c2220b2bbaa840dbc9d89b7f9272cc7ee6613ee831bed0d6f9c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 00:59:36.629358 containerd[1501]: time="2026-01-28T00:59:36.629237291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-8h12l.gb1.brightbox.com,Uid:b1725913ba562f5c72b4a471c16d6707,Namespace:kube-system,Attempt:0,} returns sandbox id \"77abbbcaf56a8bc4c2907a7113d3130ebbd989d543cd65a60ac02451caa94e4e\"" Jan 28 00:59:36.640946 containerd[1501]: time="2026-01-28T00:59:36.640907449Z" level=info msg="CreateContainer within sandbox 
\"77abbbcaf56a8bc4c2907a7113d3130ebbd989d543cd65a60ac02451caa94e4e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 00:59:36.676233 containerd[1501]: time="2026-01-28T00:59:36.676151280Z" level=info msg="CreateContainer within sandbox \"20fe7c40beefa685246225acc3588bc9973cc57ab2f9079c62594688a1a83cf0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e27d5ebdb37035ffbe8a780537e32087139d7b2d019b2d5666ce04ad912f24a0\"" Jan 28 00:59:36.677516 containerd[1501]: time="2026-01-28T00:59:36.677480264Z" level=info msg="StartContainer for \"e27d5ebdb37035ffbe8a780537e32087139d7b2d019b2d5666ce04ad912f24a0\"" Jan 28 00:59:36.694449 containerd[1501]: time="2026-01-28T00:59:36.694211478Z" level=info msg="CreateContainer within sandbox \"a1665226c2af4c2220b2bbaa840dbc9d89b7f9272cc7ee6613ee831bed0d6f9c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"21166a9716113fd3603cfc7328fcd08b8a73bfe20c0f44f9467a95df4fa6011b\"" Jan 28 00:59:36.696451 containerd[1501]: time="2026-01-28T00:59:36.695131501Z" level=info msg="StartContainer for \"21166a9716113fd3603cfc7328fcd08b8a73bfe20c0f44f9467a95df4fa6011b\"" Jan 28 00:59:36.721490 systemd[1]: Started cri-containerd-e27d5ebdb37035ffbe8a780537e32087139d7b2d019b2d5666ce04ad912f24a0.scope - libcontainer container e27d5ebdb37035ffbe8a780537e32087139d7b2d019b2d5666ce04ad912f24a0. Jan 28 00:59:36.723968 containerd[1501]: time="2026-01-28T00:59:36.723926612Z" level=info msg="CreateContainer within sandbox \"77abbbcaf56a8bc4c2907a7113d3130ebbd989d543cd65a60ac02451caa94e4e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6e6f08d726bcd96a64718f40905a813fc72a2c8d546a17bedcf26e6478d8e48d\"" Jan 28 00:59:36.726683 containerd[1501]: time="2026-01-28T00:59:36.726644110Z" level=info msg="StartContainer for \"6e6f08d726bcd96a64718f40905a813fc72a2c8d546a17bedcf26e6478d8e48d\"" Jan 28 00:59:36.746924 systemd[1]: Started cri-containerd-21166a9716113fd3603cfc7328fcd08b8a73bfe20c0f44f9467a95df4fa6011b.scope - libcontainer container 21166a9716113fd3603cfc7328fcd08b8a73bfe20c0f44f9467a95df4fa6011b. Jan 28 00:59:36.765803 kubelet[2305]: I0128 00:59:36.765725 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:36.766190 kubelet[2305]: E0128 00:59:36.766121 2305 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.8.18:6443/api/v1/nodes\": dial tcp 10.244.8.18:6443: connect: connection refused" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:36.778627 systemd[1]: Started cri-containerd-6e6f08d726bcd96a64718f40905a813fc72a2c8d546a17bedcf26e6478d8e48d.scope - libcontainer container 6e6f08d726bcd96a64718f40905a813fc72a2c8d546a17bedcf26e6478d8e48d. 
Jan 28 00:59:36.860310 containerd[1501]: time="2026-01-28T00:59:36.860034298Z" level=info msg="StartContainer for \"e27d5ebdb37035ffbe8a780537e32087139d7b2d019b2d5666ce04ad912f24a0\" returns successfully" Jan 28 00:59:36.872208 containerd[1501]: time="2026-01-28T00:59:36.872113184Z" level=info msg="StartContainer for \"21166a9716113fd3603cfc7328fcd08b8a73bfe20c0f44f9467a95df4fa6011b\" returns successfully" Jan 28 00:59:36.894344 containerd[1501]: time="2026-01-28T00:59:36.894167415Z" level=info msg="StartContainer for \"6e6f08d726bcd96a64718f40905a813fc72a2c8d546a17bedcf26e6478d8e48d\" returns successfully" Jan 28 00:59:37.076605 kubelet[2305]: E0128 00:59:37.076434 2305 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.244.8.18:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.8.18:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 28 00:59:37.221906 kubelet[2305]: E0128 00:59:37.221096 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:37.226462 kubelet[2305]: E0128 00:59:37.226249 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:37.230271 kubelet[2305]: E0128 00:59:37.230246 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:38.236782 kubelet[2305]: E0128 00:59:38.236347 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:38.236782 kubelet[2305]: E0128 00:59:38.236440 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:38.370588 kubelet[2305]: I0128 00:59:38.370521 2305 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:39.237090 kubelet[2305]: E0128 00:59:39.237045 2305 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:39.887784 kubelet[2305]: E0128 00:59:39.887665 2305 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-8h12l.gb1.brightbox.com\" not found" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.009073 kubelet[2305]: I0128 00:59:40.009019 2305 kubelet_node_status.go:78] "Successfully registered node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.009073 kubelet[2305]: E0128 00:59:40.009077 2305 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"srv-8h12l.gb1.brightbox.com\": node \"srv-8h12l.gb1.brightbox.com\" not found" Jan 28 00:59:40.070658 kubelet[2305]: I0128 00:59:40.070605 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.081441 
kubelet[2305]: E0128 00:59:40.081148 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-8h12l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.081441 kubelet[2305]: I0128 00:59:40.081189 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.083538 kubelet[2305]: E0128 00:59:40.083503 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.083538 kubelet[2305]: I0128 00:59:40.083537 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.086630 kubelet[2305]: E0128 00:59:40.086571 2305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.117875 kubelet[2305]: I0128 00:59:40.117300 2305 apiserver.go:52] "Watching apiserver" Jan 28 00:59:40.162308 kubelet[2305]: I0128 00:59:40.162135 2305 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:59:40.761968 kubelet[2305]: I0128 00:59:40.761414 2305 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:40.771816 kubelet[2305]: I0128 00:59:40.771760 2305 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:59:42.194318 systemd[1]: Reloading requested from client PID 2589 ('systemctl') (unit session-11.scope)... Jan 28 00:59:42.194998 systemd[1]: Reloading... Jan 28 00:59:42.333332 zram_generator::config[2634]: No configuration found. Jan 28 00:59:42.523078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 00:59:42.654842 systemd[1]: Reloading finished in 458 ms. Jan 28 00:59:42.719644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:42.720261 kubelet[2305]: I0128 00:59:42.719811 2305 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:59:42.738098 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 00:59:42.738548 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:42.738674 systemd[1]: kubelet.service: Consumed 1.529s CPU time, 128.1M memory peak, 0B memory swap peak. Jan 28 00:59:42.747941 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 00:59:43.011917 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 00:59:43.024856 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 00:59:43.138813 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 28 00:59:43.138813 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 00:59:43.138813 kubelet[2692]: I0128 00:59:43.138400 2692 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 00:59:43.165327 kubelet[2692]: I0128 00:59:43.164344 2692 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 28 00:59:43.165327 kubelet[2692]: I0128 00:59:43.164388 2692 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 00:59:43.165327 kubelet[2692]: I0128 00:59:43.164444 2692 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 28 00:59:43.165327 kubelet[2692]: I0128 00:59:43.164462 2692 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 28 00:59:43.165327 kubelet[2692]: I0128 00:59:43.164877 2692 server.go:956] "Client rotation is on, will bootstrap in background" Jan 28 00:59:43.167388 kubelet[2692]: I0128 00:59:43.167361 2692 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 28 00:59:43.171367 kubelet[2692]: I0128 00:59:43.171240 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 00:59:43.178116 kubelet[2692]: E0128 00:59:43.178070 2692 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 00:59:43.178524 kubelet[2692]: I0128 00:59:43.178503 2692 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 28 00:59:43.192413 kubelet[2692]: I0128 00:59:43.192370 2692 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 28 00:59:43.194330 kubelet[2692]: I0128 00:59:43.194111 2692 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 00:59:43.194973 kubelet[2692]: I0128 00:59:43.194171 2692 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-8h12l.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 28 00:59:43.194973 kubelet[2692]: I0128 00:59:43.194646 2692 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 00:59:43.194973 kubelet[2692]: I0128 00:59:43.194666 2692 container_manager_linux.go:306] "Creating device plugin manager" Jan 28 00:59:43.194973 kubelet[2692]: I0128 00:59:43.194715 2692 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 28 00:59:43.196466 kubelet[2692]: I0128 00:59:43.196442 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:43.207809 kubelet[2692]: I0128 00:59:43.207414 2692 kubelet.go:475] "Attempting to sync node with API server" Jan 28 00:59:43.207809 kubelet[2692]: I0128 00:59:43.207746 2692 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 00:59:43.211072 kubelet[2692]: I0128 00:59:43.210364 2692 kubelet.go:387] "Adding apiserver pod source" Jan 28 00:59:43.211072 kubelet[2692]: I0128 00:59:43.210411 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 00:59:43.219305 kubelet[2692]: I0128 00:59:43.217939 2692 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 00:59:43.222379 kubelet[2692]: I0128 00:59:43.219921 2692 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 28 00:59:43.222379 kubelet[2692]: I0128 00:59:43.220073 2692 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 28 
00:59:43.238729 kubelet[2692]: I0128 00:59:43.238694 2692 server.go:1262] "Started kubelet" Jan 28 00:59:43.251172 kubelet[2692]: I0128 00:59:43.248926 2692 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 00:59:43.253120 kubelet[2692]: I0128 00:59:43.252517 2692 server.go:310] "Adding debug handlers to kubelet server" Jan 28 00:59:43.254733 kubelet[2692]: I0128 00:59:43.254661 2692 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 00:59:43.255009 kubelet[2692]: I0128 00:59:43.254982 2692 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 28 00:59:43.255716 kubelet[2692]: I0128 00:59:43.255555 2692 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 00:59:43.268000 kubelet[2692]: I0128 00:59:43.267216 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 00:59:43.271812 kubelet[2692]: I0128 00:59:43.271780 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 00:59:43.281353 kubelet[2692]: I0128 00:59:43.281307 2692 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 28 00:59:43.283415 kubelet[2692]: I0128 00:59:43.282724 2692 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 28 00:59:43.288268 kubelet[2692]: I0128 00:59:43.286582 2692 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 00:59:43.293997 kubelet[2692]: E0128 00:59:43.293588 2692 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 00:59:43.299494 kubelet[2692]: I0128 00:59:43.298698 2692 reconciler.go:29] "Reconciler: start to sync state" Jan 28 00:59:43.301513 kubelet[2692]: I0128 00:59:43.301362 2692 factory.go:223] Registration of the containerd container factory successfully Jan 28 00:59:43.301513 kubelet[2692]: I0128 00:59:43.301503 2692 factory.go:223] Registration of the systemd container factory successfully Jan 28 00:59:43.338162 kubelet[2692]: I0128 00:59:43.338088 2692 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 28 00:59:43.341682 kubelet[2692]: I0128 00:59:43.341655 2692 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 28 00:59:43.341854 kubelet[2692]: I0128 00:59:43.341835 2692 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 28 00:59:43.341980 kubelet[2692]: I0128 00:59:43.341960 2692 kubelet.go:2427] "Starting kubelet main sync loop" Jan 28 00:59:43.342138 kubelet[2692]: E0128 00:59:43.342107 2692 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 00:59:43.441596 kubelet[2692]: I0128 00:59:43.441543 2692 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 00:59:43.441970 kubelet[2692]: I0128 00:59:43.441581 2692 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 00:59:43.441970 kubelet[2692]: I0128 00:59:43.441868 2692 state_mem.go:36] "Initialized new in-memory state store" Jan 28 00:59:43.444479 kubelet[2692]: E0128 00:59:43.442526 2692 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 00:59:43.447070 kubelet[2692]: I0128 00:59:43.447023 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 00:59:43.447186 kubelet[2692]: I0128 00:59:43.447070 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 00:59:43.449843 kubelet[2692]: I0128 00:59:43.449787 2692 policy_none.go:49] "None policy: Start" Jan 28 00:59:43.449959 kubelet[2692]: I0128 00:59:43.449877 2692 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 28 00:59:43.449959 kubelet[2692]: I0128 00:59:43.449913 2692 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 28 00:59:43.451126 kubelet[2692]: I0128 00:59:43.450523 2692 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 28 00:59:43.451126 kubelet[2692]: I0128 00:59:43.450600 2692 policy_none.go:47] "Start" Jan 28 00:59:43.486881 kubelet[2692]: E0128 00:59:43.483968 2692 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 28 00:59:43.506846 kubelet[2692]: I0128 00:59:43.506518 2692 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 00:59:43.507437 kubelet[2692]: I0128 00:59:43.506640 2692 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 00:59:43.515329 kubelet[2692]: I0128 00:59:43.507589 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 00:59:43.522233 kubelet[2692]: E0128 00:59:43.522075 2692 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 00:59:43.645351 kubelet[2692]: I0128 00:59:43.645187 2692 kubelet_node_status.go:75] "Attempting to register node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.647574 kubelet[2692]: I0128 00:59:43.646855 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.649355 kubelet[2692]: I0128 00:59:43.649192 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.678320 kubelet[2692]: I0128 00:59:43.677917 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.693319 kubelet[2692]: I0128 00:59:43.692417 2692 kubelet_node_status.go:124] "Node was previously registered" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.693319 kubelet[2692]: I0128 00:59:43.692542 2692 kubelet_node_status.go:78] "Successfully registered node" node="srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.713310 kubelet[2692]: I0128 00:59:43.712075 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:59:43.717302 kubelet[2692]: I0128 00:59:43.713744 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-ca-certs\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: \"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717302 kubelet[2692]: I0128 00:59:43.716348 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-k8s-certs\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: \"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717302 kubelet[2692]: I0128 00:59:43.716432 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-ca-certs\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717302 kubelet[2692]: I0128 00:59:43.716475 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-k8s-certs\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717302 kubelet[2692]: I0128 00:59:43.716508 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f3743e28222cbefc17b449981b037f-usr-share-ca-certificates\") pod \"kube-apiserver-srv-8h12l.gb1.brightbox.com\" (UID: \"b5f3743e28222cbefc17b449981b037f\") " pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.716542 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-flexvolume-dir\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.716598 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-kubeconfig\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.716631 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b54587e432e96a8f7012ebeb7819fdb8-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-8h12l.gb1.brightbox.com\" (UID: \"b54587e432e96a8f7012ebeb7819fdb8\") " pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.716688 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b1725913ba562f5c72b4a471c16d6707-kubeconfig\") pod \"kube-scheduler-srv-8h12l.gb1.brightbox.com\" (UID: \"b1725913ba562f5c72b4a471c16d6707\") " pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.714005 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:59:43.717616 kubelet[2692]: I0128 00:59:43.714362 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:59:43.717931 kubelet[2692]: E0128 00:59:43.716962 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-8h12l.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:44.217609 kubelet[2692]: I0128 00:59:44.217484 2692 apiserver.go:52] "Watching apiserver" Jan 28 00:59:44.282787 kubelet[2692]: I0128 00:59:44.282665 2692 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 00:59:44.382514 kubelet[2692]: I0128 00:59:44.382473 2692 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:44.395229 kubelet[2692]: I0128 00:59:44.395148 2692 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 28 00:59:44.395457 kubelet[2692]: E0128 00:59:44.395244 2692 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-8h12l.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" Jan 28 00:59:44.438791 kubelet[2692]: I0128 00:59:44.438709 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-8h12l.gb1.brightbox.com" podStartSLOduration=1.438673388 
podStartE2EDuration="1.438673388s" podCreationTimestamp="2026-01-28 00:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:44.414921326 +0000 UTC m=+1.361058921" watchObservedRunningTime="2026-01-28 00:59:44.438673388 +0000 UTC m=+1.384810967" Jan 28 00:59:44.455075 kubelet[2692]: I0128 00:59:44.454840 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-8h12l.gb1.brightbox.com" podStartSLOduration=1.454813531 podStartE2EDuration="1.454813531s" podCreationTimestamp="2026-01-28 00:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:44.440229023 +0000 UTC m=+1.386366635" watchObservedRunningTime="2026-01-28 00:59:44.454813531 +0000 UTC m=+1.400951135" Jan 28 00:59:44.480467 kubelet[2692]: I0128 00:59:44.479642 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-8h12l.gb1.brightbox.com" podStartSLOduration=4.479623405 podStartE2EDuration="4.479623405s" podCreationTimestamp="2026-01-28 00:59:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:44.455044988 +0000 UTC m=+1.401182584" watchObservedRunningTime="2026-01-28 00:59:44.479623405 +0000 UTC m=+1.425761012" Jan 28 00:59:48.510467 kubelet[2692]: I0128 00:59:48.510264 2692 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 00:59:48.513224 kubelet[2692]: I0128 00:59:48.512523 2692 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 00:59:48.513439 containerd[1501]: time="2026-01-28T00:59:48.511303541Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 28 00:59:49.379979 systemd[1]: Created slice kubepods-besteffort-pod83ed3f5a_78e3_48fd_bf2b_d3107ac7546d.slice - libcontainer container kubepods-besteffort-pod83ed3f5a_78e3_48fd_bf2b_d3107ac7546d.slice. 
Jan 28 00:59:49.454670 kubelet[2692]: I0128 00:59:49.454522 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83ed3f5a-78e3-48fd-bf2b-d3107ac7546d-xtables-lock\") pod \"kube-proxy-chwmv\" (UID: \"83ed3f5a-78e3-48fd-bf2b-d3107ac7546d\") " pod="kube-system/kube-proxy-chwmv" Jan 28 00:59:49.455551 kubelet[2692]: I0128 00:59:49.455340 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83ed3f5a-78e3-48fd-bf2b-d3107ac7546d-lib-modules\") pod \"kube-proxy-chwmv\" (UID: \"83ed3f5a-78e3-48fd-bf2b-d3107ac7546d\") " pod="kube-system/kube-proxy-chwmv" Jan 28 00:59:49.455551 kubelet[2692]: I0128 00:59:49.455444 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjjgf\" (UniqueName: \"kubernetes.io/projected/83ed3f5a-78e3-48fd-bf2b-d3107ac7546d-kube-api-access-pjjgf\") pod \"kube-proxy-chwmv\" (UID: \"83ed3f5a-78e3-48fd-bf2b-d3107ac7546d\") " pod="kube-system/kube-proxy-chwmv" Jan 28 00:59:49.455551 kubelet[2692]: I0128 00:59:49.455504 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/83ed3f5a-78e3-48fd-bf2b-d3107ac7546d-kube-proxy\") pod \"kube-proxy-chwmv\" (UID: \"83ed3f5a-78e3-48fd-bf2b-d3107ac7546d\") " pod="kube-system/kube-proxy-chwmv" Jan 28 00:59:49.687607 systemd[1]: Created slice kubepods-besteffort-pod6974371e_db5b_4510_8d7f_e8bbd77af6cd.slice - libcontainer container kubepods-besteffort-pod6974371e_db5b_4510_8d7f_e8bbd77af6cd.slice. Jan 28 00:59:49.697302 containerd[1501]: time="2026-01-28T00:59:49.697098355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chwmv,Uid:83ed3f5a-78e3-48fd-bf2b-d3107ac7546d,Namespace:kube-system,Attempt:0,}" Jan 28 00:59:49.749322 containerd[1501]: time="2026-01-28T00:59:49.748036135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:49.749322 containerd[1501]: time="2026-01-28T00:59:49.749153774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:49.749322 containerd[1501]: time="2026-01-28T00:59:49.749241162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:49.750320 containerd[1501]: time="2026-01-28T00:59:49.749526037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:49.759062 kubelet[2692]: I0128 00:59:49.758791 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6974371e-db5b-4510-8d7f-e8bbd77af6cd-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-9j4j9\" (UID: \"6974371e-db5b-4510-8d7f-e8bbd77af6cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9j4j9" Jan 28 00:59:49.759062 kubelet[2692]: I0128 00:59:49.758854 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfwlw\" (UniqueName: \"kubernetes.io/projected/6974371e-db5b-4510-8d7f-e8bbd77af6cd-kube-api-access-tfwlw\") pod \"tigera-operator-65cdcdfd6d-9j4j9\" (UID: \"6974371e-db5b-4510-8d7f-e8bbd77af6cd\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9j4j9" Jan 28 00:59:49.781730 systemd[1]: run-containerd-runc-k8s.io-afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b-runc.iPs7ha.mount: Deactivated successfully. Jan 28 00:59:49.791573 systemd[1]: Started cri-containerd-afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b.scope - libcontainer container afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b. Jan 28 00:59:49.836261 containerd[1501]: time="2026-01-28T00:59:49.836170074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-chwmv,Uid:83ed3f5a-78e3-48fd-bf2b-d3107ac7546d,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b\"" Jan 28 00:59:49.848848 containerd[1501]: time="2026-01-28T00:59:49.848799257Z" level=info msg="CreateContainer within sandbox \"afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 00:59:49.877630 containerd[1501]: time="2026-01-28T00:59:49.877563866Z" level=info msg="CreateContainer within sandbox \"afb7990dcf55b32867db08af35d52556eb9a93e76067cc8f8d17db2f3c91eb1b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86e35192aed9b1c6aba94989a338af935dfab6725211dc782027b7c29259caad\"" Jan 28 00:59:49.883317 containerd[1501]: time="2026-01-28T00:59:49.883261179Z" level=info msg="StartContainer for \"86e35192aed9b1c6aba94989a338af935dfab6725211dc782027b7c29259caad\"" Jan 28 00:59:49.928531 systemd[1]: Started cri-containerd-86e35192aed9b1c6aba94989a338af935dfab6725211dc782027b7c29259caad.scope - libcontainer container 86e35192aed9b1c6aba94989a338af935dfab6725211dc782027b7c29259caad. Jan 28 00:59:49.982312 containerd[1501]: time="2026-01-28T00:59:49.982111715Z" level=info msg="StartContainer for \"86e35192aed9b1c6aba94989a338af935dfab6725211dc782027b7c29259caad\" returns successfully" Jan 28 00:59:49.997239 containerd[1501]: time="2026-01-28T00:59:49.997123859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9j4j9,Uid:6974371e-db5b-4510-8d7f-e8bbd77af6cd,Namespace:tigera-operator,Attempt:0,}" Jan 28 00:59:50.061261 containerd[1501]: time="2026-01-28T00:59:50.060833714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 00:59:50.061261 containerd[1501]: time="2026-01-28T00:59:50.060960066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 00:59:50.061261 containerd[1501]: time="2026-01-28T00:59:50.061002559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:50.061261 containerd[1501]: time="2026-01-28T00:59:50.061186056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 00:59:50.090590 systemd[1]: Started cri-containerd-c821f2b257d9c9b22d2a90e0ee9141a2f1456a4ac3a6e9ab3fac22ea58926e07.scope - libcontainer container c821f2b257d9c9b22d2a90e0ee9141a2f1456a4ac3a6e9ab3fac22ea58926e07. Jan 28 00:59:50.175615 containerd[1501]: time="2026-01-28T00:59:50.174918914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9j4j9,Uid:6974371e-db5b-4510-8d7f-e8bbd77af6cd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c821f2b257d9c9b22d2a90e0ee9141a2f1456a4ac3a6e9ab3fac22ea58926e07\"" Jan 28 00:59:50.179656 containerd[1501]: time="2026-01-28T00:59:50.179334069Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 00:59:50.422994 kubelet[2692]: I0128 00:59:50.422903 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-chwmv" podStartSLOduration=1.42284381 podStartE2EDuration="1.42284381s" podCreationTimestamp="2026-01-28 00:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 00:59:50.422509166 +0000 UTC m=+7.368646778" watchObservedRunningTime="2026-01-28 00:59:50.42284381 +0000 UTC m=+7.368981420" Jan 28 00:59:52.224872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3426312373.mount: Deactivated successfully. 
Jan 28 00:59:53.323349 containerd[1501]: time="2026-01-28T00:59:53.322174597Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.325332 containerd[1501]: time="2026-01-28T00:59:53.323837986Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 00:59:53.325332 containerd[1501]: time="2026-01-28T00:59:53.325173828Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.331628 containerd[1501]: time="2026-01-28T00:59:53.330815040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 00:59:53.332448 containerd[1501]: time="2026-01-28T00:59:53.332412186Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.153015925s" Jan 28 00:59:53.332585 containerd[1501]: time="2026-01-28T00:59:53.332453519Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 00:59:53.355996 containerd[1501]: time="2026-01-28T00:59:53.355785160Z" level=info msg="CreateContainer within sandbox \"c821f2b257d9c9b22d2a90e0ee9141a2f1456a4ac3a6e9ab3fac22ea58926e07\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 00:59:53.380453 containerd[1501]: time="2026-01-28T00:59:53.380262305Z" level=info msg="CreateContainer within sandbox \"c821f2b257d9c9b22d2a90e0ee9141a2f1456a4ac3a6e9ab3fac22ea58926e07\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af\"" Jan 28 00:59:53.381585 containerd[1501]: time="2026-01-28T00:59:53.381522117Z" level=info msg="StartContainer for \"c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af\"" Jan 28 00:59:53.440891 systemd[1]: run-containerd-runc-k8s.io-c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af-runc.xdy4Mk.mount: Deactivated successfully. Jan 28 00:59:53.453229 systemd[1]: Started cri-containerd-c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af.scope - libcontainer container c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af. 
Jan 28 00:59:53.501522 containerd[1501]: time="2026-01-28T00:59:53.501468226Z" level=info msg="StartContainer for \"c77acc8756e7e9272fe8ab4b40ef55eb2b60369568a9c355763437f1b586e4af\" returns successfully" Jan 28 00:59:54.458594 kubelet[2692]: I0128 00:59:54.455567 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-9j4j9" podStartSLOduration=2.290555333 podStartE2EDuration="5.455531198s" podCreationTimestamp="2026-01-28 00:59:49 +0000 UTC" firstStartedPulling="2026-01-28 00:59:50.177558037 +0000 UTC m=+7.123695616" lastFinishedPulling="2026-01-28 00:59:53.342533895 +0000 UTC m=+10.288671481" observedRunningTime="2026-01-28 00:59:54.455102746 +0000 UTC m=+11.401240353" watchObservedRunningTime="2026-01-28 00:59:54.455531198 +0000 UTC m=+11.401668788" Jan 28 01:00:01.323324 sudo[1759]: pam_unix(sudo:session): session closed for user root Jan 28 01:00:01.425129 sshd[1756]: pam_unix(sshd:session): session closed for user core Jan 28 01:00:01.437505 systemd[1]: sshd@8-10.244.8.18:22-68.220.241.50:39524.service: Deactivated successfully. Jan 28 01:00:01.448168 systemd[1]: session-11.scope: Deactivated successfully. Jan 28 01:00:01.449585 systemd[1]: session-11.scope: Consumed 7.083s CPU time, 145.5M memory peak, 0B memory swap peak. Jan 28 01:00:01.453655 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Jan 28 01:00:01.457134 systemd-logind[1484]: Removed session 11. Jan 28 01:00:09.864838 systemd[1]: Created slice kubepods-besteffort-podb3116146_781e_4c5f_9a3f_9a38e7b10ccc.slice - libcontainer container kubepods-besteffort-podb3116146_781e_4c5f_9a3f_9a38e7b10ccc.slice. Jan 28 01:00:09.930894 kubelet[2692]: I0128 01:00:09.930624 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b3116146-781e-4c5f-9a3f-9a38e7b10ccc-typha-certs\") pod \"calico-typha-775b6f47-vllzh\" (UID: \"b3116146-781e-4c5f-9a3f-9a38e7b10ccc\") " pod="calico-system/calico-typha-775b6f47-vllzh" Jan 28 01:00:09.930894 kubelet[2692]: I0128 01:00:09.930796 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52x6r\" (UniqueName: \"kubernetes.io/projected/b3116146-781e-4c5f-9a3f-9a38e7b10ccc-kube-api-access-52x6r\") pod \"calico-typha-775b6f47-vllzh\" (UID: \"b3116146-781e-4c5f-9a3f-9a38e7b10ccc\") " pod="calico-system/calico-typha-775b6f47-vllzh" Jan 28 01:00:09.930894 kubelet[2692]: I0128 01:00:09.930877 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3116146-781e-4c5f-9a3f-9a38e7b10ccc-tigera-ca-bundle\") pod \"calico-typha-775b6f47-vllzh\" (UID: \"b3116146-781e-4c5f-9a3f-9a38e7b10ccc\") " pod="calico-system/calico-typha-775b6f47-vllzh" Jan 28 01:00:10.012519 systemd[1]: Created slice kubepods-besteffort-pod989b54f5_055d_478b_ae8e_7d7479244852.slice - libcontainer container kubepods-besteffort-pod989b54f5_055d_478b_ae8e_7d7479244852.slice. 
Jan 28 01:00:10.138074 kubelet[2692]: I0128 01:00:10.135829 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-cni-log-dir\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138074 kubelet[2692]: I0128 01:00:10.135902 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-var-lib-calico\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138074 kubelet[2692]: I0128 01:00:10.135935 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/989b54f5-055d-478b-ae8e-7d7479244852-tigera-ca-bundle\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138074 kubelet[2692]: I0128 01:00:10.135989 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-xtables-lock\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138074 kubelet[2692]: I0128 01:00:10.136017 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92jb5\" (UniqueName: \"kubernetes.io/projected/989b54f5-055d-478b-ae8e-7d7479244852-kube-api-access-92jb5\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138513 kubelet[2692]: I0128 01:00:10.136066 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-cni-bin-dir\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138513 kubelet[2692]: I0128 01:00:10.136092 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-var-run-calico\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138513 kubelet[2692]: I0128 01:00:10.136152 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-flexvol-driver-host\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138513 kubelet[2692]: I0128 01:00:10.136184 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-policysync\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138513 kubelet[2692]: I0128 01:00:10.136222 2692 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-cni-net-dir\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138780 kubelet[2692]: I0128 01:00:10.136256 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/989b54f5-055d-478b-ae8e-7d7479244852-lib-modules\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.138780 kubelet[2692]: I0128 01:00:10.136787 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/989b54f5-055d-478b-ae8e-7d7479244852-node-certs\") pod \"calico-node-7k5sr\" (UID: \"989b54f5-055d-478b-ae8e-7d7479244852\") " pod="calico-system/calico-node-7k5sr" Jan 28 01:00:10.174559 containerd[1501]: time="2026-01-28T01:00:10.174125459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775b6f47-vllzh,Uid:b3116146-781e-4c5f-9a3f-9a38e7b10ccc,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:10.203517 kubelet[2692]: E0128 01:00:10.203395 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:10.259154 kubelet[2692]: E0128 01:00:10.258710 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.259154 kubelet[2692]: W0128 01:00:10.258759 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.259154 kubelet[2692]: E0128 01:00:10.258817 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.277743 kubelet[2692]: E0128 01:00:10.275373 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.277743 kubelet[2692]: W0128 01:00:10.275427 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.277743 kubelet[2692]: E0128 01:00:10.275459 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.292312 kubelet[2692]: E0128 01:00:10.292027 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.292854 kubelet[2692]: W0128 01:00:10.292519 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.292854 kubelet[2692]: E0128 01:00:10.292561 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.294034 kubelet[2692]: E0128 01:00:10.293267 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.294034 kubelet[2692]: W0128 01:00:10.293310 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.294034 kubelet[2692]: E0128 01:00:10.293329 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.295121 kubelet[2692]: E0128 01:00:10.294517 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.295121 kubelet[2692]: W0128 01:00:10.294537 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.295121 kubelet[2692]: E0128 01:00:10.294556 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.296381 kubelet[2692]: E0128 01:00:10.296359 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.296481 kubelet[2692]: W0128 01:00:10.296461 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.296753 kubelet[2692]: E0128 01:00:10.296579 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.297442 kubelet[2692]: E0128 01:00:10.296977 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.297442 kubelet[2692]: W0128 01:00:10.296996 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.297442 kubelet[2692]: E0128 01:00:10.297012 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.298503 kubelet[2692]: E0128 01:00:10.298377 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.298503 kubelet[2692]: W0128 01:00:10.298402 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.298503 kubelet[2692]: E0128 01:00:10.298422 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.298740 kubelet[2692]: E0128 01:00:10.298715 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.298740 kubelet[2692]: W0128 01:00:10.298737 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.298858 kubelet[2692]: E0128 01:00:10.298793 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.299677 kubelet[2692]: E0128 01:00:10.299102 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.299677 kubelet[2692]: W0128 01:00:10.299123 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.299677 kubelet[2692]: E0128 01:00:10.299139 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.299677 kubelet[2692]: E0128 01:00:10.299441 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.299677 kubelet[2692]: W0128 01:00:10.299455 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.299677 kubelet[2692]: E0128 01:00:10.299472 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.301026 kubelet[2692]: E0128 01:00:10.300193 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.301026 kubelet[2692]: W0128 01:00:10.300215 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.301026 kubelet[2692]: E0128 01:00:10.300238 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.301026 kubelet[2692]: E0128 01:00:10.300605 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.301026 kubelet[2692]: W0128 01:00:10.300622 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.301026 kubelet[2692]: E0128 01:00:10.300639 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.302452 kubelet[2692]: E0128 01:00:10.301100 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.302452 kubelet[2692]: W0128 01:00:10.301116 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.302452 kubelet[2692]: E0128 01:00:10.301141 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.302452 kubelet[2692]: E0128 01:00:10.301936 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.302452 kubelet[2692]: W0128 01:00:10.301951 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.302452 kubelet[2692]: E0128 01:00:10.301994 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.302832 kubelet[2692]: E0128 01:00:10.302494 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.302832 kubelet[2692]: W0128 01:00:10.302508 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.302832 kubelet[2692]: E0128 01:00:10.302523 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.303451 kubelet[2692]: E0128 01:00:10.302987 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.303451 kubelet[2692]: W0128 01:00:10.303007 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.303451 kubelet[2692]: E0128 01:00:10.303023 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.304910 kubelet[2692]: E0128 01:00:10.303649 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.304910 kubelet[2692]: W0128 01:00:10.303664 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.304910 kubelet[2692]: E0128 01:00:10.303679 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.304910 kubelet[2692]: E0128 01:00:10.304611 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.304910 kubelet[2692]: W0128 01:00:10.304627 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.304910 kubelet[2692]: E0128 01:00:10.304642 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.305197 kubelet[2692]: E0128 01:00:10.304917 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.305197 kubelet[2692]: W0128 01:00:10.304931 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.305197 kubelet[2692]: E0128 01:00:10.304946 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.305372 kubelet[2692]: E0128 01:00:10.305231 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.305372 kubelet[2692]: W0128 01:00:10.305246 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.305372 kubelet[2692]: E0128 01:00:10.305261 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.306515 kubelet[2692]: E0128 01:00:10.305702 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.306515 kubelet[2692]: W0128 01:00:10.305723 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.306515 kubelet[2692]: E0128 01:00:10.305738 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.308474 containerd[1501]: time="2026-01-28T01:00:10.307692278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:10.308474 containerd[1501]: time="2026-01-28T01:00:10.307813640Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:10.308474 containerd[1501]: time="2026-01-28T01:00:10.307852407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:10.311191 containerd[1501]: time="2026-01-28T01:00:10.310197250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:10.327745 containerd[1501]: time="2026-01-28T01:00:10.327634043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7k5sr,Uid:989b54f5-055d-478b-ae8e-7d7479244852,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:10.340095 kubelet[2692]: E0128 01:00:10.340047 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.340095 kubelet[2692]: W0128 01:00:10.340081 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.340350 kubelet[2692]: E0128 01:00:10.340109 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.340350 kubelet[2692]: I0128 01:00:10.340153 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d6fe3f19-c2cb-4440-ac98-4f17244eae9f-registration-dir\") pod \"csi-node-driver-v8h92\" (UID: \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\") " pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:10.344313 kubelet[2692]: E0128 01:00:10.340597 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.344313 kubelet[2692]: W0128 01:00:10.340621 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.344313 kubelet[2692]: E0128 01:00:10.340657 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.344313 kubelet[2692]: I0128 01:00:10.340696 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d6fe3f19-c2cb-4440-ac98-4f17244eae9f-socket-dir\") pod \"csi-node-driver-v8h92\" (UID: \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\") " pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:10.344313 kubelet[2692]: E0128 01:00:10.341078 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.344313 kubelet[2692]: W0128 01:00:10.341114 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.344313 kubelet[2692]: E0128 01:00:10.341136 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.344313 kubelet[2692]: E0128 01:00:10.341524 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.344313 kubelet[2692]: W0128 01:00:10.341539 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.344821 kubelet[2692]: E0128 01:00:10.341554 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.344821 kubelet[2692]: E0128 01:00:10.342030 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.344821 kubelet[2692]: W0128 01:00:10.342046 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.344821 kubelet[2692]: E0128 01:00:10.342117 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.344821 kubelet[2692]: I0128 01:00:10.342197 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d6fe3f19-c2cb-4440-ac98-4f17244eae9f-varrun\") pod \"csi-node-driver-v8h92\" (UID: \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\") " pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:10.344821 kubelet[2692]: E0128 01:00:10.342660 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.344821 kubelet[2692]: W0128 01:00:10.342677 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.344821 kubelet[2692]: E0128 01:00:10.342693 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.344821 kubelet[2692]: I0128 01:00:10.342757 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d6fe3f19-c2cb-4440-ac98-4f17244eae9f-kubelet-dir\") pod \"csi-node-driver-v8h92\" (UID: \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\") " pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.343216 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345369 kubelet[2692]: W0128 01:00:10.343230 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.343245 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.343667 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345369 kubelet[2692]: W0128 01:00:10.343681 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.343696 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.344066 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345369 kubelet[2692]: W0128 01:00:10.344081 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.345369 kubelet[2692]: E0128 01:00:10.344095 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.345765 kubelet[2692]: I0128 01:00:10.344129 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxvrd\" (UniqueName: \"kubernetes.io/projected/d6fe3f19-c2cb-4440-ac98-4f17244eae9f-kube-api-access-qxvrd\") pod \"csi-node-driver-v8h92\" (UID: \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\") " pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:10.345765 kubelet[2692]: E0128 01:00:10.344464 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345765 kubelet[2692]: W0128 01:00:10.344480 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.345765 kubelet[2692]: E0128 01:00:10.344495 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.345765 kubelet[2692]: E0128 01:00:10.344801 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345765 kubelet[2692]: W0128 01:00:10.344815 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.345765 kubelet[2692]: E0128 01:00:10.344830 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.345765 kubelet[2692]: E0128 01:00:10.345223 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.345765 kubelet[2692]: W0128 01:00:10.345248 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.346200 kubelet[2692]: E0128 01:00:10.345265 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.346200 kubelet[2692]: E0128 01:00:10.345591 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.346200 kubelet[2692]: W0128 01:00:10.345625 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.346200 kubelet[2692]: E0128 01:00:10.345645 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.346200 kubelet[2692]: E0128 01:00:10.345941 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.346200 kubelet[2692]: W0128 01:00:10.345955 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.346200 kubelet[2692]: E0128 01:00:10.345984 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.348296 kubelet[2692]: E0128 01:00:10.346306 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.348296 kubelet[2692]: W0128 01:00:10.346321 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.348296 kubelet[2692]: E0128 01:00:10.346335 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.425068 systemd[1]: Started cri-containerd-ad5b5044da6b0196037530d5d5dd943fb4ba87b430471ed2e2dcc23aeab26e29.scope - libcontainer container ad5b5044da6b0196037530d5d5dd943fb4ba87b430471ed2e2dcc23aeab26e29. Jan 28 01:00:10.449880 kubelet[2692]: E0128 01:00:10.447767 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.450707 kubelet[2692]: W0128 01:00:10.450144 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.450707 kubelet[2692]: E0128 01:00:10.450187 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.451463 kubelet[2692]: E0128 01:00:10.451378 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.451463 kubelet[2692]: W0128 01:00:10.451398 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.451463 kubelet[2692]: E0128 01:00:10.451434 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.452811 kubelet[2692]: E0128 01:00:10.452139 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.452811 kubelet[2692]: W0128 01:00:10.452159 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.452811 kubelet[2692]: E0128 01:00:10.452175 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.454189 kubelet[2692]: E0128 01:00:10.453505 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.454189 kubelet[2692]: W0128 01:00:10.453525 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.454189 kubelet[2692]: E0128 01:00:10.454041 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.455206 kubelet[2692]: E0128 01:00:10.454811 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.455206 kubelet[2692]: W0128 01:00:10.454831 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.455206 kubelet[2692]: E0128 01:00:10.454861 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.456498 kubelet[2692]: E0128 01:00:10.456178 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.456498 kubelet[2692]: W0128 01:00:10.456216 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.456498 kubelet[2692]: E0128 01:00:10.456234 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.457348 kubelet[2692]: E0128 01:00:10.457326 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.457575 kubelet[2692]: W0128 01:00:10.457553 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.457877 kubelet[2692]: E0128 01:00:10.457782 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.459373 kubelet[2692]: E0128 01:00:10.459195 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.459373 kubelet[2692]: W0128 01:00:10.459215 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.459373 kubelet[2692]: E0128 01:00:10.459232 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.460259 kubelet[2692]: E0128 01:00:10.459903 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.460259 kubelet[2692]: W0128 01:00:10.459921 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.460259 kubelet[2692]: E0128 01:00:10.459937 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.461471 kubelet[2692]: E0128 01:00:10.461168 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.461471 kubelet[2692]: W0128 01:00:10.461193 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.461471 kubelet[2692]: E0128 01:00:10.461225 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.462548 kubelet[2692]: E0128 01:00:10.462399 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.462548 kubelet[2692]: W0128 01:00:10.462419 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.462548 kubelet[2692]: E0128 01:00:10.462435 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.464512 kubelet[2692]: E0128 01:00:10.463513 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.464512 kubelet[2692]: W0128 01:00:10.463533 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.464512 kubelet[2692]: E0128 01:00:10.463550 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.464512 kubelet[2692]: E0128 01:00:10.464352 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.464512 kubelet[2692]: W0128 01:00:10.464371 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.464512 kubelet[2692]: E0128 01:00:10.464388 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.465719 kubelet[2692]: E0128 01:00:10.465503 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.465719 kubelet[2692]: W0128 01:00:10.465522 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.465719 kubelet[2692]: E0128 01:00:10.465538 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.467017 kubelet[2692]: E0128 01:00:10.466469 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.467017 kubelet[2692]: W0128 01:00:10.466488 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.467017 kubelet[2692]: E0128 01:00:10.466507 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.467990 kubelet[2692]: E0128 01:00:10.467388 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.467990 kubelet[2692]: W0128 01:00:10.467406 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.467990 kubelet[2692]: E0128 01:00:10.467422 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.468873 kubelet[2692]: E0128 01:00:10.468469 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.468873 kubelet[2692]: W0128 01:00:10.468559 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.468873 kubelet[2692]: E0128 01:00:10.468579 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.470625 kubelet[2692]: E0128 01:00:10.470370 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.470625 kubelet[2692]: W0128 01:00:10.470389 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.470625 kubelet[2692]: E0128 01:00:10.470405 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.471137 kubelet[2692]: E0128 01:00:10.470957 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.471137 kubelet[2692]: W0128 01:00:10.470989 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.471137 kubelet[2692]: E0128 01:00:10.471007 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.472521 kubelet[2692]: E0128 01:00:10.472501 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.472633 kubelet[2692]: W0128 01:00:10.472612 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.472840 kubelet[2692]: E0128 01:00:10.472723 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.474128 kubelet[2692]: E0128 01:00:10.473257 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.474128 kubelet[2692]: W0128 01:00:10.473275 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.474128 kubelet[2692]: E0128 01:00:10.473307 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.475647 kubelet[2692]: E0128 01:00:10.475124 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.475647 kubelet[2692]: W0128 01:00:10.475143 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.475647 kubelet[2692]: E0128 01:00:10.475160 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.478187 kubelet[2692]: E0128 01:00:10.476767 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.478187 kubelet[2692]: W0128 01:00:10.476787 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.478187 kubelet[2692]: E0128 01:00:10.476804 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.479176 kubelet[2692]: E0128 01:00:10.478996 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.479176 kubelet[2692]: W0128 01:00:10.479034 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.479176 kubelet[2692]: E0128 01:00:10.479055 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 01:00:10.480226 kubelet[2692]: E0128 01:00:10.480139 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.480226 kubelet[2692]: W0128 01:00:10.480158 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.480226 kubelet[2692]: E0128 01:00:10.480175 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.492112 containerd[1501]: time="2026-01-28T01:00:10.490609521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:10.492112 containerd[1501]: time="2026-01-28T01:00:10.490709683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:10.492112 containerd[1501]: time="2026-01-28T01:00:10.490728124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:10.492112 containerd[1501]: time="2026-01-28T01:00:10.490885529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:10.507658 kubelet[2692]: E0128 01:00:10.507353 2692 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 01:00:10.507658 kubelet[2692]: W0128 01:00:10.507496 2692 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 01:00:10.507658 kubelet[2692]: E0128 01:00:10.507529 2692 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 01:00:10.528972 systemd[1]: Started cri-containerd-3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa.scope - libcontainer container 3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa. 
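Editor's note on the repeated kubelet errors above: they all come from FlexVolume plugin probing. The kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, invokes each driver with `init`, and because the nodeagent~uds/uds executable is not present on this node the call produces empty output, which then fails JSON unmarshalling with "unexpected end of JSON input". The sketch below is illustrative only (the simplified DriverStatus struct and the flow are assumptions, not the kubelet's actual driver-call.go code); it shows why a missing driver binary yields exactly this pair of warning and error messages, and what a driver is conventionally expected to print on `init` (e.g. {"status":"Success","capabilities":{"attach":false}}).

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is a simplified stand-in for the JSON a FlexVolume driver
// prints on "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(path string, args ...string) (*DriverStatus, error) {
	out, err := exec.Command(path, args...).CombinedOutput()
	if err != nil {
		// A missing executable fails here and leaves out empty,
		// analogous to the W "FlexVolume: driver call failed ... output: \"\"" lines.
		fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	}
	status := &DriverStatus{}
	// Unmarshalling the empty output is what produces
	// "unexpected end of JSON input" in the E log lines above.
	if err := json.Unmarshal(out, status); err != nil {
		return nil, fmt.Errorf("failed to unmarshal output for command: init, error: %w", err)
	}
	return status, nil
}

func main() {
	if _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init"); err != nil {
		fmt.Println(err)
	}
}
```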
Jan 28 01:00:10.610489 containerd[1501]: time="2026-01-28T01:00:10.610332447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7k5sr,Uid:989b54f5-055d-478b-ae8e-7d7479244852,Namespace:calico-system,Attempt:0,} returns sandbox id \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\"" Jan 28 01:00:10.619389 containerd[1501]: time="2026-01-28T01:00:10.619013367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 28 01:00:10.660223 containerd[1501]: time="2026-01-28T01:00:10.660169071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-775b6f47-vllzh,Uid:b3116146-781e-4c5f-9a3f-9a38e7b10ccc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad5b5044da6b0196037530d5d5dd943fb4ba87b430471ed2e2dcc23aeab26e29\"" Jan 28 01:00:12.347524 kubelet[2692]: E0128 01:00:12.347391 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:14.344251 kubelet[2692]: E0128 01:00:14.343752 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:16.343361 kubelet[2692]: E0128 01:00:16.342504 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:18.343373 kubelet[2692]: E0128 01:00:18.342850 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:19.676135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1902873839.mount: Deactivated successfully. 
Jan 28 01:00:19.879451 containerd[1501]: time="2026-01-28T01:00:19.876947210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:19.881395 containerd[1501]: time="2026-01-28T01:00:19.881245150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Jan 28 01:00:19.882550 containerd[1501]: time="2026-01-28T01:00:19.882476641Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:19.887964 containerd[1501]: time="2026-01-28T01:00:19.887894593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:19.890415 containerd[1501]: time="2026-01-28T01:00:19.889706105Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 9.270621404s" Jan 28 01:00:19.890415 containerd[1501]: time="2026-01-28T01:00:19.889785538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 28 01:00:19.894508 containerd[1501]: time="2026-01-28T01:00:19.894137640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 28 01:00:19.900232 containerd[1501]: time="2026-01-28T01:00:19.900184832Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 28 01:00:19.921024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount647449487.mount: Deactivated successfully. Jan 28 01:00:19.955660 containerd[1501]: time="2026-01-28T01:00:19.955469536Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93\"" Jan 28 01:00:19.957315 containerd[1501]: time="2026-01-28T01:00:19.957105961Z" level=info msg="StartContainer for \"30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93\"" Jan 28 01:00:20.027613 systemd[1]: Started cri-containerd-30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93.scope - libcontainer container 30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93. Jan 28 01:00:20.081619 containerd[1501]: time="2026-01-28T01:00:20.081558892Z" level=info msg="StartContainer for \"30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93\" returns successfully" Jan 28 01:00:20.103927 systemd[1]: cri-containerd-30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93.scope: Deactivated successfully. 
Jan 28 01:00:20.148706 containerd[1501]: time="2026-01-28T01:00:20.148526983Z" level=info msg="shim disconnected" id=30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93 namespace=k8s.io Jan 28 01:00:20.148706 containerd[1501]: time="2026-01-28T01:00:20.148687268Z" level=warning msg="cleaning up after shim disconnected" id=30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93 namespace=k8s.io Jan 28 01:00:20.148706 containerd[1501]: time="2026-01-28T01:00:20.148710881Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:00:20.168245 containerd[1501]: time="2026-01-28T01:00:20.168150185Z" level=warning msg="cleanup warnings time=\"2026-01-28T01:00:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 28 01:00:20.343852 kubelet[2692]: E0128 01:00:20.343598 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:20.584753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30bbe9963970210e57374340ffbda40436ba6d2b5cf93c0f8939cfc3a2b8ee93-rootfs.mount: Deactivated successfully. Jan 28 01:00:22.344466 kubelet[2692]: E0128 01:00:22.344327 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:24.343054 kubelet[2692]: E0128 01:00:24.342957 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:26.135317 containerd[1501]: time="2026-01-28T01:00:26.135209406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:26.136921 containerd[1501]: time="2026-01-28T01:00:26.136873319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890" Jan 28 01:00:26.141018 containerd[1501]: time="2026-01-28T01:00:26.139886645Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:26.145176 containerd[1501]: time="2026-01-28T01:00:26.145134173Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:26.146347 containerd[1501]: time="2026-01-28T01:00:26.146275004Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 6.252082498s" Jan 28 01:00:26.146501 containerd[1501]: time="2026-01-28T01:00:26.146472530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 28 01:00:26.149128 containerd[1501]: time="2026-01-28T01:00:26.149086327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 28 01:00:26.191248 containerd[1501]: time="2026-01-28T01:00:26.191164881Z" level=info msg="CreateContainer within sandbox \"ad5b5044da6b0196037530d5d5dd943fb4ba87b430471ed2e2dcc23aeab26e29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 28 01:00:26.207372 containerd[1501]: time="2026-01-28T01:00:26.207261710Z" level=info msg="CreateContainer within sandbox \"ad5b5044da6b0196037530d5d5dd943fb4ba87b430471ed2e2dcc23aeab26e29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5d98b731622aa8f2f2b85306451c22f3a915b0321aaec992bb8de3e2e29eb87d\"" Jan 28 01:00:26.208651 containerd[1501]: time="2026-01-28T01:00:26.208611425Z" level=info msg="StartContainer for \"5d98b731622aa8f2f2b85306451c22f3a915b0321aaec992bb8de3e2e29eb87d\"" Jan 28 01:00:26.301675 systemd[1]: Started cri-containerd-5d98b731622aa8f2f2b85306451c22f3a915b0321aaec992bb8de3e2e29eb87d.scope - libcontainer container 5d98b731622aa8f2f2b85306451c22f3a915b0321aaec992bb8de3e2e29eb87d. Jan 28 01:00:26.343345 kubelet[2692]: E0128 01:00:26.343088 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:26.384683 containerd[1501]: time="2026-01-28T01:00:26.384525905Z" level=info msg="StartContainer for \"5d98b731622aa8f2f2b85306451c22f3a915b0321aaec992bb8de3e2e29eb87d\" returns successfully" Jan 28 01:00:26.635934 kubelet[2692]: I0128 01:00:26.635661 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-775b6f47-vllzh" podStartSLOduration=2.150068867 podStartE2EDuration="17.635606033s" podCreationTimestamp="2026-01-28 01:00:09 +0000 UTC" firstStartedPulling="2026-01-28 01:00:10.662685058 +0000 UTC m=+27.608822650" lastFinishedPulling="2026-01-28 01:00:26.148222224 +0000 UTC m=+43.094359816" observedRunningTime="2026-01-28 01:00:26.63515739 +0000 UTC m=+43.581295003" watchObservedRunningTime="2026-01-28 01:00:26.635606033 +0000 UTC m=+43.581743629" Jan 28 01:00:28.343318 kubelet[2692]: E0128 01:00:28.343220 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:30.343716 kubelet[2692]: E0128 01:00:30.343615 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:32.305388 containerd[1501]: 
time="2026-01-28T01:00:32.303860249Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:32.307430 containerd[1501]: time="2026-01-28T01:00:32.307217253Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 28 01:00:32.309310 containerd[1501]: time="2026-01-28T01:00:32.308357988Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:32.312227 containerd[1501]: time="2026-01-28T01:00:32.312176384Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:32.313636 containerd[1501]: time="2026-01-28T01:00:32.313570426Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 6.164425004s" Jan 28 01:00:32.313790 containerd[1501]: time="2026-01-28T01:00:32.313761305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 28 01:00:32.327031 containerd[1501]: time="2026-01-28T01:00:32.326789503Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 28 01:00:32.343229 kubelet[2692]: E0128 01:00:32.343154 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:32.370654 containerd[1501]: time="2026-01-28T01:00:32.370356415Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2\"" Jan 28 01:00:32.372673 containerd[1501]: time="2026-01-28T01:00:32.372226132Z" level=info msg="StartContainer for \"932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2\"" Jan 28 01:00:32.446252 systemd[1]: Started cri-containerd-932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2.scope - libcontainer container 932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2. Jan 28 01:00:32.516527 containerd[1501]: time="2026-01-28T01:00:32.516472473Z" level=info msg="StartContainer for \"932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2\" returns successfully" Jan 28 01:00:33.412638 systemd[1]: cri-containerd-932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2.scope: Deactivated successfully. Jan 28 01:00:33.498322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2-rootfs.mount: Deactivated successfully. 
Jan 28 01:00:33.546350 kubelet[2692]: I0128 01:00:33.544547 2692 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 28 01:00:33.709623 containerd[1501]: time="2026-01-28T01:00:33.709429065Z" level=info msg="shim disconnected" id=932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2 namespace=k8s.io Jan 28 01:00:33.709623 containerd[1501]: time="2026-01-28T01:00:33.709641324Z" level=warning msg="cleaning up after shim disconnected" id=932ae91cd3fb49253d77a74ed6e1d22876f4bc5958f0b2adba72510e45e7b2c2 namespace=k8s.io Jan 28 01:00:33.711382 containerd[1501]: time="2026-01-28T01:00:33.709666976Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 28 01:00:33.788992 systemd[1]: Created slice kubepods-burstable-pod054a4d87_77d7_4fd5_ba18_4966e01b6356.slice - libcontainer container kubepods-burstable-pod054a4d87_77d7_4fd5_ba18_4966e01b6356.slice. Jan 28 01:00:33.843647 systemd[1]: Created slice kubepods-burstable-pod628696b9_5871_452c_9749_f01c86f7c8e5.slice - libcontainer container kubepods-burstable-pod628696b9_5871_452c_9749_f01c86f7c8e5.slice. Jan 28 01:00:33.873702 systemd[1]: Created slice kubepods-besteffort-pod1af325e3_7600_48af_bd7f_f8e9f715489b.slice - libcontainer container kubepods-besteffort-pod1af325e3_7600_48af_bd7f_f8e9f715489b.slice. Jan 28 01:00:33.888981 systemd[1]: Created slice kubepods-besteffort-pod215504c8_12e3_45d1_b60d_0c358a1645a5.slice - libcontainer container kubepods-besteffort-pod215504c8_12e3_45d1_b60d_0c358a1645a5.slice. Jan 28 01:00:33.895879 kubelet[2692]: I0128 01:00:33.895820 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/215504c8-12e3-45d1-b60d-0c358a1645a5-calico-apiserver-certs\") pod \"calico-apiserver-6768b4f5db-5thvw\" (UID: \"215504c8-12e3-45d1-b60d-0c358a1645a5\") " pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" Jan 28 01:00:33.896061 kubelet[2692]: I0128 01:00:33.895884 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq8lv\" (UniqueName: \"kubernetes.io/projected/054a4d87-77d7-4fd5-ba18-4966e01b6356-kube-api-access-rq8lv\") pod \"coredns-66bc5c9577-cj4z5\" (UID: \"054a4d87-77d7-4fd5-ba18-4966e01b6356\") " pod="kube-system/coredns-66bc5c9577-cj4z5" Jan 28 01:00:33.896061 kubelet[2692]: I0128 01:00:33.895917 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chp6\" (UniqueName: \"kubernetes.io/projected/628696b9-5871-452c-9749-f01c86f7c8e5-kube-api-access-4chp6\") pod \"coredns-66bc5c9577-2dnhz\" (UID: \"628696b9-5871-452c-9749-f01c86f7c8e5\") " pod="kube-system/coredns-66bc5c9577-2dnhz" Jan 28 01:00:33.896061 kubelet[2692]: I0128 01:00:33.895954 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb7cb13d-31ca-4384-944f-1754705dfa3e-config\") pod \"goldmane-7c778bb748-9r9k6\" (UID: \"eb7cb13d-31ca-4384-944f-1754705dfa3e\") " pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:33.896061 kubelet[2692]: I0128 01:00:33.895987 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbt5\" (UniqueName: \"kubernetes.io/projected/eb7cb13d-31ca-4384-944f-1754705dfa3e-kube-api-access-prbt5\") pod \"goldmane-7c778bb748-9r9k6\" (UID: \"eb7cb13d-31ca-4384-944f-1754705dfa3e\") " 
pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:33.896061 kubelet[2692]: I0128 01:00:33.896016 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/054a4d87-77d7-4fd5-ba18-4966e01b6356-config-volume\") pod \"coredns-66bc5c9577-cj4z5\" (UID: \"054a4d87-77d7-4fd5-ba18-4966e01b6356\") " pod="kube-system/coredns-66bc5c9577-cj4z5" Jan 28 01:00:33.899144 kubelet[2692]: I0128 01:00:33.896044 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgkbp\" (UniqueName: \"kubernetes.io/projected/215504c8-12e3-45d1-b60d-0c358a1645a5-kube-api-access-pgkbp\") pod \"calico-apiserver-6768b4f5db-5thvw\" (UID: \"215504c8-12e3-45d1-b60d-0c358a1645a5\") " pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" Jan 28 01:00:33.899144 kubelet[2692]: I0128 01:00:33.896075 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb7cb13d-31ca-4384-944f-1754705dfa3e-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-9r9k6\" (UID: \"eb7cb13d-31ca-4384-944f-1754705dfa3e\") " pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:33.899144 kubelet[2692]: I0128 01:00:33.896116 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4ef09ae9-4abf-45ab-835f-f8b9901cd23b-calico-apiserver-certs\") pod \"calico-apiserver-6768b4f5db-r4vpr\" (UID: \"4ef09ae9-4abf-45ab-835f-f8b9901cd23b\") " pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" Jan 28 01:00:33.899144 kubelet[2692]: I0128 01:00:33.896146 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1af325e3-7600-48af-bd7f-f8e9f715489b-tigera-ca-bundle\") pod \"calico-kube-controllers-7fcd5d865b-hrj24\" (UID: \"1af325e3-7600-48af-bd7f-f8e9f715489b\") " pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" Jan 28 01:00:33.899144 kubelet[2692]: I0128 01:00:33.896174 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/628696b9-5871-452c-9749-f01c86f7c8e5-config-volume\") pod \"coredns-66bc5c9577-2dnhz\" (UID: \"628696b9-5871-452c-9749-f01c86f7c8e5\") " pod="kube-system/coredns-66bc5c9577-2dnhz" Jan 28 01:00:33.899466 kubelet[2692]: I0128 01:00:33.896212 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76v5q\" (UniqueName: \"kubernetes.io/projected/4ef09ae9-4abf-45ab-835f-f8b9901cd23b-kube-api-access-76v5q\") pod \"calico-apiserver-6768b4f5db-r4vpr\" (UID: \"4ef09ae9-4abf-45ab-835f-f8b9901cd23b\") " pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" Jan 28 01:00:33.899466 kubelet[2692]: I0128 01:00:33.896263 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2925\" (UniqueName: \"kubernetes.io/projected/1af325e3-7600-48af-bd7f-f8e9f715489b-kube-api-access-p2925\") pod \"calico-kube-controllers-7fcd5d865b-hrj24\" (UID: \"1af325e3-7600-48af-bd7f-f8e9f715489b\") " pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" Jan 28 01:00:33.899466 kubelet[2692]: I0128 01:00:33.896340 2692 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eb7cb13d-31ca-4384-944f-1754705dfa3e-goldmane-key-pair\") pod \"goldmane-7c778bb748-9r9k6\" (UID: \"eb7cb13d-31ca-4384-944f-1754705dfa3e\") " pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:33.905770 systemd[1]: Created slice kubepods-besteffort-pod4ef09ae9_4abf_45ab_835f_f8b9901cd23b.slice - libcontainer container kubepods-besteffort-pod4ef09ae9_4abf_45ab_835f_f8b9901cd23b.slice. Jan 28 01:00:33.923616 systemd[1]: Created slice kubepods-besteffort-podeb7cb13d_31ca_4384_944f_1754705dfa3e.slice - libcontainer container kubepods-besteffort-podeb7cb13d_31ca_4384_944f_1754705dfa3e.slice. Jan 28 01:00:33.937253 systemd[1]: Created slice kubepods-besteffort-podc29ba3ff_ac29_4eda_99c2_465b53ee5c1d.slice - libcontainer container kubepods-besteffort-podc29ba3ff_ac29_4eda_99c2_465b53ee5c1d.slice. Jan 28 01:00:33.997324 kubelet[2692]: I0128 01:00:33.997034 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-ca-bundle\") pod \"whisker-795bbb5d6-zbqwm\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " pod="calico-system/whisker-795bbb5d6-zbqwm" Jan 28 01:00:33.997324 kubelet[2692]: I0128 01:00:33.997114 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsdqb\" (UniqueName: \"kubernetes.io/projected/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-kube-api-access-hsdqb\") pod \"whisker-795bbb5d6-zbqwm\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " pod="calico-system/whisker-795bbb5d6-zbqwm" Jan 28 01:00:34.016000 kubelet[2692]: I0128 01:00:34.008358 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-backend-key-pair\") pod \"whisker-795bbb5d6-zbqwm\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " pod="calico-system/whisker-795bbb5d6-zbqwm" Jan 28 01:00:34.167506 containerd[1501]: time="2026-01-28T01:00:34.167437992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cj4z5,Uid:054a4d87-77d7-4fd5-ba18-4966e01b6356,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:34.168928 containerd[1501]: time="2026-01-28T01:00:34.168881090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dnhz,Uid:628696b9-5871-452c-9749-f01c86f7c8e5,Namespace:kube-system,Attempt:0,}" Jan 28 01:00:34.185351 containerd[1501]: time="2026-01-28T01:00:34.185269983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5d865b-hrj24,Uid:1af325e3-7600-48af-bd7f-f8e9f715489b,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:34.204301 containerd[1501]: time="2026-01-28T01:00:34.203367739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-5thvw,Uid:215504c8-12e3-45d1-b60d-0c358a1645a5,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:00:34.228352 containerd[1501]: time="2026-01-28T01:00:34.227791273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-r4vpr,Uid:4ef09ae9-4abf-45ab-835f-f8b9901cd23b,Namespace:calico-apiserver,Attempt:0,}" Jan 28 01:00:34.289311 containerd[1501]: time="2026-01-28T01:00:34.288986301Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-795bbb5d6-zbqwm,Uid:c29ba3ff-ac29-4eda-99c2-465b53ee5c1d,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:34.311006 containerd[1501]: time="2026-01-28T01:00:34.310469850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9r9k6,Uid:eb7cb13d-31ca-4384-944f-1754705dfa3e,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:34.362832 systemd[1]: Created slice kubepods-besteffort-podd6fe3f19_c2cb_4440_ac98_4f17244eae9f.slice - libcontainer container kubepods-besteffort-podd6fe3f19_c2cb_4440_ac98_4f17244eae9f.slice. Jan 28 01:00:34.382551 containerd[1501]: time="2026-01-28T01:00:34.381139541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8h92,Uid:d6fe3f19-c2cb-4440-ac98-4f17244eae9f,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:34.645610 containerd[1501]: time="2026-01-28T01:00:34.644928997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 28 01:00:35.031895 containerd[1501]: time="2026-01-28T01:00:35.031807830Z" level=error msg="Failed to destroy network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.036894 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4-shm.mount: Deactivated successfully. Jan 28 01:00:35.038835 containerd[1501]: time="2026-01-28T01:00:35.038464335Z" level=error msg="Failed to destroy network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.051141 containerd[1501]: time="2026-01-28T01:00:35.049644872Z" level=error msg="Failed to destroy network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.051141 containerd[1501]: time="2026-01-28T01:00:35.050144031Z" level=error msg="encountered an error cleaning up failed sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.051141 containerd[1501]: time="2026-01-28T01:00:35.050233154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5d865b-hrj24,Uid:1af325e3-7600-48af-bd7f-f8e9f715489b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.063273 containerd[1501]: time="2026-01-28T01:00:35.063198927Z" level=error msg="Failed to destroy network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.064706 containerd[1501]: time="2026-01-28T01:00:35.064665672Z" level=error msg="encountered an error cleaning up failed sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.064799 containerd[1501]: time="2026-01-28T01:00:35.064744153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-5thvw,Uid:215504c8-12e3-45d1-b60d-0c358a1645a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.064919 containerd[1501]: time="2026-01-28T01:00:35.064841738Z" level=error msg="encountered an error cleaning up failed sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.064919 containerd[1501]: time="2026-01-28T01:00:35.064895174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dnhz,Uid:628696b9-5871-452c-9749-f01c86f7c8e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.065106 containerd[1501]: time="2026-01-28T01:00:35.064969315Z" level=error msg="encountered an error cleaning up failed sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.065106 containerd[1501]: time="2026-01-28T01:00:35.065015028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9r9k6,Uid:eb7cb13d-31ca-4384-944f-1754705dfa3e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.066310 kubelet[2692]: E0128 01:00:35.065580 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 28 01:00:35.066310 kubelet[2692]: E0128 01:00:35.065719 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:35.066310 kubelet[2692]: E0128 01:00:35.065785 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-9r9k6" Jan 28 01:00:35.069095 kubelet[2692]: E0128 01:00:35.065913 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-9r9k6_calico-system(eb7cb13d-31ca-4384-944f-1754705dfa3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-9r9k6_calico-system(eb7cb13d-31ca-4384-944f-1754705dfa3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:00:35.069095 kubelet[2692]: E0128 01:00:35.066014 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.069095 kubelet[2692]: E0128 01:00:35.066048 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" Jan 28 01:00:35.070456 containerd[1501]: time="2026-01-28T01:00:35.067723938Z" level=error msg="Failed to destroy network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.070456 containerd[1501]: time="2026-01-28T01:00:35.067726190Z" level=error msg="Failed to destroy network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:00:35.070594 kubelet[2692]: E0128 01:00:35.066070 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" Jan 28 01:00:35.070594 kubelet[2692]: E0128 01:00:35.066146 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:00:35.070594 kubelet[2692]: E0128 01:00:35.066236 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.070805 kubelet[2692]: E0128 01:00:35.067134 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.070805 kubelet[2692]: E0128 01:00:35.067201 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2dnhz" Jan 28 01:00:35.070805 kubelet[2692]: E0128 01:00:35.067235 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2dnhz" Jan 28 01:00:35.070965 kubelet[2692]: E0128 01:00:35.067438 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2dnhz_kube-system(628696b9-5871-452c-9749-f01c86f7c8e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-66bc5c9577-2dnhz_kube-system(628696b9-5871-452c-9749-f01c86f7c8e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2dnhz" podUID="628696b9-5871-452c-9749-f01c86f7c8e5" Jan 28 01:00:35.070965 kubelet[2692]: E0128 01:00:35.066269 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" Jan 28 01:00:35.070965 kubelet[2692]: E0128 01:00:35.069996 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" Jan 28 01:00:35.073661 kubelet[2692]: E0128 01:00:35.072068 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6768b4f5db-5thvw_calico-apiserver(215504c8-12e3-45d1-b60d-0c358a1645a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6768b4f5db-5thvw_calico-apiserver(215504c8-12e3-45d1-b60d-0c358a1645a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:00:35.073761 containerd[1501]: time="2026-01-28T01:00:35.073706725Z" level=error msg="encountered an error cleaning up failed sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.074532 containerd[1501]: time="2026-01-28T01:00:35.073765720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-795bbb5d6-zbqwm,Uid:c29ba3ff-ac29-4eda-99c2-465b53ee5c1d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.074629 kubelet[2692]: E0128 01:00:35.073968 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.074629 kubelet[2692]: E0128 01:00:35.074013 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-795bbb5d6-zbqwm" Jan 28 01:00:35.074629 kubelet[2692]: E0128 01:00:35.074052 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-795bbb5d6-zbqwm" Jan 28 01:00:35.074786 kubelet[2692]: E0128 01:00:35.074109 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-795bbb5d6-zbqwm_calico-system(c29ba3ff-ac29-4eda-99c2-465b53ee5c1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-795bbb5d6-zbqwm_calico-system(c29ba3ff-ac29-4eda-99c2-465b53ee5c1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-795bbb5d6-zbqwm" podUID="c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" Jan 28 01:00:35.075236 containerd[1501]: time="2026-01-28T01:00:35.075045544Z" level=error msg="encountered an error cleaning up failed sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.075364 containerd[1501]: time="2026-01-28T01:00:35.075257737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cj4z5,Uid:054a4d87-77d7-4fd5-ba18-4966e01b6356,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.075787 kubelet[2692]: E0128 01:00:35.075505 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.075787 kubelet[2692]: E0128 01:00:35.075579 2692 kuberuntime_sandbox.go:71] 
"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cj4z5" Jan 28 01:00:35.076332 kubelet[2692]: E0128 01:00:35.075939 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cj4z5" Jan 28 01:00:35.076332 kubelet[2692]: E0128 01:00:35.076037 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cj4z5_kube-system(054a4d87-77d7-4fd5-ba18-4966e01b6356)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cj4z5_kube-system(054a4d87-77d7-4fd5-ba18-4966e01b6356)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cj4z5" podUID="054a4d87-77d7-4fd5-ba18-4966e01b6356" Jan 28 01:00:35.094091 containerd[1501]: time="2026-01-28T01:00:35.094030551Z" level=error msg="Failed to destroy network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.094834 containerd[1501]: time="2026-01-28T01:00:35.094729954Z" level=error msg="encountered an error cleaning up failed sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.095424 containerd[1501]: time="2026-01-28T01:00:35.094825557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8h92,Uid:d6fe3f19-c2cb-4440-ac98-4f17244eae9f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.095525 kubelet[2692]: E0128 01:00:35.095138 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.095525 kubelet[2692]: E0128 
01:00:35.095217 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:35.095525 kubelet[2692]: E0128 01:00:35.095250 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v8h92" Jan 28 01:00:35.096896 kubelet[2692]: E0128 01:00:35.096732 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:35.113974 containerd[1501]: time="2026-01-28T01:00:35.113903817Z" level=error msg="Failed to destroy network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.114561 containerd[1501]: time="2026-01-28T01:00:35.114513840Z" level=error msg="encountered an error cleaning up failed sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.114650 containerd[1501]: time="2026-01-28T01:00:35.114599772Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-r4vpr,Uid:4ef09ae9-4abf-45ab-835f-f8b9901cd23b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.114966 kubelet[2692]: E0128 01:00:35.114917 2692 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 01:00:35.115066 kubelet[2692]: E0128 01:00:35.115000 2692 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" Jan 28 01:00:35.115066 kubelet[2692]: E0128 01:00:35.115032 2692 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" Jan 28 01:00:35.115198 kubelet[2692]: E0128 01:00:35.115103 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:00:35.498733 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe-shm.mount: Deactivated successfully. Jan 28 01:00:35.498933 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5-shm.mount: Deactivated successfully. Jan 28 01:00:35.499047 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a-shm.mount: Deactivated successfully. Jan 28 01:00:35.499152 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d-shm.mount: Deactivated successfully. Jan 28 01:00:35.499274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41-shm.mount: Deactivated successfully. Jan 28 01:00:35.499420 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb-shm.mount: Deactivated successfully. Jan 28 01:00:35.499542 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6-shm.mount: Deactivated successfully. 
Jan 28 01:00:35.634521 kubelet[2692]: I0128 01:00:35.633916 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:00:35.638497 kubelet[2692]: I0128 01:00:35.637791 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:35.643118 containerd[1501]: time="2026-01-28T01:00:35.642613394Z" level=info msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" Jan 28 01:00:35.645064 containerd[1501]: time="2026-01-28T01:00:35.643795496Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:00:35.645224 containerd[1501]: time="2026-01-28T01:00:35.645187218Z" level=info msg="Ensure that sandbox 4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a in task-service has been cleanup successfully" Jan 28 01:00:35.645594 containerd[1501]: time="2026-01-28T01:00:35.645555071Z" level=info msg="Ensure that sandbox 598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4 in task-service has been cleanup successfully" Jan 28 01:00:35.679404 kubelet[2692]: I0128 01:00:35.678248 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:35.679953 containerd[1501]: time="2026-01-28T01:00:35.679905565Z" level=info msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" Jan 28 01:00:35.680469 containerd[1501]: time="2026-01-28T01:00:35.680437378Z" level=info msg="Ensure that sandbox 60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5 in task-service has been cleanup successfully" Jan 28 01:00:35.685651 kubelet[2692]: I0128 01:00:35.684903 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:35.691184 containerd[1501]: time="2026-01-28T01:00:35.689681430Z" level=info msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" Jan 28 01:00:35.693548 containerd[1501]: time="2026-01-28T01:00:35.692632444Z" level=info msg="Ensure that sandbox 5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb in task-service has been cleanup successfully" Jan 28 01:00:35.693686 kubelet[2692]: I0128 01:00:35.693316 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:00:35.696057 containerd[1501]: time="2026-01-28T01:00:35.695971275Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:00:35.698198 containerd[1501]: time="2026-01-28T01:00:35.697706907Z" level=info msg="Ensure that sandbox f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe in task-service has been cleanup successfully" Jan 28 01:00:35.709834 kubelet[2692]: I0128 01:00:35.709784 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:35.715216 containerd[1501]: time="2026-01-28T01:00:35.715103288Z" level=info msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" Jan 28 01:00:35.717709 
containerd[1501]: time="2026-01-28T01:00:35.717631733Z" level=info msg="Ensure that sandbox 006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d in task-service has been cleanup successfully" Jan 28 01:00:35.723504 kubelet[2692]: I0128 01:00:35.722340 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:35.724527 containerd[1501]: time="2026-01-28T01:00:35.723444735Z" level=info msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" Jan 28 01:00:35.724527 containerd[1501]: time="2026-01-28T01:00:35.724073002Z" level=info msg="Ensure that sandbox a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6 in task-service has been cleanup successfully" Jan 28 01:00:35.739958 kubelet[2692]: I0128 01:00:35.739906 2692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:35.743717 containerd[1501]: time="2026-01-28T01:00:35.743175766Z" level=info msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" Jan 28 01:00:35.746318 containerd[1501]: time="2026-01-28T01:00:35.746167381Z" level=info msg="Ensure that sandbox 5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41 in task-service has been cleanup successfully" Jan 28 01:00:35.949716 containerd[1501]: time="2026-01-28T01:00:35.949654600Z" level=error msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" failed" error="failed to destroy network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.950498 kubelet[2692]: E0128 01:00:35.950414 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:35.951599 kubelet[2692]: E0128 01:00:35.950788 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a"} Jan 28 01:00:35.951599 kubelet[2692]: E0128 01:00:35.951412 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:35.951599 kubelet[2692]: E0128 01:00:35.951529 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-795bbb5d6-zbqwm" podUID="c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" Jan 28 01:00:35.960707 containerd[1501]: time="2026-01-28T01:00:35.954555542Z" level=error msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" failed" error="failed to destroy network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.960909 kubelet[2692]: E0128 01:00:35.954810 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:35.960909 kubelet[2692]: E0128 01:00:35.954876 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41"} Jan 28 01:00:35.960909 kubelet[2692]: E0128 01:00:35.954930 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"215504c8-12e3-45d1-b60d-0c358a1645a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:35.960909 kubelet[2692]: E0128 01:00:35.954966 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"215504c8-12e3-45d1-b60d-0c358a1645a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:00:35.961706 containerd[1501]: time="2026-01-28T01:00:35.961529694Z" level=error msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" failed" error="failed to destroy network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.961872 kubelet[2692]: E0128 01:00:35.961817 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:35.961984 kubelet[2692]: E0128 01:00:35.961890 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6"} Jan 28 01:00:35.961984 kubelet[2692]: E0128 01:00:35.961937 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"054a4d87-77d7-4fd5-ba18-4966e01b6356\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:35.962172 kubelet[2692]: E0128 01:00:35.961975 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"054a4d87-77d7-4fd5-ba18-4966e01b6356\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cj4z5" podUID="054a4d87-77d7-4fd5-ba18-4966e01b6356" Jan 28 01:00:35.981590 containerd[1501]: time="2026-01-28T01:00:35.981298678Z" level=error msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" failed" error="failed to destroy network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:35.982579 kubelet[2692]: E0128 01:00:35.982522 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:00:35.982675 kubelet[2692]: E0128 01:00:35.982598 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4"} Jan 28 01:00:35.982675 kubelet[2692]: E0128 01:00:35.982644 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb7cb13d-31ca-4384-944f-1754705dfa3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Jan 28 01:00:35.982848 kubelet[2692]: E0128 01:00:35.982684 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb7cb13d-31ca-4384-944f-1754705dfa3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:00:36.001344 containerd[1501]: time="2026-01-28T01:00:36.000649330Z" level=error msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" failed" error="failed to destroy network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:36.001836 kubelet[2692]: E0128 01:00:36.001650 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:00:36.001836 kubelet[2692]: E0128 01:00:36.001736 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe"} Jan 28 01:00:36.001836 kubelet[2692]: E0128 01:00:36.001784 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:36.002048 kubelet[2692]: E0128 01:00:36.001842 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:36.024978 containerd[1501]: time="2026-01-28T01:00:36.024492004Z" level=error msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" failed" error="failed to destroy network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 28 01:00:36.025370 kubelet[2692]: E0128 01:00:36.024928 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:36.025370 kubelet[2692]: E0128 01:00:36.025017 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb"} Jan 28 01:00:36.025370 kubelet[2692]: E0128 01:00:36.025065 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"628696b9-5871-452c-9749-f01c86f7c8e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:36.025370 kubelet[2692]: E0128 01:00:36.025122 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"628696b9-5871-452c-9749-f01c86f7c8e5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2dnhz" podUID="628696b9-5871-452c-9749-f01c86f7c8e5" Jan 28 01:00:36.027621 containerd[1501]: time="2026-01-28T01:00:36.027577734Z" level=error msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" failed" error="failed to destroy network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:36.027911 kubelet[2692]: E0128 01:00:36.027858 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:36.027986 kubelet[2692]: E0128 01:00:36.027914 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5"} Jan 28 01:00:36.027986 kubelet[2692]: E0128 01:00:36.027949 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4ef09ae9-4abf-45ab-835f-f8b9901cd23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:36.028111 kubelet[2692]: E0128 01:00:36.027995 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4ef09ae9-4abf-45ab-835f-f8b9901cd23b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:00:36.033811 containerd[1501]: time="2026-01-28T01:00:36.033746197Z" level=error msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" failed" error="failed to destroy network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:36.034888 kubelet[2692]: E0128 01:00:36.034053 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:36.034888 kubelet[2692]: E0128 01:00:36.034112 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d"} Jan 28 01:00:36.034888 kubelet[2692]: E0128 01:00:36.034152 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1af325e3-7600-48af-bd7f-f8e9f715489b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:36.034888 kubelet[2692]: E0128 01:00:36.034187 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1af325e3-7600-48af-bd7f-f8e9f715489b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:00:45.787922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1892193750.mount: Deactivated successfully. 
Jan 28 01:00:45.908323 containerd[1501]: time="2026-01-28T01:00:45.899508557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 01:00:45.910016 containerd[1501]: time="2026-01-28T01:00:45.909272445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:45.921769 containerd[1501]: time="2026-01-28T01:00:45.921704418Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:45.979124 containerd[1501]: time="2026-01-28T01:00:45.979065307Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 01:00:45.980219 containerd[1501]: time="2026-01-28T01:00:45.980025815Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.33501798s" Jan 28 01:00:45.980219 containerd[1501]: time="2026-01-28T01:00:45.980080314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 01:00:46.044491 containerd[1501]: time="2026-01-28T01:00:46.044311048Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 01:00:46.096976 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4150354348.mount: Deactivated successfully. Jan 28 01:00:46.108311 containerd[1501]: time="2026-01-28T01:00:46.108109611Z" level=info msg="CreateContainer within sandbox \"3d3f21e4782c6ac20e8e76ffd22594fe18ca0f7e0be473b3a05b8346c5f8d7fa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5789540232ae3f0c6c778d275bac09d9d2d14a22a32307bd4ae8ee2a174c1122\"" Jan 28 01:00:46.116310 containerd[1501]: time="2026-01-28T01:00:46.116239156Z" level=info msg="StartContainer for \"5789540232ae3f0c6c778d275bac09d9d2d14a22a32307bd4ae8ee2a174c1122\"" Jan 28 01:00:46.262889 systemd[1]: Started cri-containerd-5789540232ae3f0c6c778d275bac09d9d2d14a22a32307bd4ae8ee2a174c1122.scope - libcontainer container 5789540232ae3f0c6c778d275bac09d9d2d14a22a32307bd4ae8ee2a174c1122. 
Jan 28 01:00:46.323180 containerd[1501]: time="2026-01-28T01:00:46.323011985Z" level=info msg="StartContainer for \"5789540232ae3f0c6c778d275bac09d9d2d14a22a32307bd4ae8ee2a174c1122\" returns successfully" Jan 28 01:00:46.347003 containerd[1501]: time="2026-01-28T01:00:46.346124770Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:00:46.348041 containerd[1501]: time="2026-01-28T01:00:46.348010347Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:00:46.460456 containerd[1501]: time="2026-01-28T01:00:46.460361335Z" level=error msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" failed" error="failed to destroy network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:46.462319 containerd[1501]: time="2026-01-28T01:00:46.461447893Z" level=error msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" failed" error="failed to destroy network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 01:00:46.462411 kubelet[2692]: E0128 01:00:46.461685 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:00:46.462411 kubelet[2692]: E0128 01:00:46.461926 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe"} Jan 28 01:00:46.462411 kubelet[2692]: E0128 01:00:46.461995 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:46.462411 kubelet[2692]: E0128 01:00:46.462075 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6fe3f19-c2cb-4440-ac98-4f17244eae9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:00:46.464371 kubelet[2692]: E0128 
01:00:46.461818 2692 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:00:46.464371 kubelet[2692]: E0128 01:00:46.462163 2692 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4"} Jan 28 01:00:46.464371 kubelet[2692]: E0128 01:00:46.462193 2692 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb7cb13d-31ca-4384-944f-1754705dfa3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 01:00:46.464371 kubelet[2692]: E0128 01:00:46.462220 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb7cb13d-31ca-4384-944f-1754705dfa3e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:00:46.627384 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 01:00:46.655515 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 28 01:00:46.871110 kubelet[2692]: I0128 01:00:46.866262 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7k5sr" podStartSLOduration=2.484947486 podStartE2EDuration="37.852480926s" podCreationTimestamp="2026-01-28 01:00:09 +0000 UTC" firstStartedPulling="2026-01-28 01:00:10.613946221 +0000 UTC m=+27.560083800" lastFinishedPulling="2026-01-28 01:00:45.981479648 +0000 UTC m=+62.927617240" observedRunningTime="2026-01-28 01:00:46.846887867 +0000 UTC m=+63.793025470" watchObservedRunningTime="2026-01-28 01:00:46.852480926 +0000 UTC m=+63.798618523" Jan 28 01:00:47.093642 containerd[1501]: time="2026-01-28T01:00:47.093374514Z" level=info msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.237 [INFO][3904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.237 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" iface="eth0" netns="/var/run/netns/cni-2449f9b7-ed59-5266-dfe8-c7f0effdaeee" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.238 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" iface="eth0" netns="/var/run/netns/cni-2449f9b7-ed59-5266-dfe8-c7f0effdaeee" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.239 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" iface="eth0" netns="/var/run/netns/cni-2449f9b7-ed59-5266-dfe8-c7f0effdaeee" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.239 [INFO][3904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.239 [INFO][3904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.504 [INFO][3911] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.507 [INFO][3911] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.508 [INFO][3911] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.526 [WARNING][3911] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.527 [INFO][3911] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.532 [INFO][3911] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:47.538495 containerd[1501]: 2026-01-28 01:00:47.535 [INFO][3904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:00:47.539436 containerd[1501]: time="2026-01-28T01:00:47.538740374Z" level=info msg="TearDown network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" successfully" Jan 28 01:00:47.539436 containerd[1501]: time="2026-01-28T01:00:47.538776762Z" level=info msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" returns successfully" Jan 28 01:00:47.543604 systemd[1]: run-netns-cni\x2d2449f9b7\x2ded59\x2d5266\x2ddfe8\x2dc7f0effdaeee.mount: Deactivated successfully. 
Jan 28 01:00:47.757323 kubelet[2692]: I0128 01:00:47.756736 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-backend-key-pair\") pod \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " Jan 28 01:00:47.757323 kubelet[2692]: I0128 01:00:47.756840 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsdqb\" (UniqueName: \"kubernetes.io/projected/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-kube-api-access-hsdqb\") pod \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " Jan 28 01:00:47.757323 kubelet[2692]: I0128 01:00:47.756910 2692 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-ca-bundle\") pod \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\" (UID: \"c29ba3ff-ac29-4eda-99c2-465b53ee5c1d\") " Jan 28 01:00:47.781578 systemd[1]: var-lib-kubelet-pods-c29ba3ff\x2dac29\x2d4eda\x2d99c2\x2d465b53ee5c1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhsdqb.mount: Deactivated successfully. Jan 28 01:00:47.781754 systemd[1]: var-lib-kubelet-pods-c29ba3ff\x2dac29\x2d4eda\x2d99c2\x2d465b53ee5c1d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 01:00:47.792619 kubelet[2692]: I0128 01:00:47.788846 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" (UID: "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 01:00:47.792619 kubelet[2692]: I0128 01:00:47.792498 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" (UID: "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 01:00:47.792941 kubelet[2692]: I0128 01:00:47.788855 2692 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-kube-api-access-hsdqb" (OuterVolumeSpecName: "kube-api-access-hsdqb") pod "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" (UID: "c29ba3ff-ac29-4eda-99c2-465b53ee5c1d"). InnerVolumeSpecName "kube-api-access-hsdqb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 01:00:47.831145 systemd[1]: Removed slice kubepods-besteffort-podc29ba3ff_ac29_4eda_99c2_465b53ee5c1d.slice - libcontainer container kubepods-besteffort-podc29ba3ff_ac29_4eda_99c2_465b53ee5c1d.slice. 
Jan 28 01:00:47.858879 kubelet[2692]: I0128 01:00:47.858815 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-ca-bundle\") on node \"srv-8h12l.gb1.brightbox.com\" DevicePath \"\"" Jan 28 01:00:47.858879 kubelet[2692]: I0128 01:00:47.858860 2692 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-whisker-backend-key-pair\") on node \"srv-8h12l.gb1.brightbox.com\" DevicePath \"\"" Jan 28 01:00:47.858879 kubelet[2692]: I0128 01:00:47.858877 2692 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hsdqb\" (UniqueName: \"kubernetes.io/projected/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d-kube-api-access-hsdqb\") on node \"srv-8h12l.gb1.brightbox.com\" DevicePath \"\"" Jan 28 01:00:47.971617 systemd[1]: Created slice kubepods-besteffort-pod95e6d4a0_89ab_461c_a749_32d8a8aa1de6.slice - libcontainer container kubepods-besteffort-pod95e6d4a0_89ab_461c_a749_32d8a8aa1de6.slice. Jan 28 01:00:48.062360 kubelet[2692]: I0128 01:00:48.061552 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95e6d4a0-89ab-461c-a749-32d8a8aa1de6-whisker-ca-bundle\") pod \"whisker-694cd9684d-pgqjc\" (UID: \"95e6d4a0-89ab-461c-a749-32d8a8aa1de6\") " pod="calico-system/whisker-694cd9684d-pgqjc" Jan 28 01:00:48.062360 kubelet[2692]: I0128 01:00:48.061648 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mwmv\" (UniqueName: \"kubernetes.io/projected/95e6d4a0-89ab-461c-a749-32d8a8aa1de6-kube-api-access-2mwmv\") pod \"whisker-694cd9684d-pgqjc\" (UID: \"95e6d4a0-89ab-461c-a749-32d8a8aa1de6\") " pod="calico-system/whisker-694cd9684d-pgqjc" Jan 28 01:00:48.062360 kubelet[2692]: I0128 01:00:48.061691 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/95e6d4a0-89ab-461c-a749-32d8a8aa1de6-whisker-backend-key-pair\") pod \"whisker-694cd9684d-pgqjc\" (UID: \"95e6d4a0-89ab-461c-a749-32d8a8aa1de6\") " pod="calico-system/whisker-694cd9684d-pgqjc" Jan 28 01:00:48.304066 containerd[1501]: time="2026-01-28T01:00:48.303938676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-694cd9684d-pgqjc,Uid:95e6d4a0-89ab-461c-a749-32d8a8aa1de6,Namespace:calico-system,Attempt:0,}" Jan 28 01:00:48.356123 containerd[1501]: time="2026-01-28T01:00:48.354088023Z" level=info msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" Jan 28 01:00:48.356123 containerd[1501]: time="2026-01-28T01:00:48.355197811Z" level=info msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.550 [INFO][3987] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.550 [INFO][3987] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" iface="eth0" netns="/var/run/netns/cni-8ad8e59b-7d1a-1d4f-136e-250b0b2ff80f" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.552 [INFO][3987] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" iface="eth0" netns="/var/run/netns/cni-8ad8e59b-7d1a-1d4f-136e-250b0b2ff80f" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.553 [INFO][3987] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" iface="eth0" netns="/var/run/netns/cni-8ad8e59b-7d1a-1d4f-136e-250b0b2ff80f" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.553 [INFO][3987] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.555 [INFO][3987] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.648 [INFO][4011] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.649 [INFO][4011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.649 [INFO][4011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.669 [WARNING][4011] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.669 [INFO][4011] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.679 [INFO][4011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:48.703006 containerd[1501]: 2026-01-28 01:00:48.693 [INFO][3987] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:00:48.704649 containerd[1501]: time="2026-01-28T01:00:48.704464435Z" level=info msg="TearDown network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" successfully" Jan 28 01:00:48.704649 containerd[1501]: time="2026-01-28T01:00:48.704522283Z" level=info msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" returns successfully" Jan 28 01:00:48.712504 containerd[1501]: time="2026-01-28T01:00:48.712058775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-r4vpr,Uid:4ef09ae9-4abf-45ab-835f-f8b9901cd23b,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:00:48.795958 systemd[1]: run-netns-cni\x2d8ad8e59b\x2d7d1a\x2d1d4f\x2d136e\x2d250b0b2ff80f.mount: Deactivated successfully. Jan 28 01:00:48.877794 systemd-networkd[1431]: cali16b2eb6aaee: Link UP Jan 28 01:00:48.882571 systemd-networkd[1431]: cali16b2eb6aaee: Gained carrier Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.530 [INFO][3985] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.531 [INFO][3985] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" iface="eth0" netns="/var/run/netns/cni-578c080a-a8d9-05dc-baa8-74f48b26e461" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.537 [INFO][3985] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" iface="eth0" netns="/var/run/netns/cni-578c080a-a8d9-05dc-baa8-74f48b26e461" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.539 [INFO][3985] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" iface="eth0" netns="/var/run/netns/cni-578c080a-a8d9-05dc-baa8-74f48b26e461" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.539 [INFO][3985] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.539 [INFO][3985] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.671 [INFO][4005] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.671 [INFO][4005] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.827 [INFO][4005] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.871 [WARNING][4005] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.871 [INFO][4005] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.880 [INFO][4005] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:48.940833 containerd[1501]: 2026-01-28 01:00:48.922 [INFO][3985] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:00:48.952349 systemd[1]: run-netns-cni\x2d578c080a\x2da8d9\x2d05dc\x2dbaa8\x2d74f48b26e461.mount: Deactivated successfully. Jan 28 01:00:48.957558 containerd[1501]: time="2026-01-28T01:00:48.948911415Z" level=info msg="TearDown network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" successfully" Jan 28 01:00:48.957558 containerd[1501]: time="2026-01-28T01:00:48.957368329Z" level=info msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" returns successfully" Jan 28 01:00:48.977330 containerd[1501]: time="2026-01-28T01:00:48.976751810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5d865b-hrj24,Uid:1af325e3-7600-48af-bd7f-f8e9f715489b,Namespace:calico-system,Attempt:1,}" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.426 [INFO][3959] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.501 [INFO][3959] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0 whisker-694cd9684d- calico-system 95e6d4a0-89ab-461c-a749-32d8a8aa1de6 984 0 2026-01-28 01:00:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:694cd9684d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com whisker-694cd9684d-pgqjc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali16b2eb6aaee [] [] }} ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.501 [INFO][3959] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.646 [INFO][4000] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" 
HandleID="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.649 [INFO][4000] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" HandleID="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125c80), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"whisker-694cd9684d-pgqjc", "timestamp":"2026-01-28 01:00:48.646390753 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.649 [INFO][4000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.683 [INFO][4000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.684 [INFO][4000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.708 [INFO][4000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.740 [INFO][4000] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.750 [INFO][4000] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.757 [INFO][4000] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.771 [INFO][4000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.771 [INFO][4000] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.776 [INFO][4000] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62 Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.800 [INFO][4000] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.826 [INFO][4000] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.1/26] block=192.168.113.0/26 handle="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 
containerd[1501]: 2026-01-28 01:00:48.826 [INFO][4000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.1/26] handle="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.827 [INFO][4000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:48.990722 containerd[1501]: 2026-01-28 01:00:48.827 [INFO][4000] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.1/26] IPv6=[] ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" HandleID="k8s-pod-network.d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.831 [INFO][3959] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0", GenerateName:"whisker-694cd9684d-", Namespace:"calico-system", SelfLink:"", UID:"95e6d4a0-89ab-461c-a749-32d8a8aa1de6", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"694cd9684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"whisker-694cd9684d-pgqjc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16b2eb6aaee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.832 [INFO][3959] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.1/32] ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.832 [INFO][3959] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16b2eb6aaee ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.885 [INFO][3959] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" 
Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.895 [INFO][3959] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0", GenerateName:"whisker-694cd9684d-", Namespace:"calico-system", SelfLink:"", UID:"95e6d4a0-89ab-461c-a749-32d8a8aa1de6", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"694cd9684d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62", Pod:"whisker-694cd9684d-pgqjc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.113.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali16b2eb6aaee", MAC:"ca:ab:20:ec:f1:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:48.995270 containerd[1501]: 2026-01-28 01:00:48.983 [INFO][3959] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62" Namespace="calico-system" Pod="whisker-694cd9684d-pgqjc" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--694cd9684d--pgqjc-eth0" Jan 28 01:00:49.144696 containerd[1501]: time="2026-01-28T01:00:49.144435855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:49.144696 containerd[1501]: time="2026-01-28T01:00:49.144622232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:49.144696 containerd[1501]: time="2026-01-28T01:00:49.144649936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.145083 containerd[1501]: time="2026-01-28T01:00:49.144883305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.232067 systemd[1]: Started cri-containerd-d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62.scope - libcontainer container d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62. 
Jan 28 01:00:49.345682 systemd-networkd[1431]: calia7150bbe7d8: Link UP Jan 28 01:00:49.346089 systemd-networkd[1431]: calia7150bbe7d8: Gained carrier Jan 28 01:00:49.358005 containerd[1501]: time="2026-01-28T01:00:49.356848476Z" level=info msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" Jan 28 01:00:49.376022 kubelet[2692]: I0128 01:00:49.375939 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c29ba3ff-ac29-4eda-99c2-465b53ee5c1d" path="/var/lib/kubelet/pods/c29ba3ff-ac29-4eda-99c2-465b53ee5c1d/volumes" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:48.896 [INFO][4073] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:48.971 [INFO][4073] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0 calico-apiserver-6768b4f5db- calico-apiserver 4ef09ae9-4abf-45ab-835f-f8b9901cd23b 991 0 2026-01-28 01:00:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6768b4f5db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com calico-apiserver-6768b4f5db-r4vpr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia7150bbe7d8 [] [] }} ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:48.973 [INFO][4073] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.172 [INFO][4120] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" HandleID="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.176 [INFO][4120] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" HandleID="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e510), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-8h12l.gb1.brightbox.com", "pod":"calico-apiserver-6768b4f5db-r4vpr", "timestamp":"2026-01-28 01:00:49.172406442 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.181 [INFO][4120] ipam/ipam_plugin.go 
377: About to acquire host-wide IPAM lock. Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.181 [INFO][4120] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.181 [INFO][4120] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.205 [INFO][4120] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.231 [INFO][4120] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.257 [INFO][4120] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.261 [INFO][4120] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.275 [INFO][4120] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.275 [INFO][4120] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.281 [INFO][4120] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5 Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.299 [INFO][4120] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.325 [INFO][4120] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.2/26] block=192.168.113.0/26 handle="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.325 [INFO][4120] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.2/26] handle="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.325 [INFO][4120] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:00:49.417069 containerd[1501]: 2026-01-28 01:00:49.325 [INFO][4120] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.2/26] IPv6=[] ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" HandleID="k8s-pod-network.03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.331 [INFO][4073] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef09ae9-4abf-45ab-835f-f8b9901cd23b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6768b4f5db-r4vpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7150bbe7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.334 [INFO][4073] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.2/32] ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.334 [INFO][4073] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7150bbe7d8 ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.344 [INFO][4073] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.352 
[INFO][4073] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef09ae9-4abf-45ab-835f-f8b9901cd23b", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5", Pod:"calico-apiserver-6768b4f5db-r4vpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7150bbe7d8", MAC:"26:9a:32:fd:e6:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:49.421017 containerd[1501]: 2026-01-28 01:00:49.397 [INFO][4073] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-r4vpr" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:00:49.522659 containerd[1501]: time="2026-01-28T01:00:49.521480870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:49.522659 containerd[1501]: time="2026-01-28T01:00:49.522465820Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:49.524644 containerd[1501]: time="2026-01-28T01:00:49.523066987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.525348 containerd[1501]: time="2026-01-28T01:00:49.524957749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.612578 systemd[1]: Started cri-containerd-03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5.scope - libcontainer container 03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5. 
Jan 28 01:00:49.685695 containerd[1501]: time="2026-01-28T01:00:49.685644417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-694cd9684d-pgqjc,Uid:95e6d4a0-89ab-461c-a749-32d8a8aa1de6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d24be9ca58f7c9bed75a1fa5c5c0e3dfb70865c8f33ce7851e3059972a2b8a62\"" Jan 28 01:00:49.687297 systemd-networkd[1431]: calied3f4408309: Link UP Jan 28 01:00:49.690138 systemd-networkd[1431]: calied3f4408309: Gained carrier Jan 28 01:00:49.716122 containerd[1501]: time="2026-01-28T01:00:49.714740048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.218 [INFO][4133] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.264 [INFO][4133] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0 calico-kube-controllers-7fcd5d865b- calico-system 1af325e3-7600-48af-bd7f-f8e9f715489b 989 0 2026-01-28 01:00:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fcd5d865b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com calico-kube-controllers-7fcd5d865b-hrj24 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calied3f4408309 [] [] }} ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.264 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.484 [INFO][4182] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" HandleID="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.501 [INFO][4182] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" HandleID="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003abee0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"calico-kube-controllers-7fcd5d865b-hrj24", "timestamp":"2026-01-28 01:00:49.484079379 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.501 [INFO][4182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.501 [INFO][4182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.501 [INFO][4182] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.532 [INFO][4182] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.552 [INFO][4182] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.570 [INFO][4182] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.574 [INFO][4182] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.595 [INFO][4182] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.595 [INFO][4182] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.603 [INFO][4182] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2 Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.625 [INFO][4182] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.648 [INFO][4182] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.3/26] block=192.168.113.0/26 handle="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.648 [INFO][4182] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.3/26] handle="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.648 [INFO][4182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:00:49.725018 containerd[1501]: 2026-01-28 01:00:49.648 [INFO][4182] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.3/26] IPv6=[] ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" HandleID="k8s-pod-network.c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.657 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0", GenerateName:"calico-kube-controllers-7fcd5d865b-", Namespace:"calico-system", SelfLink:"", UID:"1af325e3-7600-48af-bd7f-f8e9f715489b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5d865b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7fcd5d865b-hrj24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied3f4408309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.666 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.3/32] ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.667 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied3f4408309 ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.688 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" 
WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.690 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0", GenerateName:"calico-kube-controllers-7fcd5d865b-", Namespace:"calico-system", SelfLink:"", UID:"1af325e3-7600-48af-bd7f-f8e9f715489b", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5d865b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2", Pod:"calico-kube-controllers-7fcd5d865b-hrj24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied3f4408309", MAC:"1e:63:08:8b:6f:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:49.726772 containerd[1501]: 2026-01-28 01:00:49.706 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2" Namespace="calico-system" Pod="calico-kube-controllers-7fcd5d865b-hrj24" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:00:49.779153 containerd[1501]: time="2026-01-28T01:00:49.777666516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:49.779153 containerd[1501]: time="2026-01-28T01:00:49.778730593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:49.779153 containerd[1501]: time="2026-01-28T01:00:49.778757467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.779153 containerd[1501]: time="2026-01-28T01:00:49.778882261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:49.833377 systemd[1]: Started cri-containerd-c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2.scope - libcontainer container c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2. Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.659 [INFO][4198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.659 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" iface="eth0" netns="/var/run/netns/cni-d9a1f4e9-980d-d21b-a442-60a0591aa263" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.659 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" iface="eth0" netns="/var/run/netns/cni-d9a1f4e9-980d-d21b-a442-60a0591aa263" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.660 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" iface="eth0" netns="/var/run/netns/cni-d9a1f4e9-980d-d21b-a442-60a0591aa263" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.660 [INFO][4198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.660 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.847 [INFO][4249] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.847 [INFO][4249] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.847 [INFO][4249] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.867 [WARNING][4249] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.868 [INFO][4249] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.874 [INFO][4249] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:49.885452 containerd[1501]: 2026-01-28 01:00:49.877 [INFO][4198] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:00:49.889696 containerd[1501]: time="2026-01-28T01:00:49.888374942Z" level=info msg="TearDown network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" successfully" Jan 28 01:00:49.889696 containerd[1501]: time="2026-01-28T01:00:49.888417495Z" level=info msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" returns successfully" Jan 28 01:00:49.897805 containerd[1501]: time="2026-01-28T01:00:49.897626093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cj4z5,Uid:054a4d87-77d7-4fd5-ba18-4966e01b6356,Namespace:kube-system,Attempt:1,}" Jan 28 01:00:49.898969 systemd[1]: run-netns-cni\x2dd9a1f4e9\x2d980d\x2dd21b\x2da442\x2d60a0591aa263.mount: Deactivated successfully. Jan 28 01:00:49.964886 containerd[1501]: time="2026-01-28T01:00:49.964779866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-r4vpr,Uid:4ef09ae9-4abf-45ab-835f-f8b9901cd23b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5\"" Jan 28 01:00:50.096718 containerd[1501]: time="2026-01-28T01:00:50.096552572Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:00:50.100690 containerd[1501]: time="2026-01-28T01:00:50.100365854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:00:50.128568 containerd[1501]: time="2026-01-28T01:00:50.100613930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:00:50.129220 containerd[1501]: time="2026-01-28T01:00:50.104729593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fcd5d865b-hrj24,Uid:1af325e3-7600-48af-bd7f-f8e9f715489b,Namespace:calico-system,Attempt:1,} returns sandbox id \"c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2\"" Jan 28 01:00:50.131160 kubelet[2692]: E0128 01:00:50.130582 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:00:50.131160 kubelet[2692]: E0128 01:00:50.130717 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:00:50.137223 containerd[1501]: time="2026-01-28T01:00:50.137175271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:00:50.137837 kubelet[2692]: E0128 01:00:50.137197 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:00:50.346570 systemd-networkd[1431]: cali4e4d3c24c3f: Link UP Jan 28 01:00:50.349205 systemd-networkd[1431]: cali4e4d3c24c3f: Gained carrier Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.020 [INFO][4308] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.051 [INFO][4308] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0 coredns-66bc5c9577- kube-system 054a4d87-77d7-4fd5-ba18-4966e01b6356 1001 0 2026-01-28 00:59:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com coredns-66bc5c9577-cj4z5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4e4d3c24c3f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.051 [INFO][4308] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.193 [INFO][4325] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" HandleID="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.193 [INFO][4325] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" HandleID="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002748c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"coredns-66bc5c9577-cj4z5", "timestamp":"2026-01-28 01:00:50.193606226 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.193 [INFO][4325] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.194 [INFO][4325] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.194 [INFO][4325] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.233 [INFO][4325] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.257 [INFO][4325] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.269 [INFO][4325] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.275 [INFO][4325] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.287 [INFO][4325] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.287 [INFO][4325] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.299 [INFO][4325] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809 Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.315 [INFO][4325] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.334 [INFO][4325] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.4/26] block=192.168.113.0/26 handle="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.335 [INFO][4325] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.4/26] handle="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.335 [INFO][4325] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:00:50.388205 containerd[1501]: 2026-01-28 01:00:50.335 [INFO][4325] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.4/26] IPv6=[] ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" HandleID="k8s-pod-network.6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.399007 containerd[1501]: 2026-01-28 01:00:50.338 [INFO][4308] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"054a4d87-77d7-4fd5-ba18-4966e01b6356", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-cj4z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e4d3c24c3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:50.399007 containerd[1501]: 2026-01-28 01:00:50.339 [INFO][4308] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.4/32] ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.399007 containerd[1501]: 2026-01-28 01:00:50.339 [INFO][4308] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e4d3c24c3f ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" 
WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.399007 containerd[1501]: 2026-01-28 01:00:50.347 [INFO][4308] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.399007 containerd[1501]: 2026-01-28 01:00:50.348 [INFO][4308] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"054a4d87-77d7-4fd5-ba18-4966e01b6356", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809", Pod:"coredns-66bc5c9577-cj4z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e4d3c24c3f", MAC:"ae:7c:ee:af:ea:64", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:50.399685 containerd[1501]: 2026-01-28 01:00:50.385 [INFO][4308] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809" Namespace="kube-system" Pod="coredns-66bc5c9577-cj4z5" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:00:50.441517 containerd[1501]: time="2026-01-28T01:00:50.439654261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:50.441517 containerd[1501]: time="2026-01-28T01:00:50.441057463Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:50.441517 containerd[1501]: time="2026-01-28T01:00:50.441076045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:50.441517 containerd[1501]: time="2026-01-28T01:00:50.441311624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:50.464600 systemd-networkd[1431]: cali16b2eb6aaee: Gained IPv6LL Jan 28 01:00:50.477014 systemd[1]: Started cri-containerd-6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809.scope - libcontainer container 6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809. Jan 28 01:00:50.479879 containerd[1501]: time="2026-01-28T01:00:50.479355888Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:00:50.482252 containerd[1501]: time="2026-01-28T01:00:50.481676614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:00:50.482252 containerd[1501]: time="2026-01-28T01:00:50.481816324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:00:50.482836 kubelet[2692]: E0128 01:00:50.482132 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:00:50.482836 kubelet[2692]: E0128 01:00:50.482642 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:00:50.486136 kubelet[2692]: E0128 01:00:50.483433 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:00:50.486136 kubelet[2692]: E0128 01:00:50.483547 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:00:50.487561 containerd[1501]: time="2026-01-28T01:00:50.486787531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:00:50.535433 kernel: bpftool[4407]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 01:00:50.595742 containerd[1501]: time="2026-01-28T01:00:50.595688766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cj4z5,Uid:054a4d87-77d7-4fd5-ba18-4966e01b6356,Namespace:kube-system,Attempt:1,} returns sandbox id \"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809\"" Jan 28 01:00:50.613039 containerd[1501]: time="2026-01-28T01:00:50.611505805Z" level=info msg="CreateContainer within sandbox \"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:00:50.654992 containerd[1501]: time="2026-01-28T01:00:50.654552295Z" level=info msg="CreateContainer within sandbox \"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee2eedbb3aa878b660f86d2b861b2c279f8df1e7c79e3ee69d117fa235e8a59\"" Jan 28 01:00:50.655862 containerd[1501]: time="2026-01-28T01:00:50.655779261Z" level=info msg="StartContainer for \"bee2eedbb3aa878b660f86d2b861b2c279f8df1e7c79e3ee69d117fa235e8a59\"" Jan 28 01:00:50.710679 systemd[1]: Started cri-containerd-bee2eedbb3aa878b660f86d2b861b2c279f8df1e7c79e3ee69d117fa235e8a59.scope - libcontainer container bee2eedbb3aa878b660f86d2b861b2c279f8df1e7c79e3ee69d117fa235e8a59. Jan 28 01:00:50.783862 containerd[1501]: time="2026-01-28T01:00:50.783760318Z" level=info msg="StartContainer for \"bee2eedbb3aa878b660f86d2b861b2c279f8df1e7c79e3ee69d117fa235e8a59\" returns successfully" Jan 28 01:00:50.808849 containerd[1501]: time="2026-01-28T01:00:50.808745938Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:00:50.820562 containerd[1501]: time="2026-01-28T01:00:50.820447260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:00:50.820745 containerd[1501]: time="2026-01-28T01:00:50.820629643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:00:50.821720 kubelet[2692]: E0128 01:00:50.821631 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:00:50.822024 kubelet[2692]: E0128 01:00:50.821729 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 
01:00:50.823487 kubelet[2692]: E0128 01:00:50.822689 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:00:50.823487 kubelet[2692]: E0128 01:00:50.822789 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:00:50.823720 containerd[1501]: time="2026-01-28T01:00:50.822835070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:00:50.856482 kubelet[2692]: E0128 01:00:50.856345 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:00:50.856482 kubelet[2692]: E0128 01:00:50.856349 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:00:50.922144 kubelet[2692]: I0128 01:00:50.922055 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cj4z5" podStartSLOduration=61.922010113 podStartE2EDuration="1m1.922010113s" podCreationTimestamp="2026-01-28 00:59:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:50.888089317 +0000 UTC m=+67.834226916" watchObservedRunningTime="2026-01-28 01:00:50.922010113 +0000 UTC m=+67.868147708" Jan 28 01:00:50.976534 systemd-networkd[1431]: calia7150bbe7d8: Gained IPv6LL Jan 28 01:00:51.121820 systemd-networkd[1431]: vxlan.calico: Link UP Jan 28 01:00:51.121833 systemd-networkd[1431]: vxlan.calico: Gained carrier Jan 28 01:00:51.168605 containerd[1501]: time="2026-01-28T01:00:51.168441565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:00:51.169743 containerd[1501]: time="2026-01-28T01:00:51.169639032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:00:51.169893 containerd[1501]: time="2026-01-28T01:00:51.169749762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:00:51.170617 kubelet[2692]: E0128 01:00:51.170185 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:00:51.170617 kubelet[2692]: E0128 01:00:51.170250 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:00:51.170617 kubelet[2692]: E0128 01:00:51.170410 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:00:51.170617 kubelet[2692]: E0128 01:00:51.170462 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:00:51.352441 containerd[1501]: time="2026-01-28T01:00:51.352307479Z" level=info msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" Jan 28 01:00:51.353896 containerd[1501]: time="2026-01-28T01:00:51.353862205Z" 
level=info msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" Jan 28 01:00:51.360485 systemd-networkd[1431]: calied3f4408309: Gained IPv6LL Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.467 [INFO][4518] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.469 [INFO][4518] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" iface="eth0" netns="/var/run/netns/cni-e3c1f502-6dc3-974b-dc99-29170d094639" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.470 [INFO][4518] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" iface="eth0" netns="/var/run/netns/cni-e3c1f502-6dc3-974b-dc99-29170d094639" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.472 [INFO][4518] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" iface="eth0" netns="/var/run/netns/cni-e3c1f502-6dc3-974b-dc99-29170d094639" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.472 [INFO][4518] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.472 [INFO][4518] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.542 [INFO][4531] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.544 [INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.544 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.558 [WARNING][4531] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.558 [INFO][4531] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.563 [INFO][4531] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:51.572816 containerd[1501]: 2026-01-28 01:00:51.568 [INFO][4518] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:00:51.577638 containerd[1501]: time="2026-01-28T01:00:51.574072005Z" level=info msg="TearDown network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" successfully" Jan 28 01:00:51.577638 containerd[1501]: time="2026-01-28T01:00:51.574110666Z" level=info msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" returns successfully" Jan 28 01:00:51.580585 containerd[1501]: time="2026-01-28T01:00:51.580543086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-5thvw,Uid:215504c8-12e3-45d1-b60d-0c358a1645a5,Namespace:calico-apiserver,Attempt:1,}" Jan 28 01:00:51.581026 systemd[1]: run-netns-cni\x2de3c1f502\x2d6dc3\x2d974b\x2ddc99\x2d29170d094639.mount: Deactivated successfully. Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.488 [INFO][4515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.489 [INFO][4515] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" iface="eth0" netns="/var/run/netns/cni-1ea636de-6bdc-b7e6-aaba-119551eb9c55" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.491 [INFO][4515] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" iface="eth0" netns="/var/run/netns/cni-1ea636de-6bdc-b7e6-aaba-119551eb9c55" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.491 [INFO][4515] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" iface="eth0" netns="/var/run/netns/cni-1ea636de-6bdc-b7e6-aaba-119551eb9c55" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.491 [INFO][4515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.491 [INFO][4515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.591 [INFO][4535] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.592 [INFO][4535] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.592 [INFO][4535] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.636 [WARNING][4535] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.636 [INFO][4535] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.646 [INFO][4535] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:00:51.661337 containerd[1501]: 2026-01-28 01:00:51.655 [INFO][4515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:00:51.662091 containerd[1501]: time="2026-01-28T01:00:51.661646375Z" level=info msg="TearDown network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" successfully" Jan 28 01:00:51.662091 containerd[1501]: time="2026-01-28T01:00:51.661685174Z" level=info msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" returns successfully" Jan 28 01:00:51.666636 systemd[1]: run-netns-cni\x2d1ea636de\x2d6bdc\x2db7e6\x2daaba\x2d119551eb9c55.mount: Deactivated successfully. Jan 28 01:00:51.671602 containerd[1501]: time="2026-01-28T01:00:51.671542345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dnhz,Uid:628696b9-5871-452c-9749-f01c86f7c8e5,Namespace:kube-system,Attempt:1,}" Jan 28 01:00:51.862699 kubelet[2692]: E0128 01:00:51.861748 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:00:51.868140 kubelet[2692]: E0128 01:00:51.867599 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:00:52.049491 systemd-networkd[1431]: calia122905f106: Link UP Jan 28 01:00:52.051909 systemd-networkd[1431]: calia122905f106: Gained carrier Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.751 [INFO][4547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0 calico-apiserver-6768b4f5db- calico-apiserver 
215504c8-12e3-45d1-b60d-0c358a1645a5 1043 0 2026-01-28 01:00:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6768b4f5db projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com calico-apiserver-6768b4f5db-5thvw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia122905f106 [] [] }} ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.751 [INFO][4547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.860 [INFO][4568] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" HandleID="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.862 [INFO][4568] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" HandleID="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-8h12l.gb1.brightbox.com", "pod":"calico-apiserver-6768b4f5db-5thvw", "timestamp":"2026-01-28 01:00:51.860167092 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.864 [INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.864 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.864 [INFO][4568] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.921 [INFO][4568] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.952 [INFO][4568] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.985 [INFO][4568] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:51.991 [INFO][4568] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.003 [INFO][4568] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.004 [INFO][4568] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.011 [INFO][4568] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148 Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.023 [INFO][4568] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.033 [INFO][4568] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.5/26] block=192.168.113.0/26 handle="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.034 [INFO][4568] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.5/26] handle="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.034 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:00:52.090696 containerd[1501]: 2026-01-28 01:00:52.034 [INFO][4568] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.5/26] IPv6=[] ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" HandleID="k8s-pod-network.126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.039 [INFO][4547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"215504c8-12e3-45d1-b60d-0c358a1645a5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6768b4f5db-5thvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia122905f106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.039 [INFO][4547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.5/32] ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.039 [INFO][4547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia122905f106 ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.054 [INFO][4547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.056 
[INFO][4547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"215504c8-12e3-45d1-b60d-0c358a1645a5", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148", Pod:"calico-apiserver-6768b4f5db-5thvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia122905f106", MAC:"e6:f6:90:0f:d6:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:52.093942 containerd[1501]: 2026-01-28 01:00:52.081 [INFO][4547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148" Namespace="calico-apiserver" Pod="calico-apiserver-6768b4f5db-5thvw" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:00:52.161892 containerd[1501]: time="2026-01-28T01:00:52.161063262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:52.161892 containerd[1501]: time="2026-01-28T01:00:52.161318286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:52.161892 containerd[1501]: time="2026-01-28T01:00:52.161398315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:52.167491 containerd[1501]: time="2026-01-28T01:00:52.164571742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:52.240427 systemd-networkd[1431]: cali04d819a8d0a: Link UP Jan 28 01:00:52.240922 systemd-networkd[1431]: cali04d819a8d0a: Gained carrier Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:51.823 [INFO][4557] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0 coredns-66bc5c9577- kube-system 628696b9-5871-452c-9749-f01c86f7c8e5 1044 0 2026-01-28 00:59:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com coredns-66bc5c9577-2dnhz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04d819a8d0a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:51.823 [INFO][4557] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:51.947 [INFO][4576] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" HandleID="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:51.948 [INFO][4576] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" HandleID="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"coredns-66bc5c9577-2dnhz", "timestamp":"2026-01-28 01:00:51.947511555 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:51.948 [INFO][4576] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.034 [INFO][4576] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.034 [INFO][4576] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.059 [INFO][4576] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.087 [INFO][4576] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.104 [INFO][4576] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.107 [INFO][4576] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.123 [INFO][4576] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.123 [INFO][4576] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.131 [INFO][4576] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24 Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.145 [INFO][4576] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.182 [INFO][4576] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.6/26] block=192.168.113.0/26 handle="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.182 [INFO][4576] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.6/26] handle="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.183 [INFO][4576] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
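The IPAM entries above show the Calico CNI plugin taking the host-wide IPAM lock, confirming this node's affinity to the block 192.168.113.0/26, and claiming 192.168.113.6/26 for the coredns pod. The sketch below is a minimal, illustrative model of that allocation step in Go, not Calico's implementation: assignments are serialized with a lock, each of the 64 ordinals in the /26 is tracked against the handle that claimed it, and the lowest free ordinal is handed out. The handle string used in main is a hypothetical shortened form of the handle IDs seen in the log.

package main

import (
	"fmt"
	"net/netip"
	"sync"
)

// block models one affine IPAM block such as 192.168.113.0/26:
// 64 ordinals, each either free or already claimed by a handle.
type block struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	prefix netip.Prefix
	used   map[int]string // ordinal -> handle that claimed it
}

func newBlock(cidr string) (*block, error) {
	p, err := netip.ParsePrefix(cidr)
	if err != nil {
		return nil, err
	}
	return &block{prefix: p, used: map[int]string{}}, nil
}

// assign claims the lowest free ordinal for handle and returns its address.
func (b *block) assign(handle string) (netip.Addr, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	size := 1 << (32 - b.prefix.Bits()) // 64 addresses in a /26
	addr := b.prefix.Addr()
	for ord := 0; ord < size; ord++ {
		if _, taken := b.used[ord]; !taken {
			b.used[ord] = handle
			return addr, nil
		}
		addr = addr.Next()
	}
	return netip.Addr{}, fmt.Errorf("block %s is full", b.prefix)
}

func main() {
	b, _ := newBlock("192.168.113.0/26")
	// Pretend ordinals 0-5 were claimed by earlier pods, as in the log.
	for i := 0; i < 6; i++ {
		b.assign(fmt.Sprintf("earlier-handle-%d", i))
	}
	ip, _ := b.assign("k8s-pod-network.cdd0e30b270f54db") // hypothetical shortened handle
	fmt.Println("assigned", ip)                           // prints 192.168.113.6
}

Under this toy model the next two workloads in the log would receive 192.168.113.7 and 192.168.113.8, which matches the assignments recorded further below.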
Jan 28 01:00:52.298312 containerd[1501]: 2026-01-28 01:00:52.183 [INFO][4576] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.6/26] IPv6=[] ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" HandleID="k8s-pod-network.cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.307073 containerd[1501]: 2026-01-28 01:00:52.203 [INFO][4557] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"628696b9-5871-452c-9749-f01c86f7c8e5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-2dnhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d819a8d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:52.307073 containerd[1501]: 2026-01-28 01:00:52.204 [INFO][4557] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.6/32] ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.307073 containerd[1501]: 2026-01-28 01:00:52.204 [INFO][4557] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04d819a8d0a ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" 
WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.307073 containerd[1501]: 2026-01-28 01:00:52.249 [INFO][4557] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.307073 containerd[1501]: 2026-01-28 01:00:52.251 [INFO][4557] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"628696b9-5871-452c-9749-f01c86f7c8e5", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24", Pod:"coredns-66bc5c9577-2dnhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d819a8d0a", MAC:"0a:4d:9f:84:cb:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:00:52.307622 containerd[1501]: 2026-01-28 01:00:52.269 [INFO][4557] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24" Namespace="kube-system" Pod="coredns-66bc5c9577-2dnhz" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:00:52.335573 systemd[1]: Started cri-containerd-126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148.scope - libcontainer container 
126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148. Jan 28 01:00:52.375615 containerd[1501]: time="2026-01-28T01:00:52.374846552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:00:52.375615 containerd[1501]: time="2026-01-28T01:00:52.374957235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:00:52.375615 containerd[1501]: time="2026-01-28T01:00:52.374983802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:52.375615 containerd[1501]: time="2026-01-28T01:00:52.375112924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:00:52.385630 systemd-networkd[1431]: cali4e4d3c24c3f: Gained IPv6LL Jan 28 01:00:52.420515 systemd[1]: Started cri-containerd-cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24.scope - libcontainer container cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24. Jan 28 01:00:52.499051 containerd[1501]: time="2026-01-28T01:00:52.498993128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2dnhz,Uid:628696b9-5871-452c-9749-f01c86f7c8e5,Namespace:kube-system,Attempt:1,} returns sandbox id \"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24\"" Jan 28 01:00:52.507349 containerd[1501]: time="2026-01-28T01:00:52.507130836Z" level=info msg="CreateContainer within sandbox \"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 01:00:52.542451 containerd[1501]: time="2026-01-28T01:00:52.542398010Z" level=info msg="CreateContainer within sandbox \"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55013a448e1d94fb1015f5b831ca1332686d22d6904e0152ff53cd7a4033b390\"" Jan 28 01:00:52.542825 containerd[1501]: time="2026-01-28T01:00:52.542553925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6768b4f5db-5thvw,Uid:215504c8-12e3-45d1-b60d-0c358a1645a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148\"" Jan 28 01:00:52.546145 containerd[1501]: time="2026-01-28T01:00:52.544409354Z" level=info msg="StartContainer for \"55013a448e1d94fb1015f5b831ca1332686d22d6904e0152ff53cd7a4033b390\"" Jan 28 01:00:52.565718 containerd[1501]: time="2026-01-28T01:00:52.565071220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:00:52.605543 systemd[1]: Started cri-containerd-55013a448e1d94fb1015f5b831ca1332686d22d6904e0152ff53cd7a4033b390.scope - libcontainer container 55013a448e1d94fb1015f5b831ca1332686d22d6904e0152ff53cd7a4033b390. 
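The dataplane entries above set host-side veth names such as cali04d819a8d0a and calia122905f106. One plausible way to produce names like these, assumed here purely for illustration and not necessarily Calico's actual scheme, is to hash a stable workload identifier and keep a short prefix so the result fits within the Linux 15-character interface-name limit:

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a stable interface name from a workload identifier.
// "cali" plus 11 hex characters stays within Linux's 15-usable-character
// interface-name limit. Illustrative scheme only, not necessarily Calico's.
func hostVethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical identifier; the real input is whatever the CNI plugin hashes.
	fmt.Println(hostVethName("kube-system/coredns-66bc5c9577-2dnhz/eth0"))
}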
Jan 28 01:00:52.660160 containerd[1501]: time="2026-01-28T01:00:52.660101844Z" level=info msg="StartContainer for \"55013a448e1d94fb1015f5b831ca1332686d22d6904e0152ff53cd7a4033b390\" returns successfully" Jan 28 01:00:52.891434 kubelet[2692]: I0128 01:00:52.885212 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2dnhz" podStartSLOduration=63.885190347 podStartE2EDuration="1m3.885190347s" podCreationTimestamp="2026-01-28 00:59:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 01:00:52.881712556 +0000 UTC m=+69.827850153" watchObservedRunningTime="2026-01-28 01:00:52.885190347 +0000 UTC m=+69.831327955" Jan 28 01:00:52.899260 containerd[1501]: time="2026-01-28T01:00:52.898748314Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:00:52.901340 containerd[1501]: time="2026-01-28T01:00:52.901252399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:00:52.901705 containerd[1501]: time="2026-01-28T01:00:52.901310247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:00:52.901823 kubelet[2692]: E0128 01:00:52.901757 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:00:52.901908 kubelet[2692]: E0128 01:00:52.901839 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:00:52.901981 kubelet[2692]: E0128 01:00:52.901947 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-5thvw_calico-apiserver(215504c8-12e3-45d1-b60d-0c358a1645a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:00:52.902041 kubelet[2692]: E0128 01:00:52.902010 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:00:53.024587 systemd-networkd[1431]: vxlan.calico: Gained IPv6LL Jan 28 01:00:53.793141 systemd-networkd[1431]: 
calia122905f106: Gained IPv6LL Jan 28 01:00:53.878190 kubelet[2692]: E0128 01:00:53.876489 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:00:53.920508 systemd-networkd[1431]: cali04d819a8d0a: Gained IPv6LL Jan 28 01:01:00.345729 containerd[1501]: time="2026-01-28T01:01:00.345033677Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.454 [INFO][4792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.454 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" iface="eth0" netns="/var/run/netns/cni-ac7d8fa7-f325-8d5b-b044-f92181bc50bc" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.454 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" iface="eth0" netns="/var/run/netns/cni-ac7d8fa7-f325-8d5b-b044-f92181bc50bc" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.455 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" iface="eth0" netns="/var/run/netns/cni-ac7d8fa7-f325-8d5b-b044-f92181bc50bc" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.455 [INFO][4792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.455 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.506 [INFO][4801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.506 [INFO][4801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.506 [INFO][4801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.528 [WARNING][4801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.528 [INFO][4801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.535 [INFO][4801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:00.540512 containerd[1501]: 2026-01-28 01:01:00.537 [INFO][4792] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:00.544984 containerd[1501]: time="2026-01-28T01:01:00.541094378Z" level=info msg="TearDown network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" successfully" Jan 28 01:01:00.544984 containerd[1501]: time="2026-01-28T01:01:00.541342396Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" returns successfully" Jan 28 01:01:00.551045 containerd[1501]: time="2026-01-28T01:01:00.550980503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9r9k6,Uid:eb7cb13d-31ca-4384-944f-1754705dfa3e,Namespace:calico-system,Attempt:1,}" Jan 28 01:01:00.553147 systemd[1]: run-netns-cni\x2dac7d8fa7\x2df325\x2d8d5b\x2db044\x2df92181bc50bc.mount: Deactivated successfully. Jan 28 01:01:00.781419 systemd-networkd[1431]: cali827be858c21: Link UP Jan 28 01:01:00.783249 systemd-networkd[1431]: cali827be858c21: Gained carrier Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.654 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0 goldmane-7c778bb748- calico-system eb7cb13d-31ca-4384-944f-1754705dfa3e 1104 0 2026-01-28 01:00:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com goldmane-7c778bb748-9r9k6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali827be858c21 [] [] }} ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.655 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.700 [INFO][4820] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" HandleID="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" 
Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.700 [INFO][4820] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" HandleID="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d57c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"goldmane-7c778bb748-9r9k6", "timestamp":"2026-01-28 01:01:00.700052036 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.700 [INFO][4820] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.702 [INFO][4820] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.702 [INFO][4820] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.718 [INFO][4820] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.728 [INFO][4820] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.736 [INFO][4820] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.740 [INFO][4820] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.746 [INFO][4820] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.746 [INFO][4820] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.749 [INFO][4820] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.754 [INFO][4820] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.768 [INFO][4820] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.7/26] block=192.168.113.0/26 handle="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.768 [INFO][4820] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.113.7/26] handle="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.769 [INFO][4820] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:00.809933 containerd[1501]: 2026-01-28 01:01:00.769 [INFO][4820] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.7/26] IPv6=[] ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" HandleID="k8s-pod-network.8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.776 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"eb7cb13d-31ca-4384-944f-1754705dfa3e", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-9r9k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali827be858c21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.776 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.7/32] ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.776 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali827be858c21 ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.782 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" 
WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.783 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"eb7cb13d-31ca-4384-944f-1754705dfa3e", ResourceVersion:"1104", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f", Pod:"goldmane-7c778bb748-9r9k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali827be858c21", MAC:"22:9e:19:d1:61:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:00.816919 containerd[1501]: 2026-01-28 01:01:00.805 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f" Namespace="calico-system" Pod="goldmane-7c778bb748-9r9k6" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:00.873409 containerd[1501]: time="2026-01-28T01:01:00.872909365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:00.873409 containerd[1501]: time="2026-01-28T01:01:00.873078026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:00.873409 containerd[1501]: time="2026-01-28T01:01:00.873105191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:00.876025 containerd[1501]: time="2026-01-28T01:01:00.875712548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:00.940659 systemd[1]: Started cri-containerd-8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f.scope - libcontainer container 8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f. 
Jan 28 01:01:01.014743 containerd[1501]: time="2026-01-28T01:01:01.014581481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-9r9k6,Uid:eb7cb13d-31ca-4384-944f-1754705dfa3e,Namespace:calico-system,Attempt:1,} returns sandbox id \"8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f\"" Jan 28 01:01:01.017780 containerd[1501]: time="2026-01-28T01:01:01.017639042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:01.371415 containerd[1501]: time="2026-01-28T01:01:01.371351777Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:01.376920 containerd[1501]: time="2026-01-28T01:01:01.376706221Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:01.376920 containerd[1501]: time="2026-01-28T01:01:01.376805913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:01.377266 kubelet[2692]: E0128 01:01:01.377163 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:01.377844 kubelet[2692]: E0128 01:01:01.377306 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:01.377844 kubelet[2692]: E0128 01:01:01.377477 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9r9k6_calico-system(eb7cb13d-31ca-4384-944f-1754705dfa3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:01.377844 kubelet[2692]: E0128 01:01:01.377575 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:01.917647 kubelet[2692]: E0128 01:01:01.917564 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:02.345063 containerd[1501]: time="2026-01-28T01:01:02.344457736Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.424 [INFO][4890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.424 [INFO][4890] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" iface="eth0" netns="/var/run/netns/cni-0451618b-3b62-5908-3d2a-bc1079366050" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.425 [INFO][4890] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" iface="eth0" netns="/var/run/netns/cni-0451618b-3b62-5908-3d2a-bc1079366050" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.425 [INFO][4890] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" iface="eth0" netns="/var/run/netns/cni-0451618b-3b62-5908-3d2a-bc1079366050" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.425 [INFO][4890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.425 [INFO][4890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.471 [INFO][4898] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.471 [INFO][4898] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.471 [INFO][4898] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.486 [WARNING][4898] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.486 [INFO][4898] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.491 [INFO][4898] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
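Both StopPodSandbox teardowns above log a WARNING that the address to be released does not exist and then carry on, releasing by workload ID instead: the release path is idempotent, so a repeated CNI DEL does not fail. A minimal sketch of that behaviour, under the assumption that allocations are keyed by handle ID (this is not Calico's datastore logic):

package main

import "fmt"

// releaseByHandle frees any addresses recorded for handle. Missing handles
// are ignored rather than treated as errors, mirroring the
// "Asked to release address but it doesn't exist. Ignoring" entries above.
func releaseByHandle(allocations map[string][]string, handle string) []string {
	ips, ok := allocations[handle]
	if !ok {
		fmt.Printf("release %s: nothing allocated, ignoring\n", handle)
		return nil
	}
	delete(allocations, handle)
	return ips
}

func main() {
	allocations := map[string][]string{
		"k8s-pod-network.8518202519d9ed53": {"192.168.113.7"}, // hypothetical shortened handle
	}
	fmt.Println(releaseByHandle(allocations, "k8s-pod-network.8518202519d9ed53"))
	fmt.Println(releaseByHandle(allocations, "k8s-pod-network.8518202519d9ed53")) // second DEL is a no-op
}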
Jan 28 01:01:02.497467 containerd[1501]: 2026-01-28 01:01:02.493 [INFO][4890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:02.499315 containerd[1501]: time="2026-01-28T01:01:02.499253811Z" level=info msg="TearDown network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" successfully" Jan 28 01:01:02.499400 containerd[1501]: time="2026-01-28T01:01:02.499314575Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" returns successfully" Jan 28 01:01:02.506558 containerd[1501]: time="2026-01-28T01:01:02.506483923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8h92,Uid:d6fe3f19-c2cb-4440-ac98-4f17244eae9f,Namespace:calico-system,Attempt:1,}" Jan 28 01:01:02.508343 systemd[1]: run-netns-cni\x2d0451618b\x2d3b62\x2d5908\x2d3d2a\x2dbc1079366050.mount: Deactivated successfully. Jan 28 01:01:02.689472 systemd-networkd[1431]: cali827be858c21: Gained IPv6LL Jan 28 01:01:02.715609 systemd-networkd[1431]: cali4488ee92e94: Link UP Jan 28 01:01:02.715978 systemd-networkd[1431]: cali4488ee92e94: Gained carrier Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.583 [INFO][4908] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0 csi-node-driver- calico-system d6fe3f19-c2cb-4440-ac98-4f17244eae9f 1122 0 2026-01-28 01:00:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-8h12l.gb1.brightbox.com csi-node-driver-v8h92 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4488ee92e94 [] [] }} ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.583 [INFO][4908] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.631 [INFO][4923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" HandleID="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.633 [INFO][4923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" HandleID="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-8h12l.gb1.brightbox.com", "pod":"csi-node-driver-v8h92", 
"timestamp":"2026-01-28 01:01:02.631768742 +0000 UTC"}, Hostname:"srv-8h12l.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.633 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.633 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.633 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-8h12l.gb1.brightbox.com' Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.646 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.657 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.665 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.669 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.675 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.113.0/26 host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.675 [INFO][4923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.113.0/26 handle="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.679 [INFO][4923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019 Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.687 [INFO][4923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.113.0/26 handle="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.699 [INFO][4923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.113.8/26] block=192.168.113.0/26 handle="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.700 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.113.8/26] handle="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" host="srv-8h12l.gb1.brightbox.com" Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.700 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 01:01:02.745855 containerd[1501]: 2026-01-28 01:01:02.700 [INFO][4923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.113.8/26] IPv6=[] ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" HandleID="k8s-pod-network.e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.702 [INFO][4908] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6fe3f19-c2cb-4440-ac98-4f17244eae9f", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-v8h92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4488ee92e94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.703 [INFO][4908] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.113.8/32] ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.703 [INFO][4908] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4488ee92e94 ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.708 [INFO][4908] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.709 [INFO][4908] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6fe3f19-c2cb-4440-ac98-4f17244eae9f", ResourceVersion:"1122", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019", Pod:"csi-node-driver-v8h92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4488ee92e94", MAC:"46:f3:2c:ed:76:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:02.747016 containerd[1501]: 2026-01-28 01:01:02.737 [INFO][4908] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019" Namespace="calico-system" Pod="csi-node-driver-v8h92" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:02.782040 containerd[1501]: time="2026-01-28T01:01:02.781328801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 01:01:02.782040 containerd[1501]: time="2026-01-28T01:01:02.781488750Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 01:01:02.782040 containerd[1501]: time="2026-01-28T01:01:02.781515207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:02.782040 containerd[1501]: time="2026-01-28T01:01:02.781716488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 01:01:02.829632 systemd[1]: Started cri-containerd-e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019.scope - libcontainer container e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019. 
Jan 28 01:01:02.868329 containerd[1501]: time="2026-01-28T01:01:02.868226697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v8h92,Uid:d6fe3f19-c2cb-4440-ac98-4f17244eae9f,Namespace:calico-system,Attempt:1,} returns sandbox id \"e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019\"" Jan 28 01:01:02.871830 containerd[1501]: time="2026-01-28T01:01:02.871747694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:02.930645 kubelet[2692]: E0128 01:01:02.929562 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:03.213337 containerd[1501]: time="2026-01-28T01:01:03.213018362Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:03.218256 containerd[1501]: time="2026-01-28T01:01:03.218163114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:03.219302 containerd[1501]: time="2026-01-28T01:01:03.218789578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:03.219435 kubelet[2692]: E0128 01:01:03.219101 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:03.219435 kubelet[2692]: E0128 01:01:03.219191 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:03.221124 kubelet[2692]: E0128 01:01:03.219747 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:03.223549 containerd[1501]: time="2026-01-28T01:01:03.223501564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:03.555641 containerd[1501]: time="2026-01-28T01:01:03.555423710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:03.558730 containerd[1501]: time="2026-01-28T01:01:03.557870055Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:03.558730 containerd[1501]: time="2026-01-28T01:01:03.557949870Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:03.558932 kubelet[2692]: E0128 01:01:03.558250 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:03.558932 kubelet[2692]: E0128 01:01:03.558365 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:03.558932 kubelet[2692]: E0128 01:01:03.558526 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:03.559194 kubelet[2692]: E0128 01:01:03.558617 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:03.904962 systemd-networkd[1431]: cali4488ee92e94: Gained IPv6LL Jan 28 01:01:03.937032 kubelet[2692]: E0128 01:01:03.935906 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:05.348603 containerd[1501]: time="2026-01-28T01:01:05.348360443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:05.688141 containerd[1501]: time="2026-01-28T01:01:05.687841959Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:05.689441 containerd[1501]: time="2026-01-28T01:01:05.689303640Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:05.689518 containerd[1501]: time="2026-01-28T01:01:05.689398806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:05.691615 kubelet[2692]: E0128 01:01:05.689833 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:05.691615 kubelet[2692]: E0128 01:01:05.689929 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:05.691615 kubelet[2692]: E0128 01:01:05.690080 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:05.691615 kubelet[2692]: E0128 01:01:05.690152 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:01:06.346256 containerd[1501]: time="2026-01-28T01:01:06.346046120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:01:06.685617 containerd[1501]: time="2026-01-28T01:01:06.685077599Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:06.688423 containerd[1501]: time="2026-01-28T01:01:06.688308703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:06.688497 containerd[1501]: time="2026-01-28T01:01:06.688333374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:01:06.688874 kubelet[2692]: E0128 01:01:06.688793 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:06.689009 kubelet[2692]: E0128 01:01:06.688895 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:06.689909 kubelet[2692]: E0128 01:01:06.689178 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:06.689909 kubelet[2692]: E0128 01:01:06.689253 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:01:06.690153 containerd[1501]: time="2026-01-28T01:01:06.689404984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:01:07.026713 containerd[1501]: time="2026-01-28T01:01:07.026400197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:07.028324 containerd[1501]: time="2026-01-28T01:01:07.028160821Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:01:07.028324 containerd[1501]: time="2026-01-28T01:01:07.028234966Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:01:07.030261 kubelet[2692]: E0128 01:01:07.028738 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:07.030261 kubelet[2692]: E0128 01:01:07.028894 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:07.030261 kubelet[2692]: E0128 01:01:07.029141 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:07.033556 containerd[1501]: time="2026-01-28T01:01:07.033325299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:01:07.365228 containerd[1501]: time="2026-01-28T01:01:07.364783715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:07.377140 containerd[1501]: time="2026-01-28T01:01:07.377030577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:01:07.377389 containerd[1501]: time="2026-01-28T01:01:07.377079656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:07.378121 kubelet[2692]: E0128 01:01:07.377687 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:07.378121 kubelet[2692]: E0128 01:01:07.377779 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:07.378121 kubelet[2692]: E0128 01:01:07.377935 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:07.378346 kubelet[2692]: E0128 01:01:07.378041 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:01:09.348123 containerd[1501]: time="2026-01-28T01:01:09.347760326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:09.667071 containerd[1501]: time="2026-01-28T01:01:09.666778910Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:09.672051 containerd[1501]: time="2026-01-28T01:01:09.671203513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:09.672051 containerd[1501]: time="2026-01-28T01:01:09.671352146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:09.673532 kubelet[2692]: E0128 01:01:09.671646 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:09.673532 kubelet[2692]: E0128 01:01:09.671718 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:09.673532 kubelet[2692]: E0128 01:01:09.671895 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-5thvw_calico-apiserver(215504c8-12e3-45d1-b60d-0c358a1645a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:09.673532 kubelet[2692]: E0128 01:01:09.671981 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:01:11.095778 systemd[1]: Started sshd@9-10.244.8.18:22-68.220.241.50:54310.service - OpenSSH per-connection server daemon (68.220.241.50:54310). Jan 28 01:01:11.738060 sshd[4987]: Accepted publickey for core from 68.220.241.50 port 54310 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:11.742638 sshd[4987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:11.756245 systemd-logind[1484]: New session 12 of user core. Jan 28 01:01:11.764058 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 28 01:01:13.057844 sshd[4987]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:13.072611 systemd[1]: sshd@9-10.244.8.18:22-68.220.241.50:54310.service: Deactivated successfully. Jan 28 01:01:13.078272 systemd[1]: session-12.scope: Deactivated successfully. Jan 28 01:01:13.081060 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Jan 28 01:01:13.084050 systemd-logind[1484]: Removed session 12. Jan 28 01:01:14.345996 containerd[1501]: time="2026-01-28T01:01:14.345866097Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:14.673700 containerd[1501]: time="2026-01-28T01:01:14.673601944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:14.690115 containerd[1501]: time="2026-01-28T01:01:14.687627538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:14.695469 containerd[1501]: time="2026-01-28T01:01:14.687860194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:14.695569 kubelet[2692]: E0128 01:01:14.690706 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:14.695569 kubelet[2692]: E0128 01:01:14.690816 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:14.695569 kubelet[2692]: E0128 01:01:14.691009 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:14.702395 containerd[1501]: time="2026-01-28T01:01:14.702352997Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:15.082825 containerd[1501]: time="2026-01-28T01:01:15.082598794Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:15.086149 containerd[1501]: time="2026-01-28T01:01:15.086087061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:15.086506 containerd[1501]: time="2026-01-28T01:01:15.086192514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:15.086763 kubelet[2692]: E0128 01:01:15.086700 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:15.087491 kubelet[2692]: E0128 01:01:15.086786 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:15.087491 kubelet[2692]: E0128 01:01:15.086954 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:15.087491 kubelet[2692]: E0128 01:01:15.087075 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:15.346121 containerd[1501]: time="2026-01-28T01:01:15.345578871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:15.693188 containerd[1501]: time="2026-01-28T01:01:15.693102490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 
01:01:15.697748 containerd[1501]: time="2026-01-28T01:01:15.697573646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:15.697748 containerd[1501]: time="2026-01-28T01:01:15.697628638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:15.698137 kubelet[2692]: E0128 01:01:15.698054 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:15.698822 kubelet[2692]: E0128 01:01:15.698162 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:15.698822 kubelet[2692]: E0128 01:01:15.698338 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9r9k6_calico-system(eb7cb13d-31ca-4384-944f-1754705dfa3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:15.698822 kubelet[2692]: E0128 01:01:15.698416 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:17.346141 kubelet[2692]: E0128 01:01:17.345641 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:01:18.160835 systemd[1]: Started sshd@10-10.244.8.18:22-68.220.241.50:48282.service - OpenSSH per-connection server daemon (68.220.241.50:48282). 
Jan 28 01:01:18.785585 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 48282 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:18.789272 sshd[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:18.798358 systemd-logind[1484]: New session 13 of user core. Jan 28 01:01:18.808620 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 28 01:01:19.348062 kubelet[2692]: E0128 01:01:19.347578 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:01:19.361247 sshd[5033]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:19.373208 systemd[1]: sshd@10-10.244.8.18:22-68.220.241.50:48282.service: Deactivated successfully. Jan 28 01:01:19.376895 systemd[1]: session-13.scope: Deactivated successfully. Jan 28 01:01:19.381442 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Jan 28 01:01:19.383448 systemd-logind[1484]: Removed session 13. Jan 28 01:01:20.346559 kubelet[2692]: E0128 01:01:20.346094 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:01:22.344996 kubelet[2692]: E0128 01:01:22.344889 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:01:24.469665 systemd[1]: Started sshd@11-10.244.8.18:22-68.220.241.50:41702.service - OpenSSH per-connection server daemon (68.220.241.50:41702). 
Jan 28 01:01:25.044853 sshd[5055]: Accepted publickey for core from 68.220.241.50 port 41702 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:25.047466 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:25.056053 systemd-logind[1484]: New session 14 of user core. Jan 28 01:01:25.064700 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 28 01:01:25.556683 sshd[5055]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:25.563210 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Jan 28 01:01:25.563649 systemd[1]: sshd@11-10.244.8.18:22-68.220.241.50:41702.service: Deactivated successfully. Jan 28 01:01:25.566642 systemd[1]: session-14.scope: Deactivated successfully. Jan 28 01:01:25.568153 systemd-logind[1484]: Removed session 14. Jan 28 01:01:25.665777 systemd[1]: Started sshd@12-10.244.8.18:22-68.220.241.50:41714.service - OpenSSH per-connection server daemon (68.220.241.50:41714). Jan 28 01:01:26.236126 sshd[5069]: Accepted publickey for core from 68.220.241.50 port 41714 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:26.238332 sshd[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:26.247884 systemd-logind[1484]: New session 15 of user core. Jan 28 01:01:26.254625 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 28 01:01:26.818981 sshd[5069]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:26.825010 systemd[1]: sshd@12-10.244.8.18:22-68.220.241.50:41714.service: Deactivated successfully. Jan 28 01:01:26.829248 systemd[1]: session-15.scope: Deactivated successfully. Jan 28 01:01:26.830765 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Jan 28 01:01:26.832931 systemd-logind[1484]: Removed session 15. Jan 28 01:01:26.929202 systemd[1]: Started sshd@13-10.244.8.18:22-68.220.241.50:41726.service - OpenSSH per-connection server daemon (68.220.241.50:41726). Jan 28 01:01:27.530392 sshd[5080]: Accepted publickey for core from 68.220.241.50 port 41726 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:27.532560 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:27.539479 systemd-logind[1484]: New session 16 of user core. Jan 28 01:01:27.547555 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 01:01:28.055640 sshd[5080]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:28.061216 systemd[1]: sshd@13-10.244.8.18:22-68.220.241.50:41726.service: Deactivated successfully. Jan 28 01:01:28.064492 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 01:01:28.066132 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Jan 28 01:01:28.068231 systemd-logind[1484]: Removed session 16. 
Jan 28 01:01:29.347194 kubelet[2692]: E0128 01:01:29.346708 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:29.349467 kubelet[2692]: E0128 01:01:29.348094 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:31.347581 containerd[1501]: time="2026-01-28T01:01:31.347482084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:31.672955 containerd[1501]: time="2026-01-28T01:01:31.672677336Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:31.674408 containerd[1501]: time="2026-01-28T01:01:31.674307592Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:31.674541 containerd[1501]: time="2026-01-28T01:01:31.674317512Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:31.676500 kubelet[2692]: E0128 01:01:31.674808 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:31.676500 kubelet[2692]: E0128 01:01:31.674902 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:31.676500 kubelet[2692]: E0128 01:01:31.675197 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start 
failed in pod calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:31.676500 kubelet[2692]: E0128 01:01:31.675267 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:01:31.677314 containerd[1501]: time="2026-01-28T01:01:31.675976695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:01:31.996406 containerd[1501]: time="2026-01-28T01:01:31.996038639Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:31.997590 containerd[1501]: time="2026-01-28T01:01:31.997544187Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:01:31.997830 containerd[1501]: time="2026-01-28T01:01:31.997734637Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:01:31.998217 kubelet[2692]: E0128 01:01:31.998053 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:31.998217 kubelet[2692]: E0128 01:01:31.998133 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:01:31.999748 kubelet[2692]: E0128 01:01:31.998912 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:31.999849 containerd[1501]: time="2026-01-28T01:01:31.999092508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:01:32.340074 containerd[1501]: time="2026-01-28T01:01:32.339835637Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:32.341885 containerd[1501]: time="2026-01-28T01:01:32.341665942Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:01:32.341885 containerd[1501]: time="2026-01-28T01:01:32.341801275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:32.342311 kubelet[2692]: E0128 01:01:32.342202 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:32.342433 kubelet[2692]: E0128 01:01:32.342318 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:01:32.342732 kubelet[2692]: E0128 01:01:32.342682 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:32.342882 kubelet[2692]: E0128 01:01:32.342766 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:01:32.344377 containerd[1501]: time="2026-01-28T01:01:32.344194053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:01:32.665821 containerd[1501]: time="2026-01-28T01:01:32.665542543Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:32.667207 containerd[1501]: time="2026-01-28T01:01:32.667152354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:01:32.667868 containerd[1501]: time="2026-01-28T01:01:32.667298224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:01:32.668247 kubelet[2692]: E0128 01:01:32.667717 2692 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:32.668247 kubelet[2692]: E0128 01:01:32.667821 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:01:32.668247 kubelet[2692]: E0128 01:01:32.667989 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:32.668620 kubelet[2692]: E0128 01:01:32.668160 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:01:33.156747 systemd[1]: Started sshd@14-10.244.8.18:22-68.220.241.50:51990.service - OpenSSH per-connection server daemon (68.220.241.50:51990). Jan 28 01:01:33.740833 sshd[5099]: Accepted publickey for core from 68.220.241.50 port 51990 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:33.744443 sshd[5099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:33.753518 systemd-logind[1484]: New session 17 of user core. Jan 28 01:01:33.759573 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 01:01:34.261117 sshd[5099]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:34.279910 systemd[1]: sshd@14-10.244.8.18:22-68.220.241.50:51990.service: Deactivated successfully. Jan 28 01:01:34.291181 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 01:01:34.297411 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Jan 28 01:01:34.302334 systemd-logind[1484]: Removed session 17. 
Jan 28 01:01:36.348587 containerd[1501]: time="2026-01-28T01:01:36.348495851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:01:36.676896 containerd[1501]: time="2026-01-28T01:01:36.676628677Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:36.678541 containerd[1501]: time="2026-01-28T01:01:36.678380915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:01:36.678642 containerd[1501]: time="2026-01-28T01:01:36.678411029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:36.678952 kubelet[2692]: E0128 01:01:36.678831 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:36.679645 kubelet[2692]: E0128 01:01:36.678953 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:01:36.679645 kubelet[2692]: E0128 01:01:36.679232 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-5thvw_calico-apiserver(215504c8-12e3-45d1-b60d-0c358a1645a5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:36.679758 kubelet[2692]: E0128 01:01:36.679721 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:01:39.363968 systemd[1]: Started sshd@15-10.244.8.18:22-68.220.241.50:52000.service - OpenSSH per-connection server daemon (68.220.241.50:52000). Jan 28 01:01:39.984338 sshd[5118]: Accepted publickey for core from 68.220.241.50 port 52000 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:39.989007 sshd[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:39.999347 systemd-logind[1484]: New session 18 of user core. Jan 28 01:01:40.006536 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 01:01:40.587420 sshd[5118]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:40.594026 systemd-logind[1484]: Session 18 logged out. 
Waiting for processes to exit. Jan 28 01:01:40.594666 systemd[1]: sshd@15-10.244.8.18:22-68.220.241.50:52000.service: Deactivated successfully. Jan 28 01:01:40.597680 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 01:01:40.599851 systemd-logind[1484]: Removed session 18. Jan 28 01:01:43.351149 containerd[1501]: time="2026-01-28T01:01:43.350364492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 01:01:43.403248 containerd[1501]: time="2026-01-28T01:01:43.402418192Z" level=info msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.524 [WARNING][5143] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"215504c8-12e3-45d1-b60d-0c358a1645a5", ResourceVersion:"1377", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148", Pod:"calico-apiserver-6768b4f5db-5thvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia122905f106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.528 [INFO][5143] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.528 [INFO][5143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" iface="eth0" netns="" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.528 [INFO][5143] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.528 [INFO][5143] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.618 [INFO][5150] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.618 [INFO][5150] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.618 [INFO][5150] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.633 [WARNING][5150] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.633 [INFO][5150] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.636 [INFO][5150] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:43.642365 containerd[1501]: 2026-01-28 01:01:43.639 [INFO][5143] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.644207 containerd[1501]: time="2026-01-28T01:01:43.643780350Z" level=info msg="TearDown network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" successfully" Jan 28 01:01:43.644207 containerd[1501]: time="2026-01-28T01:01:43.643831558Z" level=info msg="StopPodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" returns successfully" Jan 28 01:01:43.645374 containerd[1501]: time="2026-01-28T01:01:43.644874549Z" level=info msg="RemovePodSandbox for \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" Jan 28 01:01:43.647595 containerd[1501]: time="2026-01-28T01:01:43.647562428Z" level=info msg="Forcibly stopping sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\"" Jan 28 01:01:43.683941 containerd[1501]: time="2026-01-28T01:01:43.683697797Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:43.688241 containerd[1501]: time="2026-01-28T01:01:43.688178318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 01:01:43.688421 containerd[1501]: time="2026-01-28T01:01:43.688359931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 01:01:43.689225 kubelet[2692]: E0128 01:01:43.688628 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:43.689225 kubelet[2692]: E0128 01:01:43.688744 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 01:01:43.689225 kubelet[2692]: E0128 01:01:43.688905 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:43.693022 containerd[1501]: time="2026-01-28T01:01:43.691649375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.715 [WARNING][5164] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"215504c8-12e3-45d1-b60d-0c358a1645a5", ResourceVersion:"1377", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"126890a733e69ca285f8eb1bb775ecac19382c3fe2e30005a97c2b113f418148", Pod:"calico-apiserver-6768b4f5db-5thvw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia122905f106", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.716 [INFO][5164] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.716 [INFO][5164] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" iface="eth0" netns="" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.716 [INFO][5164] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.716 [INFO][5164] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.762 [INFO][5171] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.763 [INFO][5171] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.763 [INFO][5171] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.774 [WARNING][5171] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.774 [INFO][5171] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" HandleID="k8s-pod-network.5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--5thvw-eth0" Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.780 [INFO][5171] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:43.785147 containerd[1501]: 2026-01-28 01:01:43.782 [INFO][5164] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41" Jan 28 01:01:43.786609 containerd[1501]: time="2026-01-28T01:01:43.785192712Z" level=info msg="TearDown network for sandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" successfully" Jan 28 01:01:43.811363 containerd[1501]: time="2026-01-28T01:01:43.811260695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:43.811620 containerd[1501]: time="2026-01-28T01:01:43.811414609Z" level=info msg="RemovePodSandbox \"5a3a9efeb3233ee939561363b4784d43e933922ac71e35cb23bc389c2c164a41\" returns successfully" Jan 28 01:01:43.812832 containerd[1501]: time="2026-01-28T01:01:43.812589346Z" level=info msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.866 [WARNING][5185] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef09ae9-4abf-45ab-835f-f8b9901cd23b", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5", Pod:"calico-apiserver-6768b4f5db-r4vpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7150bbe7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.866 [INFO][5185] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.866 [INFO][5185] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" iface="eth0" netns="" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.866 [INFO][5185] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.866 [INFO][5185] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.895 [INFO][5192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.896 [INFO][5192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.896 [INFO][5192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.908 [WARNING][5192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.908 [INFO][5192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.910 [INFO][5192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:43.914867 containerd[1501]: 2026-01-28 01:01:43.912 [INFO][5185] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:43.919645 containerd[1501]: time="2026-01-28T01:01:43.916838238Z" level=info msg="TearDown network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" successfully" Jan 28 01:01:43.919645 containerd[1501]: time="2026-01-28T01:01:43.916877106Z" level=info msg="StopPodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" returns successfully" Jan 28 01:01:43.919645 containerd[1501]: time="2026-01-28T01:01:43.918589702Z" level=info msg="RemovePodSandbox for \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" Jan 28 01:01:43.919645 containerd[1501]: time="2026-01-28T01:01:43.918627475Z" level=info msg="Forcibly stopping sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\"" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:43.978 [WARNING][5206] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0", GenerateName:"calico-apiserver-6768b4f5db-", Namespace:"calico-apiserver", SelfLink:"", UID:"4ef09ae9-4abf-45ab-835f-f8b9901cd23b", ResourceVersion:"1341", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6768b4f5db", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"03ef0f03f54fa6d9883818e7c2e39c9b2de2b10271c69ff66ef22eec783b67c5", Pod:"calico-apiserver-6768b4f5db-r4vpr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.113.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia7150bbe7d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:43.979 [INFO][5206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:43.979 [INFO][5206] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" iface="eth0" netns="" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:43.979 [INFO][5206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:43.979 [INFO][5206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.007 [INFO][5213] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.008 [INFO][5213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.008 [INFO][5213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.016 [WARNING][5213] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.016 [INFO][5213] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" HandleID="k8s-pod-network.60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--apiserver--6768b4f5db--r4vpr-eth0" Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.019 [INFO][5213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.022715 containerd[1501]: 2026-01-28 01:01:44.020 [INFO][5206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5" Jan 28 01:01:44.024545 containerd[1501]: time="2026-01-28T01:01:44.022914320Z" level=info msg="TearDown network for sandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" successfully" Jan 28 01:01:44.027186 containerd[1501]: time="2026-01-28T01:01:44.027146845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:44.027322 containerd[1501]: time="2026-01-28T01:01:44.027240177Z" level=info msg="RemovePodSandbox \"60be24589a569b4030b197e429818a6eced8b46392ee5873a97c4b708bf149f5\" returns successfully" Jan 28 01:01:44.028007 containerd[1501]: time="2026-01-28T01:01:44.027974136Z" level=info msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" Jan 28 01:01:44.029157 containerd[1501]: time="2026-01-28T01:01:44.029123340Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:44.030031 containerd[1501]: time="2026-01-28T01:01:44.029978730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 01:01:44.031072 containerd[1501]: time="2026-01-28T01:01:44.030081207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 01:01:44.031159 kubelet[2692]: E0128 01:01:44.030255 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:44.031159 kubelet[2692]: E0128 01:01:44.030378 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 01:01:44.031159 kubelet[2692]: E0128 01:01:44.030520 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-v8h92_calico-system(d6fe3f19-c2cb-4440-ac98-4f17244eae9f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:44.031429 kubelet[2692]: E0128 01:01:44.030647 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.089 [WARNING][5228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"628696b9-5871-452c-9749-f01c86f7c8e5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24", Pod:"coredns-66bc5c9577-2dnhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d819a8d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.090 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.090 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" iface="eth0" netns="" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.090 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.090 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.136 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.137 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.137 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.150 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.150 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.157 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.161349 containerd[1501]: 2026-01-28 01:01:44.159 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.163871 containerd[1501]: time="2026-01-28T01:01:44.161398782Z" level=info msg="TearDown network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" successfully" Jan 28 01:01:44.163871 containerd[1501]: time="2026-01-28T01:01:44.161444421Z" level=info msg="StopPodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" returns successfully" Jan 28 01:01:44.163871 containerd[1501]: time="2026-01-28T01:01:44.162726953Z" level=info msg="RemovePodSandbox for \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" Jan 28 01:01:44.163871 containerd[1501]: time="2026-01-28T01:01:44.162764790Z" level=info msg="Forcibly stopping sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\"" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.224 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"628696b9-5871-452c-9749-f01c86f7c8e5", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"cdd0e30b270f54dbd0f6a47662fc314108c2368a074cef9dfeaabf597fa8bc24", Pod:"coredns-66bc5c9577-2dnhz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04d819a8d0a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.225 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.276618 containerd[1501]: 
2026-01-28 01:01:44.225 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" iface="eth0" netns="" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.225 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.225 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.258 [INFO][5256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.258 [INFO][5256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.258 [INFO][5256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.269 [WARNING][5256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.269 [INFO][5256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" HandleID="k8s-pod-network.5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--2dnhz-eth0" Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.272 [INFO][5256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.276618 containerd[1501]: 2026-01-28 01:01:44.274 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb" Jan 28 01:01:44.277574 containerd[1501]: time="2026-01-28T01:01:44.276753313Z" level=info msg="TearDown network for sandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" successfully" Jan 28 01:01:44.285746 containerd[1501]: time="2026-01-28T01:01:44.285050564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 28 01:01:44.285746 containerd[1501]: time="2026-01-28T01:01:44.285130113Z" level=info msg="RemovePodSandbox \"5eba460546b15eec959a1b61c58eab929936224058affae160afe64cae87a3eb\" returns successfully" Jan 28 01:01:44.285919 containerd[1501]: time="2026-01-28T01:01:44.285797355Z" level=info msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" Jan 28 01:01:44.351724 containerd[1501]: time="2026-01-28T01:01:44.350078665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.340 [WARNING][5270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0", GenerateName:"calico-kube-controllers-7fcd5d865b-", Namespace:"calico-system", SelfLink:"", UID:"1af325e3-7600-48af-bd7f-f8e9f715489b", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5d865b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2", Pod:"calico-kube-controllers-7fcd5d865b-hrj24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied3f4408309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.341 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.341 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" iface="eth0" netns="" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.341 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.341 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.416 [INFO][5277] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.416 [INFO][5277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.416 [INFO][5277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.432 [WARNING][5277] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.432 [INFO][5277] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.435 [INFO][5277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.439051 containerd[1501]: 2026-01-28 01:01:44.437 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.440123 containerd[1501]: time="2026-01-28T01:01:44.439175762Z" level=info msg="TearDown network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" successfully" Jan 28 01:01:44.440123 containerd[1501]: time="2026-01-28T01:01:44.439213797Z" level=info msg="StopPodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" returns successfully" Jan 28 01:01:44.440123 containerd[1501]: time="2026-01-28T01:01:44.439941726Z" level=info msg="RemovePodSandbox for \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" Jan 28 01:01:44.440123 containerd[1501]: time="2026-01-28T01:01:44.439979325Z" level=info msg="Forcibly stopping sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\"" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.492 [WARNING][5292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0", GenerateName:"calico-kube-controllers-7fcd5d865b-", Namespace:"calico-system", SelfLink:"", UID:"1af325e3-7600-48af-bd7f-f8e9f715489b", ResourceVersion:"1337", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fcd5d865b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"c780c800d99580232bb7f35f6a6cf0729dcb1740c7f416e9d204c784746aa6a2", Pod:"calico-kube-controllers-7fcd5d865b-hrj24", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.113.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calied3f4408309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.492 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.492 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" iface="eth0" netns="" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.493 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.493 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.527 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.527 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.527 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.541 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.541 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" HandleID="k8s-pod-network.006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Workload="srv--8h12l.gb1.brightbox.com-k8s-calico--kube--controllers--7fcd5d865b--hrj24-eth0" Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.544 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.551435 containerd[1501]: 2026-01-28 01:01:44.546 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d" Jan 28 01:01:44.551435 containerd[1501]: time="2026-01-28T01:01:44.549551767Z" level=info msg="TearDown network for sandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" successfully" Jan 28 01:01:44.569917 containerd[1501]: time="2026-01-28T01:01:44.569615338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:44.569917 containerd[1501]: time="2026-01-28T01:01:44.569711042Z" level=info msg="RemovePodSandbox \"006f16219ca5a9b68e72238ab0e2b2c6cf97fb41e681828ee906eba93ddf9b7d\" returns successfully" Jan 28 01:01:44.571653 containerd[1501]: time="2026-01-28T01:01:44.571609864Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:01:44.681606 containerd[1501]: time="2026-01-28T01:01:44.681392673Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:01:44.682497 containerd[1501]: time="2026-01-28T01:01:44.682443784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 01:01:44.682807 containerd[1501]: time="2026-01-28T01:01:44.682477741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 01:01:44.683339 kubelet[2692]: E0128 01:01:44.683210 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:44.684595 kubelet[2692]: E0128 01:01:44.683349 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 01:01:44.684595 kubelet[2692]: E0128 
01:01:44.683545 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-9r9k6_calico-system(eb7cb13d-31ca-4384-944f-1754705dfa3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 01:01:44.684595 kubelet[2692]: E0128 01:01:44.683612 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.633 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6fe3f19-c2cb-4440-ac98-4f17244eae9f", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019", Pod:"csi-node-driver-v8h92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4488ee92e94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.633 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.633 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" iface="eth0" netns="" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.633 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.633 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.673 [INFO][5320] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.673 [INFO][5320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.674 [INFO][5320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.684 [WARNING][5320] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.684 [INFO][5320] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.687 [INFO][5320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.691474 containerd[1501]: 2026-01-28 01:01:44.689 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.692267 containerd[1501]: time="2026-01-28T01:01:44.691531940Z" level=info msg="TearDown network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" successfully" Jan 28 01:01:44.692267 containerd[1501]: time="2026-01-28T01:01:44.691562863Z" level=info msg="StopPodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" returns successfully" Jan 28 01:01:44.693109 containerd[1501]: time="2026-01-28T01:01:44.693044893Z" level=info msg="RemovePodSandbox for \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:01:44.693189 containerd[1501]: time="2026-01-28T01:01:44.693115062Z" level=info msg="Forcibly stopping sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\"" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.746 [WARNING][5334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d6fe3f19-c2cb-4440-ac98-4f17244eae9f", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"e77892c49b96af9f227adf8247f94bd9e9299231b91e985fe950be5ebd31a019", Pod:"csi-node-driver-v8h92", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4488ee92e94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.746 [INFO][5334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.746 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" iface="eth0" netns="" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.746 [INFO][5334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.746 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.775 [INFO][5341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.775 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.775 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.785 [WARNING][5341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.785 [INFO][5341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" HandleID="k8s-pod-network.f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Workload="srv--8h12l.gb1.brightbox.com-k8s-csi--node--driver--v8h92-eth0" Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.787 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.792077 containerd[1501]: 2026-01-28 01:01:44.790 [INFO][5334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe" Jan 28 01:01:44.794005 containerd[1501]: time="2026-01-28T01:01:44.792016693Z" level=info msg="TearDown network for sandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" successfully" Jan 28 01:01:44.798507 containerd[1501]: time="2026-01-28T01:01:44.798232635Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:44.798507 containerd[1501]: time="2026-01-28T01:01:44.798383849Z" level=info msg="RemovePodSandbox \"f60f413ed4a83b8d95462d3c8c545bcfa288d1f0b0e026b7e8d6865036d2b1fe\" returns successfully" Jan 28 01:01:44.800094 containerd[1501]: time="2026-01-28T01:01:44.799545418Z" level=info msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.856 [WARNING][5355] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.857 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.857 [INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" iface="eth0" netns="" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.857 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.857 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.889 [INFO][5362] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.889 [INFO][5362] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.890 [INFO][5362] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.902 [WARNING][5362] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.902 [INFO][5362] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.905 [INFO][5362] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:44.909554 containerd[1501]: 2026-01-28 01:01:44.907 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:44.911353 containerd[1501]: time="2026-01-28T01:01:44.910273491Z" level=info msg="TearDown network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" successfully" Jan 28 01:01:44.911353 containerd[1501]: time="2026-01-28T01:01:44.910347654Z" level=info msg="StopPodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" returns successfully" Jan 28 01:01:44.911353 containerd[1501]: time="2026-01-28T01:01:44.911042384Z" level=info msg="RemovePodSandbox for \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" Jan 28 01:01:44.911353 containerd[1501]: time="2026-01-28T01:01:44.911077630Z" level=info msg="Forcibly stopping sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\"" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.959 [WARNING][5376] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" WorkloadEndpoint="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.959 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.959 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" iface="eth0" netns="" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.959 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.959 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.990 [INFO][5383] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.991 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:44.991 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:45.000 [WARNING][5383] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:45.000 [INFO][5383] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" HandleID="k8s-pod-network.4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Workload="srv--8h12l.gb1.brightbox.com-k8s-whisker--795bbb5d6--zbqwm-eth0" Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:45.002 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:45.009242 containerd[1501]: 2026-01-28 01:01:45.005 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a" Jan 28 01:01:45.009242 containerd[1501]: time="2026-01-28T01:01:45.007708735Z" level=info msg="TearDown network for sandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" successfully" Jan 28 01:01:45.025822 containerd[1501]: time="2026-01-28T01:01:45.025762391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:45.026110 containerd[1501]: time="2026-01-28T01:01:45.026079607Z" level=info msg="RemovePodSandbox \"4e26c8c35336d7aff319f8e310d97d87a8e44ec1390e0edc8f7b98e12264392a\" returns successfully" Jan 28 01:01:45.026920 containerd[1501]: time="2026-01-28T01:01:45.026889025Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.078 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"eb7cb13d-31ca-4384-944f-1754705dfa3e", ResourceVersion:"1423", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f", Pod:"goldmane-7c778bb748-9r9k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali827be858c21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.079 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.079 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" iface="eth0" netns="" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.079 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.079 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.121 [INFO][5404] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.121 [INFO][5404] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.121 [INFO][5404] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.130 [WARNING][5404] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.130 [INFO][5404] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.133 [INFO][5404] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:45.136991 containerd[1501]: 2026-01-28 01:01:45.135 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.139020 containerd[1501]: time="2026-01-28T01:01:45.136953088Z" level=info msg="TearDown network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" successfully" Jan 28 01:01:45.139020 containerd[1501]: time="2026-01-28T01:01:45.137907156Z" level=info msg="StopPodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" returns successfully" Jan 28 01:01:45.139020 containerd[1501]: time="2026-01-28T01:01:45.138637984Z" level=info msg="RemovePodSandbox for \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:01:45.139020 containerd[1501]: time="2026-01-28T01:01:45.138675414Z" level=info msg="Forcibly stopping sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\"" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.187 [WARNING][5418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"eb7cb13d-31ca-4384-944f-1754705dfa3e", ResourceVersion:"1423", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 1, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"8518202519d9ed5314406b8d00fb3a361f4d1a5fc2d59600cc02846718be351f", Pod:"goldmane-7c778bb748-9r9k6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.113.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali827be858c21", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.187 [INFO][5418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.187 [INFO][5418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" iface="eth0" netns="" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.187 [INFO][5418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.187 [INFO][5418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.219 [INFO][5425] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.220 [INFO][5425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.220 [INFO][5425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.229 [WARNING][5425] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.229 [INFO][5425] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" HandleID="k8s-pod-network.598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Workload="srv--8h12l.gb1.brightbox.com-k8s-goldmane--7c778bb748--9r9k6-eth0" Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.231 [INFO][5425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:45.237369 containerd[1501]: 2026-01-28 01:01:45.234 [INFO][5418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4" Jan 28 01:01:45.239221 containerd[1501]: time="2026-01-28T01:01:45.238229892Z" level=info msg="TearDown network for sandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" successfully" Jan 28 01:01:45.251922 containerd[1501]: time="2026-01-28T01:01:45.251853360Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:45.252236 containerd[1501]: time="2026-01-28T01:01:45.252205567Z" level=info msg="RemovePodSandbox \"598198afe4d5c7894a8622d0dc0be3979b2b410cd1d5eeee7ca14f2fe27eefb4\" returns successfully" Jan 28 01:01:45.253641 containerd[1501]: time="2026-01-28T01:01:45.253224873Z" level=info msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" Jan 28 01:01:45.354161 kubelet[2692]: E0128 01:01:45.353784 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:01:45.357482 kubelet[2692]: E0128 01:01:45.357381 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:01:45.358334 kubelet[2692]: E0128 01:01:45.358216 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.309 [WARNING][5439] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"054a4d87-77d7-4fd5-ba18-4966e01b6356", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809", Pod:"coredns-66bc5c9577-cj4z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e4d3c24c3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.310 [INFO][5439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.310 [INFO][5439] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" iface="eth0" netns="" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.310 [INFO][5439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.310 [INFO][5439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.349 [INFO][5446] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.349 [INFO][5446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.349 [INFO][5446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.370 [WARNING][5446] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.370 [INFO][5446] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.378 [INFO][5446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:45.386103 containerd[1501]: 2026-01-28 01:01:45.382 [INFO][5439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.386103 containerd[1501]: time="2026-01-28T01:01:45.386166285Z" level=info msg="TearDown network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" successfully" Jan 28 01:01:45.386103 containerd[1501]: time="2026-01-28T01:01:45.386206261Z" level=info msg="StopPodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" returns successfully" Jan 28 01:01:45.389808 containerd[1501]: time="2026-01-28T01:01:45.389773760Z" level=info msg="RemovePodSandbox for \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" Jan 28 01:01:45.389905 containerd[1501]: time="2026-01-28T01:01:45.389819947Z" level=info msg="Forcibly stopping sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\"" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.471 [WARNING][5461] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"054a4d87-77d7-4fd5-ba18-4966e01b6356", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 0, 59, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-8h12l.gb1.brightbox.com", ContainerID:"6d50a8f3d0adab6d201c90d8a16ab085b15731d3027b4cfc79fa74aac8704809", Pod:"coredns-66bc5c9577-cj4z5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.113.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4e4d3c24c3f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.472 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.472 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" iface="eth0" netns="" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.472 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.472 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.509 [INFO][5468] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.510 [INFO][5468] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.510 [INFO][5468] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.520 [WARNING][5468] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.520 [INFO][5468] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" HandleID="k8s-pod-network.a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Workload="srv--8h12l.gb1.brightbox.com-k8s-coredns--66bc5c9577--cj4z5-eth0" Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.523 [INFO][5468] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 01:01:45.527532 containerd[1501]: 2026-01-28 01:01:45.525 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6" Jan 28 01:01:45.529250 containerd[1501]: time="2026-01-28T01:01:45.527521518Z" level=info msg="TearDown network for sandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" successfully" Jan 28 01:01:45.534469 containerd[1501]: time="2026-01-28T01:01:45.534403828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 01:01:45.534561 containerd[1501]: time="2026-01-28T01:01:45.534524935Z" level=info msg="RemovePodSandbox \"a362bcefe5ae13aa379175c177ebcee781075278d168b98be8d3aa82af72b3c6\" returns successfully" Jan 28 01:01:45.698762 systemd[1]: Started sshd@16-10.244.8.18:22-68.220.241.50:47780.service - OpenSSH per-connection server daemon (68.220.241.50:47780). Jan 28 01:01:46.314878 sshd[5475]: Accepted publickey for core from 68.220.241.50 port 47780 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:46.317475 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:46.326614 systemd-logind[1484]: New session 19 of user core. 
Jan 28 01:01:46.336649 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 01:01:47.085526 sshd[5475]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:47.093651 systemd[1]: sshd@16-10.244.8.18:22-68.220.241.50:47780.service: Deactivated successfully. Jan 28 01:01:47.097973 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 01:01:47.099509 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Jan 28 01:01:47.101837 systemd-logind[1484]: Removed session 19. Jan 28 01:01:47.196829 systemd[1]: Started sshd@17-10.244.8.18:22-68.220.241.50:47792.service - OpenSSH per-connection server daemon (68.220.241.50:47792). Jan 28 01:01:47.778783 sshd[5488]: Accepted publickey for core from 68.220.241.50 port 47792 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:47.781422 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:47.790531 systemd-logind[1484]: New session 20 of user core. Jan 28 01:01:47.797581 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 01:01:48.880148 sshd[5488]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:48.889707 systemd[1]: sshd@17-10.244.8.18:22-68.220.241.50:47792.service: Deactivated successfully. Jan 28 01:01:48.897092 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 01:01:48.903491 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Jan 28 01:01:48.907424 systemd-logind[1484]: Removed session 20. Jan 28 01:01:48.990817 systemd[1]: Started sshd@18-10.244.8.18:22-68.220.241.50:47794.service - OpenSSH per-connection server daemon (68.220.241.50:47794). Jan 28 01:01:49.616709 sshd[5526]: Accepted publickey for core from 68.220.241.50 port 47794 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:49.620511 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:49.630733 systemd-logind[1484]: New session 21 of user core. Jan 28 01:01:49.640670 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 01:01:51.222260 sshd[5526]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:51.243653 systemd[1]: sshd@18-10.244.8.18:22-68.220.241.50:47794.service: Deactivated successfully. Jan 28 01:01:51.248377 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 01:01:51.250622 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Jan 28 01:01:51.252740 systemd-logind[1484]: Removed session 21. Jan 28 01:01:51.329841 systemd[1]: Started sshd@19-10.244.8.18:22-68.220.241.50:47804.service - OpenSSH per-connection server daemon (68.220.241.50:47804). Jan 28 01:01:51.949102 sshd[5544]: Accepted publickey for core from 68.220.241.50 port 47804 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:51.954488 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:51.962488 systemd-logind[1484]: New session 22 of user core. Jan 28 01:01:51.970656 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 28 01:01:52.348342 kubelet[2692]: E0128 01:01:52.347715 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:01:52.872444 sshd[5544]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:52.879236 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Jan 28 01:01:52.881232 systemd[1]: sshd@19-10.244.8.18:22-68.220.241.50:47804.service: Deactivated successfully. Jan 28 01:01:52.887767 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 01:01:52.890346 systemd-logind[1484]: Removed session 22. Jan 28 01:01:52.980423 systemd[1]: Started sshd@20-10.244.8.18:22-68.220.241.50:53466.service - OpenSSH per-connection server daemon (68.220.241.50:53466). Jan 28 01:01:53.576059 sshd[5558]: Accepted publickey for core from 68.220.241.50 port 53466 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:53.578326 sshd[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:53.585643 systemd-logind[1484]: New session 23 of user core. Jan 28 01:01:53.592608 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 01:01:54.080884 sshd[5558]: pam_unix(sshd:session): session closed for user core Jan 28 01:01:54.090182 systemd[1]: sshd@20-10.244.8.18:22-68.220.241.50:53466.service: Deactivated successfully. Jan 28 01:01:54.093796 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 01:01:54.097099 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Jan 28 01:01:54.100638 systemd-logind[1484]: Removed session 23. 
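The ImagePullBackOff entries above and below all point at the same root cause: the ghcr.io/flatcar/calico images are requested at tag v3.30.4 and the registry answers "not found" for that reference. One way to confirm whether the tag exists at all is to ask the registry directly; the sketch below assumes GHCR follows the standard Docker Registry HTTP API v2 with anonymous pull tokens for public packages, and is a hypothetical diagnostic, not something run on this host.

import json
import urllib.request

REPO = "flatcar/calico/apiserver"  # repository from the failing image reference
TAG = "v3.30.4"                    # tag the kubelet keeps failing to resolve

# Anonymous pull token for a public GHCR repository (standard registry token flow).
token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

# List the tags the registry actually advertises and check for the one in the log.
req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/tags/list",
    headers={"Authorization": f"Bearer {token}"},
)
with urllib.request.urlopen(req) as resp:
    tags = json.load(resp).get("tags", [])

print(f"{TAG} present:", TAG in tags)
print("sample of advertised tags:", tags[:20])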
Jan 28 01:01:56.348426 kubelet[2692]: E0128 01:01:56.348250 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:01:58.346244 kubelet[2692]: E0128 01:01:58.346066 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:01:58.346244 kubelet[2692]: E0128 01:01:58.346170 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:01:59.205817 systemd[1]: Started sshd@21-10.244.8.18:22-68.220.241.50:53476.service - OpenSSH per-connection server daemon (68.220.241.50:53476). Jan 28 01:01:59.787773 sshd[5570]: Accepted publickey for core from 68.220.241.50 port 53476 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:01:59.790345 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:01:59.799253 systemd-logind[1484]: New session 24 of user core. Jan 28 01:01:59.810691 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 28 01:02:00.347903 kubelet[2692]: E0128 01:02:00.346942 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:02:00.364729 sshd[5570]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:00.373805 kubelet[2692]: E0128 01:02:00.370773 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:02:00.379387 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Jan 28 01:02:00.379679 systemd[1]: sshd@21-10.244.8.18:22-68.220.241.50:53476.service: Deactivated successfully. Jan 28 01:02:00.387032 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 01:02:00.389714 systemd-logind[1484]: Removed session 24. Jan 28 01:02:03.346590 kubelet[2692]: E0128 01:02:03.346450 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5" Jan 28 01:02:05.474026 systemd[1]: Started sshd@22-10.244.8.18:22-68.220.241.50:51772.service - OpenSSH per-connection server daemon (68.220.241.50:51772). Jan 28 01:02:06.074402 sshd[5585]: Accepted publickey for core from 68.220.241.50 port 51772 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:02:06.077205 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:06.091990 systemd-logind[1484]: New session 25 of user core. Jan 28 01:02:06.098586 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 28 01:02:06.663640 sshd[5585]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:06.678686 systemd[1]: sshd@22-10.244.8.18:22-68.220.241.50:51772.service: Deactivated successfully. Jan 28 01:02:06.678830 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit. Jan 28 01:02:06.688037 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 01:02:06.692008 systemd-logind[1484]: Removed session 25. Jan 28 01:02:08.349780 kubelet[2692]: E0128 01:02:08.349594 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-v8h92" podUID="d6fe3f19-c2cb-4440-ac98-4f17244eae9f" Jan 28 01:02:11.351149 kubelet[2692]: E0128 01:02:11.350564 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-9r9k6" podUID="eb7cb13d-31ca-4384-944f-1754705dfa3e" Jan 28 01:02:11.770444 systemd[1]: Started sshd@23-10.244.8.18:22-68.220.241.50:51774.service - OpenSSH per-connection server daemon (68.220.241.50:51774). Jan 28 01:02:12.353609 containerd[1501]: time="2026-01-28T01:02:12.353370911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 01:02:12.361314 sshd[5597]: Accepted publickey for core from 68.220.241.50 port 51774 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 01:02:12.367919 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 01:02:12.382474 systemd-logind[1484]: New session 26 of user core. Jan 28 01:02:12.392575 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jan 28 01:02:12.700173 containerd[1501]: time="2026-01-28T01:02:12.699432941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:02:12.703014 containerd[1501]: time="2026-01-28T01:02:12.702927944Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 01:02:12.703303 containerd[1501]: time="2026-01-28T01:02:12.702989186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 01:02:12.704869 kubelet[2692]: E0128 01:02:12.703700 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:02:12.704869 kubelet[2692]: E0128 01:02:12.703909 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 01:02:12.706355 containerd[1501]: time="2026-01-28T01:02:12.704544956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 01:02:12.706674 kubelet[2692]: E0128 01:02:12.706444 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6768b4f5db-r4vpr_calico-apiserver(4ef09ae9-4abf-45ab-835f-f8b9901cd23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 01:02:12.707130 kubelet[2692]: E0128 01:02:12.707028 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-r4vpr" podUID="4ef09ae9-4abf-45ab-835f-f8b9901cd23b" Jan 28 01:02:12.979111 sshd[5597]: pam_unix(sshd:session): session closed for user core Jan 28 01:02:12.989420 systemd[1]: sshd@23-10.244.8.18:22-68.220.241.50:51774.service: Deactivated successfully. Jan 28 01:02:12.994571 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 01:02:12.995971 systemd-logind[1484]: Session 26 logged out. Waiting for processes to exit. Jan 28 01:02:12.998747 systemd-logind[1484]: Removed session 26. 
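The sequence above shows how one failed pull propagates: containerd gets the 404 ("trying next host - response was http.StatusNotFound"), the kubelet records ErrImagePull, and later sync attempts switch to ImagePullBackOff with a growing delay between retries. The sketch below illustrates that back-off schedule, assuming the commonly documented kubelet defaults of a 10-second initial delay doubling up to a 5-minute cap; these values are illustrative and were not read from this node's configuration.

def image_pull_backoff(initial=10.0, cap=300.0, factor=2.0, attempts=8):
    """Return the waits, in seconds, between successive failed pull attempts."""
    delays, delay = [], initial
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * factor, cap)
    return delays

# With the assumed defaults: [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]
print(image_pull_backoff())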
Jan 28 01:02:13.045995 containerd[1501]: time="2026-01-28T01:02:13.045346150Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:02:13.048508 containerd[1501]: time="2026-01-28T01:02:13.048245049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 01:02:13.048508 containerd[1501]: time="2026-01-28T01:02:13.048413910Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 01:02:13.049182 kubelet[2692]: E0128 01:02:13.048940 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:02:13.049182 kubelet[2692]: E0128 01:02:13.049133 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 01:02:13.049852 kubelet[2692]: E0128 01:02:13.049640 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-7fcd5d865b-hrj24_calico-system(1af325e3-7600-48af-bd7f-f8e9f715489b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 01:02:13.049852 kubelet[2692]: E0128 01:02:13.049737 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fcd5d865b-hrj24" podUID="1af325e3-7600-48af-bd7f-f8e9f715489b" Jan 28 01:02:13.348215 containerd[1501]: time="2026-01-28T01:02:13.347129216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 01:02:13.672992 containerd[1501]: time="2026-01-28T01:02:13.672927288Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:02:13.674885 containerd[1501]: time="2026-01-28T01:02:13.674773149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 01:02:13.675062 containerd[1501]: 
time="2026-01-28T01:02:13.674794272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 01:02:13.675531 kubelet[2692]: E0128 01:02:13.675453 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:02:13.676507 kubelet[2692]: E0128 01:02:13.675526 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 01:02:13.676507 kubelet[2692]: E0128 01:02:13.675674 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 01:02:13.677300 containerd[1501]: time="2026-01-28T01:02:13.677223502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 01:02:13.996687 containerd[1501]: time="2026-01-28T01:02:13.996512613Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 01:02:14.001459 containerd[1501]: time="2026-01-28T01:02:14.001094906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 01:02:14.001459 containerd[1501]: time="2026-01-28T01:02:14.001220737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 01:02:14.003166 kubelet[2692]: E0128 01:02:14.003066 2692 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:02:14.005352 kubelet[2692]: E0128 01:02:14.003339 2692 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 01:02:14.005352 kubelet[2692]: E0128 01:02:14.003511 2692 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-694cd9684d-pgqjc_calico-system(95e6d4a0-89ab-461c-a749-32d8a8aa1de6): ErrImagePull: rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 01:02:14.005352 kubelet[2692]: E0128 01:02:14.003598 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-694cd9684d-pgqjc" podUID="95e6d4a0-89ab-461c-a749-32d8a8aa1de6" Jan 28 01:02:16.347086 kubelet[2692]: E0128 01:02:16.346947 2692 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6768b4f5db-5thvw" podUID="215504c8-12e3-45d1-b60d-0c358a1645a5"
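Earlier in this section the same Calico teardown pattern repeats for every stale sandbox: the CNI plugin acquires the host-wide IPAM lock, tries to release the address by handle ID, falls back to the workload ID, and downgrades "Asked to release address but it doesn't exist" to a warning so RemovePodSandbox can still return successfully. The sketch below mirrors that idempotent-release shape; the lock, in-memory store, and function names are hypothetical stand-ins, not Calico's actual API.

import threading

host_wide_ipam_lock = threading.Lock()
allocations = {}  # handle_id -> IP address, a stand-in for the real datastore


def release_address(handle_id: str, workload_id: str) -> None:
    with host_wide_ipam_lock:                        # "Acquired host-wide IPAM lock."
        if allocations.pop(handle_id, None) is None:
            # "Asked to release address but it doesn't exist. Ignoring ..."
            print(f"WARNING: no allocation for handle {handle_id}, ignoring")
        # "Releasing address using workloadID ..." (legacy fallback path)
        allocations.pop(workload_id, None)
    # "Released host-wide IPAM lock." -- teardown continues either way.


release_address("k8s-pod-network.f60f413ed4a8", "csi-node-driver-v8h92")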