Jan 28 02:04:59.014817 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Jan 27 23:02:38 -00 2026
Jan 28 02:04:59.014851 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:04:59.014864 kernel: BIOS-provided physical RAM map:
Jan 28 02:04:59.014879 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 28 02:04:59.014888 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 28 02:04:59.014898 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 28 02:04:59.014909 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Jan 28 02:04:59.014919 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Jan 28 02:04:59.014928 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 28 02:04:59.014938 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 28 02:04:59.014948 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 28 02:04:59.014958 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 28 02:04:59.014973 kernel: NX (Execute Disable) protection: active
Jan 28 02:04:59.014983 kernel: APIC: Static calls initialized
Jan 28 02:04:59.014995 kernel: SMBIOS 2.8 present.
Jan 28 02:04:59.015006 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Jan 28 02:04:59.015016 kernel: Hypervisor detected: KVM
Jan 28 02:04:59.015031 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 28 02:04:59.015042 kernel: kvm-clock: using sched offset of 4414418549 cycles
Jan 28 02:04:59.015053 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 28 02:04:59.015064 kernel: tsc: Detected 2799.998 MHz processor
Jan 28 02:04:59.015090 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 28 02:04:59.015100 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 28 02:04:59.015109 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Jan 28 02:04:59.015118 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 28 02:04:59.015128 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 28 02:04:59.015146 kernel: Using GB pages for direct mapping
Jan 28 02:04:59.015156 kernel: ACPI: Early table checksum verification disabled
Jan 28 02:04:59.015165 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 28 02:04:59.015175 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015185 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015202 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015212 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Jan 28 02:04:59.015221 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015243 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015257 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015267 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 28 02:04:59.015277 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Jan 28 02:04:59.015287 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Jan 28 02:04:59.015297 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Jan 28 02:04:59.015312 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Jan 28 02:04:59.015328 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Jan 28 02:04:59.015342 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Jan 28 02:04:59.015353 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Jan 28 02:04:59.015363 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 28 02:04:59.015373 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 28 02:04:59.015383 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 28 02:04:59.015394 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Jan 28 02:04:59.015404 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 28 02:04:59.015414 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Jan 28 02:04:59.015428 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 28 02:04:59.015438 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Jan 28 02:04:59.015456 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 28 02:04:59.015466 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Jan 28 02:04:59.015489 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 28 02:04:59.015499 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Jan 28 02:04:59.015510 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 28 02:04:59.015520 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Jan 28 02:04:59.015531 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 28 02:04:59.015545 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Jan 28 02:04:59.015569 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 28 02:04:59.015580 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 28 02:04:59.015591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Jan 28 02:04:59.015602 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Jan 28 02:04:59.016703 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Jan 28 02:04:59.016715 kernel: Zone ranges:
Jan 28 02:04:59.016728 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 28 02:04:59.016739 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Jan 28 02:04:59.016755 kernel: Normal empty
Jan 28 02:04:59.016766 kernel: Movable zone start for each node
Jan 28 02:04:59.016790 kernel: Early memory node ranges
Jan 28 02:04:59.016811 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 28 02:04:59.016823 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Jan 28 02:04:59.016853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Jan 28 02:04:59.016864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 28 02:04:59.016876 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 28 02:04:59.016887 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Jan 28 02:04:59.016898 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 28 02:04:59.016915 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 28 02:04:59.016927 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 28 02:04:59.016938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 28 02:04:59.016950 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 28 02:04:59.016961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 28 02:04:59.016973 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 28 02:04:59.016984 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 28 02:04:59.016996 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 28 02:04:59.017007 kernel: TSC deadline timer available
Jan 28 02:04:59.017023 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Jan 28 02:04:59.017034 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 28 02:04:59.017046 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 28 02:04:59.017057 kernel: Booting paravirtualized kernel on KVM
Jan 28 02:04:59.017069 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 28 02:04:59.017080 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Jan 28 02:04:59.017091 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Jan 28 02:04:59.017103 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Jan 28 02:04:59.017114 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Jan 28 02:04:59.017130 kernel: kvm-guest: PV spinlocks enabled
Jan 28 02:04:59.017141 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 28 02:04:59.017166 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:04:59.017177 kernel: random: crng init done
Jan 28 02:04:59.017187 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 28 02:04:59.017207 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 28 02:04:59.017218 kernel: Fallback order for Node 0: 0
Jan 28 02:04:59.017241 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Jan 28 02:04:59.017257 kernel: Policy zone: DMA32
Jan 28 02:04:59.017268 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 28 02:04:59.017279 kernel: software IO TLB: area num 16.
Jan 28 02:04:59.017302 kernel: Memory: 1901604K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 194752K reserved, 0K cma-reserved)
Jan 28 02:04:59.017313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Jan 28 02:04:59.017324 kernel: Kernel/User page tables isolation: enabled
Jan 28 02:04:59.017335 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 28 02:04:59.017345 kernel: ftrace: allocated 149 pages with 4 groups
Jan 28 02:04:59.017356 kernel: Dynamic Preempt: voluntary
Jan 28 02:04:59.017378 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 28 02:04:59.017389 kernel: rcu: RCU event tracing is enabled.
Jan 28 02:04:59.017400 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Jan 28 02:04:59.017411 kernel: Trampoline variant of Tasks RCU enabled.
Jan 28 02:04:59.017433 kernel: Rude variant of Tasks RCU enabled.
Jan 28 02:04:59.017454 kernel: Tracing variant of Tasks RCU enabled.
Jan 28 02:04:59.017479 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 28 02:04:59.017491 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Jan 28 02:04:59.017502 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Jan 28 02:04:59.017514 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 28 02:04:59.017532 kernel: Console: colour VGA+ 80x25
Jan 28 02:04:59.017543 kernel: printk: console [tty0] enabled
Jan 28 02:04:59.017559 kernel: printk: console [ttyS0] enabled
Jan 28 02:04:59.017571 kernel: ACPI: Core revision 20230628
Jan 28 02:04:59.017582 kernel: APIC: Switch to symmetric I/O mode setup
Jan 28 02:04:59.017594 kernel: x2apic enabled
Jan 28 02:04:59.017618 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 28 02:04:59.017643 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 28 02:04:59.017656 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 28 02:04:59.017679 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 28 02:04:59.017691 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 28 02:04:59.017703 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 28 02:04:59.017714 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 28 02:04:59.017725 kernel: Spectre V2 : Mitigation: Retpolines
Jan 28 02:04:59.017737 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 28 02:04:59.017749 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Jan 28 02:04:59.017765 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 28 02:04:59.017789 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 28 02:04:59.017810 kernel: MDS: Mitigation: Clear CPU buffers
Jan 28 02:04:59.017826 kernel: MMIO Stale Data: Unknown: No mitigations
Jan 28 02:04:59.017838 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 28 02:04:59.017849 kernel: active return thunk: its_return_thunk
Jan 28 02:04:59.017861 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 28 02:04:59.017873 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 28 02:04:59.017885 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 28 02:04:59.017897 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 28 02:04:59.017908 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 28 02:04:59.017925 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jan 28 02:04:59.017938 kernel: Freeing SMP alternatives memory: 32K
Jan 28 02:04:59.017949 kernel: pid_max: default: 32768 minimum: 301
Jan 28 02:04:59.017961 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 28 02:04:59.017973 kernel: landlock: Up and running.
Jan 28 02:04:59.017985 kernel: SELinux: Initializing.
Jan 28 02:04:59.017997 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 28 02:04:59.018009 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 28 02:04:59.018021 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Jan 28 02:04:59.018033 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:04:59.018045 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:04:59.018061 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Jan 28 02:04:59.018074 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Jan 28 02:04:59.018086 kernel: signal: max sigframe size: 1776
Jan 28 02:04:59.018098 kernel: rcu: Hierarchical SRCU implementation.
Jan 28 02:04:59.018110 kernel: rcu: Max phase no-delay instances is 400.
Jan 28 02:04:59.018122 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 28 02:04:59.018134 kernel: smp: Bringing up secondary CPUs ...
Jan 28 02:04:59.018146 kernel: smpboot: x86: Booting SMP configuration:
Jan 28 02:04:59.018158 kernel: .... node #0, CPUs: #1
Jan 28 02:04:59.018195 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 28 02:04:59.018206 kernel: smp: Brought up 1 node, 2 CPUs
Jan 28 02:04:59.018217 kernel: smpboot: Max logical packages: 16
Jan 28 02:04:59.018251 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Jan 28 02:04:59.018263 kernel: devtmpfs: initialized
Jan 28 02:04:59.018274 kernel: x86/mm: Memory block size: 128MB
Jan 28 02:04:59.018286 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 28 02:04:59.018310 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Jan 28 02:04:59.018321 kernel: pinctrl core: initialized pinctrl subsystem
Jan 28 02:04:59.018337 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 28 02:04:59.018348 kernel: audit: initializing netlink subsys (disabled)
Jan 28 02:04:59.018371 kernel: audit: type=2000 audit(1769565897.714:1): state=initialized audit_enabled=0 res=1
Jan 28 02:04:59.018383 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 28 02:04:59.018398 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 28 02:04:59.018410 kernel: cpuidle: using governor menu
Jan 28 02:04:59.018421 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 28 02:04:59.018433 kernel: dca service started, version 1.12.1
Jan 28 02:04:59.018445 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 28 02:04:59.018483 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 28 02:04:59.018496 kernel: PCI: Using configuration type 1 for base access
Jan 28 02:04:59.018508 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 28 02:04:59.018520 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 28 02:04:59.018532 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 28 02:04:59.018544 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 28 02:04:59.018556 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 28 02:04:59.018568 kernel: ACPI: Added _OSI(Module Device)
Jan 28 02:04:59.018580 kernel: ACPI: Added _OSI(Processor Device)
Jan 28 02:04:59.018596 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 28 02:04:59.018609 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 28 02:04:59.020014 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 28 02:04:59.020028 kernel: ACPI: Interpreter enabled
Jan 28 02:04:59.020040 kernel: ACPI: PM: (supports S0 S5)
Jan 28 02:04:59.020052 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 28 02:04:59.020065 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 28 02:04:59.020077 kernel: PCI: Using E820 reservations for host bridge windows
Jan 28 02:04:59.020089 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 28 02:04:59.020108 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 28 02:04:59.022727 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 28 02:04:59.022937 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 28 02:04:59.023101 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 28 02:04:59.023120 kernel: PCI host bridge to bus 0000:00
Jan 28 02:04:59.023322 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 28 02:04:59.023475 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 28 02:04:59.023682 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 28 02:04:59.023873 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Jan 28 02:04:59.024014 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 28 02:04:59.024154 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Jan 28 02:04:59.024310 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 28 02:04:59.024542 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 28 02:04:59.024761 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Jan 28 02:04:59.024934 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Jan 28 02:04:59.025110 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Jan 28 02:04:59.025292 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Jan 28 02:04:59.025463 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 28 02:04:59.025661 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.025840 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Jan 28 02:04:59.026028 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.026208 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Jan 28 02:04:59.026404 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.026595 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Jan 28 02:04:59.028848 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.029015 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Jan 28 02:04:59.029203 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.029383 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Jan 28 02:04:59.029578 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.029751 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Jan 28 02:04:59.029942 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.030100 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Jan 28 02:04:59.030280 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Jan 28 02:04:59.030470 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Jan 28 02:04:59.033697 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Jan 28 02:04:59.033888 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 28 02:04:59.034050 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Jan 28 02:04:59.034208 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Jan 28 02:04:59.034365 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Jan 28 02:04:59.034585 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Jan 28 02:04:59.034756 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Jan 28 02:04:59.034938 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Jan 28 02:04:59.035105 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Jan 28 02:04:59.035281 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 28 02:04:59.035450 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 28 02:04:59.039634 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 28 02:04:59.039867 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Jan 28 02:04:59.040033 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Jan 28 02:04:59.040233 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 28 02:04:59.040383 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 28 02:04:59.040582 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Jan 28 02:04:59.040784 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Jan 28 02:04:59.040982 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 28 02:04:59.041174 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 28 02:04:59.041353 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:04:59.041534 kernel: pci_bus 0000:02: extended config space not accessible
Jan 28 02:04:59.044862 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Jan 28 02:04:59.045057 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Jan 28 02:04:59.045262 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 28 02:04:59.045460 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 28 02:04:59.046634 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Jan 28 02:04:59.046863 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Jan 28 02:04:59.047030 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 28 02:04:59.047190 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 28 02:04:59.047371 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:04:59.047560 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Jan 28 02:04:59.049650 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Jan 28 02:04:59.049883 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 28 02:04:59.050049 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 28 02:04:59.050209 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:04:59.050399 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 28 02:04:59.052582 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 28 02:04:59.052813 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:04:59.053000 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 28 02:04:59.053170 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 28 02:04:59.053350 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:04:59.053504 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 28 02:04:59.053680 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 28 02:04:59.053860 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:04:59.054021 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 28 02:04:59.054178 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 28 02:04:59.054366 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:04:59.054526 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 28 02:04:59.056750 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 28 02:04:59.056948 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:04:59.056969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 28 02:04:59.056983 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 28 02:04:59.056995 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 28 02:04:59.057008 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 28 02:04:59.057028 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 28 02:04:59.057041 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 28 02:04:59.057053 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 28 02:04:59.057065 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 28 02:04:59.057077 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 28 02:04:59.057097 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 28 02:04:59.057108 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 28 02:04:59.057121 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 28 02:04:59.057132 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 28 02:04:59.057150 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 28 02:04:59.057162 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 28 02:04:59.057174 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 28 02:04:59.057195 kernel: iommu: Default domain type: Translated
Jan 28 02:04:59.057208 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 28 02:04:59.057220 kernel: PCI: Using ACPI for IRQ routing
Jan 28 02:04:59.057232 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 28 02:04:59.057244 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 28 02:04:59.057265 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Jan 28 02:04:59.057450 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 28 02:04:59.057619 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 28 02:04:59.057765 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 28 02:04:59.057794 kernel: vgaarb: loaded
Jan 28 02:04:59.057818 kernel: clocksource: Switched to clocksource kvm-clock
Jan 28 02:04:59.057829 kernel: VFS: Disk quotas dquot_6.6.0
Jan 28 02:04:59.057841 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 28 02:04:59.057861 kernel: pnp: PnP ACPI init
Jan 28 02:04:59.058042 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 28 02:04:59.058076 kernel: pnp: PnP ACPI: found 5 devices
Jan 28 02:04:59.058089 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 28 02:04:59.058101 kernel: NET: Registered PF_INET protocol family
Jan 28 02:04:59.058113 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 28 02:04:59.058126 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 28 02:04:59.058146 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 28 02:04:59.058158 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 28 02:04:59.058170 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 28 02:04:59.058187 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 28 02:04:59.058209 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 28 02:04:59.058221 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 28 02:04:59.058233 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 28 02:04:59.058245 kernel: NET: Registered PF_XDP protocol family
Jan 28 02:04:59.058427 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Jan 28 02:04:59.060607 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 28 02:04:59.060843 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 28 02:04:59.061017 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 28 02:04:59.061209 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 28 02:04:59.061392 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 28 02:04:59.061574 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 28 02:04:59.061762 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 28 02:04:59.061945 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Jan 28 02:04:59.062135 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Jan 28 02:04:59.062310 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Jan 28 02:04:59.062468 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Jan 28 02:04:59.064717 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Jan 28 02:04:59.064900 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Jan 28 02:04:59.065063 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Jan 28 02:04:59.065250 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Jan 28 02:04:59.065426 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Jan 28 02:04:59.065612 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Jan 28 02:04:59.065810 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Jan 28 02:04:59.065977 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 28 02:04:59.066156 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Jan 28 02:04:59.066305 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:04:59.066484 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Jan 28 02:04:59.068716 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 28 02:04:59.068922 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Jan 28 02:04:59.069102 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:04:59.069264 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Jan 28 02:04:59.069416 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 28 02:04:59.069586 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Jan 28 02:04:59.069764 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:04:59.069953 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Jan 28 02:04:59.070130 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 28 02:04:59.070282 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Jan 28 02:04:59.070456 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:04:59.075876 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Jan 28 02:04:59.076058 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 28 02:04:59.076237 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Jan 28 02:04:59.076430 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:04:59.076625 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Jan 28 02:04:59.076867 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 28 02:04:59.077036 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Jan 28 02:04:59.077204 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:04:59.077386 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Jan 28 02:04:59.077540 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 28 02:04:59.077698 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Jan 28 02:04:59.077905 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:04:59.078063 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Jan 28 02:04:59.078219 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 28 02:04:59.078387 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Jan 28 02:04:59.078525 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:04:59.078705 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 28 02:04:59.078876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 28 02:04:59.079021 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 28 02:04:59.079167 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Jan 28 02:04:59.079346 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 28 02:04:59.079503 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Jan 28 02:04:59.079681 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Jan 28 02:04:59.079868 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Jan 28 02:04:59.080022 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Jan 28 02:04:59.080197 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Jan 28 02:04:59.080364 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Jan 28 02:04:59.080529 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Jan 28 02:04:59.080743 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Jan 28 02:04:59.080934 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Jan 28 02:04:59.081085 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Jan 28 02:04:59.081266 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Jan 28 02:04:59.081440 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Jan 28 02:04:59.081607 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Jan 28 02:04:59.081806 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Jan 28 02:04:59.081994 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Jan 28 02:04:59.082155 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Jan 28 02:04:59.082338 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Jan 28 02:04:59.082518 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Jan 28 02:04:59.086742 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Jan 28 02:04:59.086934 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Jan 28 02:04:59.087096 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Jan 28 02:04:59.087245 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Jan 28 02:04:59.087411 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Jan 28 02:04:59.087567 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Jan 28 02:04:59.087732 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Jan 28 02:04:59.087901 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Jan 28 02:04:59.087929 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 28 02:04:59.087943 kernel: PCI: CLS 0 bytes, default 64
Jan 28 02:04:59.087956 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 28 02:04:59.087969 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Jan 28 02:04:59.087981 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 28 02:04:59.087994 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Jan 28 02:04:59.088007 kernel: Initialise system trusted keyrings
Jan 28 02:04:59.088020 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Jan 28 02:04:59.088038 kernel: Key type asymmetric registered
Jan 28 02:04:59.088051 kernel: Asymmetric key parser 'x509' registered
Jan 28 02:04:59.088063 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 28 02:04:59.088076 kernel: io scheduler mq-deadline registered
Jan 28 02:04:59.088098 kernel: io scheduler kyber registered
Jan 28 02:04:59.088110 kernel: io scheduler bfq registered
Jan 28 02:04:59.088268 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 28 02:04:59.088445 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 28 02:04:59.090659 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.090873 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 28 02:04:59.091036 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 28 02:04:59.091196 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.091356 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 28 02:04:59.091525 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 28 02:04:59.091734 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.091938 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 28 02:04:59.092097 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 28 02:04:59.092279 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.092447 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 28 02:04:59.094645 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 28 02:04:59.094840 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.095018 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 28 02:04:59.095179 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 28 02:04:59.095339 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.095539 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 28 02:04:59.095775 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 28 02:04:59.095952 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.096120 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 28 02:04:59.096277 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 28 02:04:59.096435 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Jan 28 02:04:59.096455 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 28 02:04:59.096469 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 28 02:04:59.096482 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 28 02:04:59.096502 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 28 02:04:59.096515 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 28 02:04:59.096528 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 28 02:04:59.096541 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 28 02:04:59.097604 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 28 02:04:59.097806 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 28 02:04:59.097828 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 28 02:04:59.097976 kernel: rtc_cmos 00:03: registered as rtc0
Jan 28 02:04:59.098141 kernel: rtc_cmos 00:03: setting system clock to 2026-01-28T02:04:58 UTC (1769565898)
Jan 28 02:04:59.098289 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Jan 28 02:04:59.098307 kernel: intel_pstate: CPU model not supported
Jan 28 02:04:59.098324 kernel: NET: Registered PF_INET6 protocol family
Jan 28 02:04:59.098337 kernel: Segment Routing with IPv6
Jan 28 02:04:59.098349 kernel: In-situ OAM (IOAM) with IPv6
Jan 28 02:04:59.098362 kernel: NET: Registered PF_PACKET protocol family
Jan 28 02:04:59.098381 kernel: Key type dns_resolver registered
Jan 28 02:04:59.098393 kernel: IPI shorthand broadcast: enabled
Jan 28 02:04:59.098413 kernel: sched_clock: Marking stable (1238025052, 223327445)->(1583071897, -121719400)
Jan 28 02:04:59.098426 kernel: registered taskstats version 1
Jan 28 02:04:59.098442 kernel: Loading compiled-in X.509 certificates
Jan 28 02:04:59.098455 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 828aa81885d7116cb1bcfd05d35b5b0a881d685d'
Jan 28 02:04:59.098468 kernel: Key type .fscrypt registered
Jan 28 02:04:59.098492 kernel: Key type fscrypt-provisioning registered
Jan 28 02:04:59.098505 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 28 02:04:59.098517 kernel: ima: Allocated hash algorithm: sha1
Jan 28 02:04:59.098529 kernel: ima: No architecture policies found
Jan 28 02:04:59.098546 kernel: clk: Disabling unused clocks
Jan 28 02:04:59.098579 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 28 02:04:59.098625 kernel: Write protecting the kernel read-only data: 36864k
Jan 28 02:04:59.098641 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 28 02:04:59.098653 kernel: Run /init as init process
Jan 28 02:04:59.098664 kernel: with arguments:
Jan 28 02:04:59.098676 kernel: /init
Jan 28 02:04:59.098687 kernel: with environment:
Jan 28 02:04:59.098698 kernel: HOME=/
Jan 28 02:04:59.098713 kernel: TERM=linux
Jan 28 02:04:59.098733 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 28 02:04:59.098748 systemd[1]: Detected virtualization kvm.
Jan 28 02:04:59.098760 systemd[1]: Detected architecture x86-64.
Jan 28 02:04:59.098806 systemd[1]: Running in initrd.
Jan 28 02:04:59.098821 systemd[1]: No hostname configured, using default hostname.
Jan 28 02:04:59.098834 systemd[1]: Hostname set to .
Jan 28 02:04:59.098848 systemd[1]: Initializing machine ID from VM UUID.
Jan 28 02:04:59.098867 systemd[1]: Queued start job for default target initrd.target.
Jan 28 02:04:59.098881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 28 02:04:59.098894 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 28 02:04:59.098908 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 28 02:04:59.098922 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 28 02:04:59.098935 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 28 02:04:59.098949 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 28 02:04:59.098970 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 28 02:04:59.098984 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 28 02:04:59.098997 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 28 02:04:59.099011 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 28 02:04:59.099024 systemd[1]: Reached target paths.target - Path Units.
Jan 28 02:04:59.099038 systemd[1]: Reached target slices.target - Slice Units.
Jan 28 02:04:59.099051 systemd[1]: Reached target swap.target - Swaps.
Jan 28 02:04:59.099064 systemd[1]: Reached target timers.target - Timer Units.
Jan 28 02:04:59.099092 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 28 02:04:59.099106 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 28 02:04:59.099120 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 28 02:04:59.099133 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 28 02:04:59.099165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 28 02:04:59.099178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 28 02:04:59.099192 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 28 02:04:59.099213 systemd[1]: Reached target sockets.target - Socket Units.
Jan 28 02:04:59.099243 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 28 02:04:59.099256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 28 02:04:59.099269 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 28 02:04:59.099287 systemd[1]: Starting systemd-fsck-usr.service...
Jan 28 02:04:59.099313 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 28 02:04:59.099326 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 28 02:04:59.099339 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:04:59.099419 systemd-journald[202]: Collecting audit messages is disabled.
Jan 28 02:04:59.099455 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 28 02:04:59.099470 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 28 02:04:59.099496 systemd[1]: Finished systemd-fsck-usr.service.
Jan 28 02:04:59.099515 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 28 02:04:59.099535 systemd-journald[202]: Journal started
Jan 28 02:04:59.099559 systemd-journald[202]: Runtime Journal (/run/log/journal/1540a2a1e7ff42b4a56b3fdb94ed3164) is 4.7M, max 38.0M, 33.2M free.
Jan 28 02:04:59.105169 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 28 02:04:59.044053 systemd-modules-load[203]: Inserted module 'overlay'
Jan 28 02:04:59.111899 kernel: Bridge firewalling registered
Jan 28 02:04:59.111924 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 28 02:04:59.108363 systemd-modules-load[203]: Inserted module 'br_netfilter'
Jan 28 02:04:59.115546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 28 02:04:59.116554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:04:59.125771 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:04:59.144862 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 28 02:04:59.150838 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 28 02:04:59.151958 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 28 02:04:59.167852 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 28 02:04:59.170653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 28 02:04:59.177009 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:04:59.186849 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 28 02:04:59.188086 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 28 02:04:59.190049 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 28 02:04:59.196835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 28 02:04:59.207684 dracut-cmdline[233]: dracut-dracut-053
Jan 28 02:04:59.219691 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f534874bafefe5138b6229cc8580e4eb92fdd31d412450780cdc90e6631acdd2
Jan 28 02:04:59.248818 systemd-resolved[237]: Positive Trust Anchors:
Jan 28 02:04:59.249718 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 28 02:04:59.249779 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 28 02:04:59.258281 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 28 02:04:59.260191 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 28 02:04:59.262947 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 28 02:04:59.324625 kernel: SCSI subsystem initialized
Jan 28 02:04:59.336623 kernel: Loading iSCSI transport class v2.0-870.
Jan 28 02:04:59.348589 kernel: iscsi: registered transport (tcp)
Jan 28 02:04:59.374833 kernel: iscsi: registered transport (qla4xxx)
Jan 28 02:04:59.374932 kernel: QLogic iSCSI HBA Driver
Jan 28 02:04:59.427717 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 28 02:04:59.433725 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 28 02:04:59.475002 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 28 02:04:59.475103 kernel: device-mapper: uevent: version 1.0.3
Jan 28 02:04:59.476768 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 28 02:04:59.527697 kernel: raid6: sse2x4 gen() 13166 MB/s
Jan 28 02:04:59.544687 kernel: raid6: sse2x2 gen() 8771 MB/s
Jan 28 02:04:59.563388 kernel: raid6: sse2x1 gen() 9732 MB/s
Jan 28 02:04:59.563477 kernel: raid6: using algorithm sse2x4 gen() 13166 MB/s
Jan 28 02:04:59.582429 kernel: raid6: .... xor() 7295 MB/s, rmw enabled
Jan 28 02:04:59.582507 kernel: raid6: using ssse3x2 recovery algorithm
Jan 28 02:04:59.608676 kernel: xor: automatically using best checksumming function avx
Jan 28 02:04:59.797650 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 28 02:04:59.813545 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 28 02:04:59.821843 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 28 02:04:59.844369 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Jan 28 02:04:59.851001 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 28 02:04:59.858759 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 28 02:04:59.882965 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation
Jan 28 02:04:59.922663 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 28 02:04:59.928784 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 28 02:05:00.049287 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 28 02:05:00.059314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 28 02:05:00.083641 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 28 02:05:00.085266 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 28 02:05:00.087685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 28 02:05:00.091166 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 28 02:05:00.098081 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 28 02:05:00.126620 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 28 02:05:00.171316 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Jan 28 02:05:00.182646 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Jan 28 02:05:00.185655 kernel: cryptd: max_cpu_qlen set to 1000
Jan 28 02:05:00.204750 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 28 02:05:00.204816 kernel: GPT:17805311 != 125829119
Jan 28 02:05:00.204835 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 28 02:05:00.204864 kernel: GPT:17805311 != 125829119
Jan 28 02:05:00.204880 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 28 02:05:00.204896 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 28 02:05:00.227585 kernel: ACPI: bus type USB registered
Jan 28 02:05:00.230619 kernel: AVX version of gcm_enc/dec engaged.
Jan 28 02:05:00.236585 kernel: AES CTR mode by8 optimization enabled
Jan 28 02:05:00.239578 kernel: usbcore: registered new interface driver usbfs
Jan 28 02:05:00.241318 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 28 02:05:00.241505 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 28 02:05:00.244611 kernel: libata version 3.00 loaded.
Jan 28 02:05:00.244381 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:05:00.245335 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 28 02:05:00.245500 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:05:00.255044 kernel: usbcore: registered new interface driver hub
Jan 28 02:05:00.250058 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:05:00.259497 kernel: ahci 0000:00:1f.2: version 3.0
Jan 28 02:05:00.259804 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 28 02:05:00.259825 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 28 02:05:00.262750 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 28 02:05:00.262026 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 28 02:05:00.272357 kernel: usbcore: registered new device driver usb
Jan 28 02:05:00.272385 kernel: scsi host0: ahci
Jan 28 02:05:00.275006 kernel: scsi host1: ahci
Jan 28 02:05:00.275236 kernel: scsi host2: ahci
Jan 28 02:05:00.278186 kernel: scsi host3: ahci
Jan 28 02:05:00.278409 kernel: scsi host4: ahci
Jan 28 02:05:00.303021 kernel: scsi host5: ahci
Jan 28 02:05:00.303329 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Jan 28 02:05:00.303351 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Jan 28 02:05:00.303378 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Jan 28 02:05:00.303396 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Jan 28 02:05:00.303412 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Jan 28 02:05:00.303429 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Jan 28 02:05:00.309653 kernel: BTRFS: device fsid 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (467)
Jan 28 02:05:00.341196 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 28 02:05:00.399091 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Jan 28 02:05:00.405172 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 28 02:05:00.405978 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 28 02:05:00.409383 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 28 02:05:00.421447 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 28 02:05:00.428004 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 28 02:05:00.436750 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 28 02:05:00.441774 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 28 02:05:00.444797 disk-uuid[557]: Primary Header is updated.
Jan 28 02:05:00.444797 disk-uuid[557]: Secondary Entries is updated.
Jan 28 02:05:00.444797 disk-uuid[557]: Secondary Header is updated.
Jan 28 02:05:00.449609 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:05:00.460586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:05:00.487711 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 28 02:05:00.617677 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.624615 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.642419 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.642457 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.649145 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.649194 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 28 02:05:00.660221 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 02:05:00.660670 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jan 28 02:05:00.664602 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 28 02:05:00.667007 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jan 28 02:05:00.667255 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jan 28 02:05:00.668790 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jan 28 02:05:00.671229 kernel: hub 1-0:1.0: USB hub found Jan 28 02:05:00.671514 kernel: hub 1-0:1.0: 4 ports detected Jan 28 02:05:00.676060 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 28 02:05:00.676333 kernel: hub 2-0:1.0: USB hub found Jan 28 02:05:00.676603 kernel: hub 2-0:1.0: 4 ports detected Jan 28 02:05:00.915647 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 28 02:05:01.058991 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 28 02:05:01.064162 kernel: usbcore: registered new interface driver usbhid Jan 28 02:05:01.064288 kernel: usbhid: USB HID core driver Jan 28 02:05:01.072298 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jan 28 02:05:01.072342 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jan 28 02:05:01.460640 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 28 02:05:01.462370 disk-uuid[558]: The operation has completed successfully. Jan 28 02:05:01.517478 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 28 02:05:01.517647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 28 02:05:01.534768 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 28 02:05:01.546827 sh[584]: Success Jan 28 02:05:01.563604 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jan 28 02:05:01.623598 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 28 02:05:01.636769 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 28 02:05:01.638928 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 28 02:05:01.661772 kernel: BTRFS info (device dm-0): first mount of filesystem 2a6822f0-63ba-4278-91a8-3fe9ed12ab22 Jan 28 02:05:01.661821 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:05:01.663795 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 28 02:05:01.665869 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 28 02:05:01.668518 kernel: BTRFS info (device dm-0): using free space tree Jan 28 02:05:01.678618 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 28 02:05:01.680162 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 28 02:05:01.686761 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 28 02:05:01.691018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 28 02:05:01.707810 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:05:01.707850 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:05:01.707869 kernel: BTRFS info (device vda6): using free space tree Jan 28 02:05:01.715603 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 02:05:01.727533 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 28 02:05:01.730130 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:05:01.738937 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 28 02:05:01.746826 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 28 02:05:01.833054 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 02:05:01.840822 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 02:05:01.882647 systemd-networkd[767]: lo: Link UP Jan 28 02:05:01.883646 systemd-networkd[767]: lo: Gained carrier Jan 28 02:05:01.886685 systemd-networkd[767]: Enumeration completed Jan 28 02:05:01.887890 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:05:01.887895 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 28 02:05:01.889954 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 02:05:01.892392 systemd[1]: Reached target network.target - Network. Jan 28 02:05:01.897611 ignition[682]: Ignition 2.19.0 Jan 28 02:05:01.894537 systemd-networkd[767]: eth0: Link UP Jan 28 02:05:01.897628 ignition[682]: Stage: fetch-offline Jan 28 02:05:01.894543 systemd-networkd[767]: eth0: Gained carrier Jan 28 02:05:01.897711 ignition[682]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:01.894554 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:05:01.897752 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:01.901018 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 28 02:05:01.897928 ignition[682]: parsed url from cmdline: "" Jan 28 02:05:01.897935 ignition[682]: no config URL provided Jan 28 02:05:01.897943 ignition[682]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 02:05:01.897958 ignition[682]: no config at "/usr/lib/ignition/user.ign" Jan 28 02:05:01.897966 ignition[682]: failed to fetch config: resource requires networking Jan 28 02:05:01.908781 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 28 02:05:01.898209 ignition[682]: Ignition finished successfully Jan 28 02:05:01.910656 systemd-networkd[767]: eth0: DHCPv4 address 10.230.50.62/30, gateway 10.230.50.61 acquired from 10.230.50.61 Jan 28 02:05:01.930944 ignition[775]: Ignition 2.19.0 Jan 28 02:05:01.930960 ignition[775]: Stage: fetch Jan 28 02:05:01.931183 ignition[775]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:01.931202 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:01.931352 ignition[775]: parsed url from cmdline: "" Jan 28 02:05:01.931358 ignition[775]: no config URL provided Jan 28 02:05:01.931367 ignition[775]: reading system config file "/usr/lib/ignition/user.ign" Jan 28 02:05:01.931382 ignition[775]: no config at "/usr/lib/ignition/user.ign" Jan 28 02:05:01.931509 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 28 02:05:01.931595 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 28 02:05:01.933635 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 28 02:05:01.971595 ignition[775]: GET result: OK Jan 28 02:05:01.972095 ignition[775]: parsing config with SHA512: c8ecbfcfbed99e67a3417006b0c78c55f0817910e693c51e28eaf1b2d4d6809ffdb1e1c8443b49f910a0c0ff9bec2a698b2637f683d9229685901520f73929fc Jan 28 02:05:01.978166 unknown[775]: fetched base config from "system" Jan 28 02:05:01.978187 unknown[775]: fetched base config from "system" Jan 28 02:05:01.978827 ignition[775]: fetch: fetch complete Jan 28 02:05:01.978196 unknown[775]: fetched user config from "openstack" Jan 28 02:05:01.978835 ignition[775]: fetch: fetch passed Jan 28 02:05:01.980552 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 28 02:05:01.978895 ignition[775]: Ignition finished successfully Jan 28 02:05:01.988837 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 28 02:05:02.015410 ignition[782]: Ignition 2.19.0 Jan 28 02:05:02.015429 ignition[782]: Stage: kargs Jan 28 02:05:02.015732 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:02.015755 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:02.019613 ignition[782]: kargs: kargs passed Jan 28 02:05:02.019707 ignition[782]: Ignition finished successfully Jan 28 02:05:02.021265 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 28 02:05:02.028780 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 28 02:05:02.051149 ignition[788]: Ignition 2.19.0 Jan 28 02:05:02.051168 ignition[788]: Stage: disks Jan 28 02:05:02.051423 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:02.051443 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:02.056421 ignition[788]: disks: disks passed Jan 28 02:05:02.057217 ignition[788]: Ignition finished successfully Jan 28 02:05:02.059203 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Jan 28 02:05:02.060940 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 28 02:05:02.061733 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 02:05:02.063279 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 02:05:02.064895 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 02:05:02.066208 systemd[1]: Reached target basic.target - Basic System. Jan 28 02:05:02.075865 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 28 02:05:02.094272 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 28 02:05:02.097839 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 28 02:05:02.107709 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 28 02:05:02.220586 kernel: EXT4-fs (vda9): mounted filesystem 9c67117c-3c4f-4d47-a63c-8955eb7dbc8a r/w with ordered data mode. Quota mode: none. Jan 28 02:05:02.222131 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 28 02:05:02.223685 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 28 02:05:02.230689 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:05:02.239706 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 28 02:05:02.242736 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 28 02:05:02.245206 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 28 02:05:02.246257 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 28 02:05:02.246344 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 02:05:02.255522 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 28 02:05:02.259073 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804) Jan 28 02:05:02.270124 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:05:02.268836 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 28 02:05:02.275141 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:05:02.275173 kernel: BTRFS info (device vda6): using free space tree Jan 28 02:05:02.286326 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 02:05:02.290696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 28 02:05:02.333348 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory Jan 28 02:05:02.341511 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory Jan 28 02:05:02.349106 initrd-setup-root[845]: cut: /sysroot/etc/shadow: No such file or directory Jan 28 02:05:02.355981 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Jan 28 02:05:02.461200 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 28 02:05:02.467703 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 28 02:05:02.470766 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 28 02:05:02.482618 kernel: BTRFS info (device vda6): last unmount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:05:02.507309 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 28 02:05:02.510206 ignition[920]: INFO : Ignition 2.19.0 Jan 28 02:05:02.510206 ignition[920]: INFO : Stage: mount Jan 28 02:05:02.511816 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:02.511816 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:02.513573 ignition[920]: INFO : mount: mount passed Jan 28 02:05:02.513573 ignition[920]: INFO : Ignition finished successfully Jan 28 02:05:02.513924 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 28 02:05:02.660207 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 28 02:05:03.527892 systemd-networkd[767]: eth0: Gained IPv6LL Jan 28 02:05:04.013234 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c8f:24:19ff:fee6:323e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c8f:24:19ff:fee6:323e/64 assigned by NDisc. Jan 28 02:05:04.013252 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 02:05:09.416075 coreos-metadata[806]: Jan 28 02:05:09.415 WARN failed to locate config-drive, using the metadata service API instead Jan 28 02:05:09.441469 coreos-metadata[806]: Jan 28 02:05:09.441 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 02:05:09.461806 coreos-metadata[806]: Jan 28 02:05:09.461 INFO Fetch successful Jan 28 02:05:09.462813 coreos-metadata[806]: Jan 28 02:05:09.462 INFO wrote hostname srv-rjxd2.gb1.brightbox.com to /sysroot/etc/hostname Jan 28 02:05:09.464971 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 28 02:05:09.465188 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 28 02:05:09.477820 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 28 02:05:09.508062 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 28 02:05:09.520589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Jan 28 02:05:09.524672 kernel: BTRFS info (device vda6): first mount of filesystem 5195d4b2-9d51-430f-afba-abd4fdaa4f68 Jan 28 02:05:09.524710 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 28 02:05:09.525650 kernel: BTRFS info (device vda6): using free space tree Jan 28 02:05:09.531588 kernel: BTRFS info (device vda6): auto enabling async discard Jan 28 02:05:09.534047 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 28 02:05:09.564222 ignition[955]: INFO : Ignition 2.19.0 Jan 28 02:05:09.564222 ignition[955]: INFO : Stage: files Jan 28 02:05:09.566360 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:09.566360 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:09.566360 ignition[955]: DEBUG : files: compiled without relabeling support, skipping Jan 28 02:05:09.569579 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 28 02:05:09.569579 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 28 02:05:09.572336 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 28 02:05:09.573600 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 28 02:05:09.574919 unknown[955]: wrote ssh authorized keys file for user: core Jan 28 02:05:09.576009 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 28 02:05:09.577701 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 02:05:09.578818 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 28 02:05:09.578818 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 02:05:09.578818 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 28 02:05:09.779348 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 28 02:05:10.182699 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 28 02:05:10.182699 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" 
Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:05:10.185693 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 28 02:05:10.529596 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 28 02:05:11.920644 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 28 02:05:11.920644 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 28 02:05:11.925590 ignition[955]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 28 02:05:11.939137 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:05:11.939137 ignition[955]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 28 02:05:11.939137 ignition[955]: INFO : files: files passed Jan 28 02:05:11.939137 ignition[955]: INFO : Ignition finished successfully Jan 28 02:05:11.931098 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 28 02:05:11.943740 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 28 02:05:11.947216 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 28 02:05:11.950889 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 28 02:05:11.951066 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 28 02:05:11.975645 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:05:11.975645 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:05:11.979512 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 28 02:05:11.981050 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 02:05:11.982521 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 28 02:05:11.989758 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 28 02:05:12.023199 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 28 02:05:12.024231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 28 02:05:12.025645 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 28 02:05:12.027250 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 28 02:05:12.029103 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 28 02:05:12.037826 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 28 02:05:12.054927 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 02:05:12.066791 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 28 02:05:12.079023 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 28 02:05:12.080963 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 02:05:12.081961 systemd[1]: Stopped target timers.target - Timer Units. Jan 28 02:05:12.083613 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 28 02:05:12.083777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 28 02:05:12.086717 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 28 02:05:12.087683 systemd[1]: Stopped target basic.target - Basic System. Jan 28 02:05:12.089166 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 28 02:05:12.090630 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 28 02:05:12.092122 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 28 02:05:12.093842 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 28 02:05:12.095380 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 28 02:05:12.097034 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 28 02:05:12.098768 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 28 02:05:12.100294 systemd[1]: Stopped target swap.target - Swaps. Jan 28 02:05:12.101734 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 28 02:05:12.101909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 28 02:05:12.103839 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 28 02:05:12.104841 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 02:05:12.106415 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 28 02:05:12.106581 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 28 02:05:12.108100 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 28 02:05:12.108245 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 28 02:05:12.110484 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 28 02:05:12.110672 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 28 02:05:12.111665 systemd[1]: ignition-files.service: Deactivated successfully. Jan 28 02:05:12.111809 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 28 02:05:12.118843 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 28 02:05:12.120481 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 28 02:05:12.121541 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 02:05:12.130871 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 28 02:05:12.133625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 28 02:05:12.133798 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 02:05:12.137642 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 28 02:05:12.138711 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 28 02:05:12.144293 ignition[1007]: INFO : Ignition 2.19.0 Jan 28 02:05:12.144293 ignition[1007]: INFO : Stage: umount Jan 28 02:05:12.147631 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 28 02:05:12.147631 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 28 02:05:12.147631 ignition[1007]: INFO : umount: umount passed Jan 28 02:05:12.147631 ignition[1007]: INFO : Ignition finished successfully Jan 28 02:05:12.147330 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 28 02:05:12.148645 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 28 02:05:12.161869 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 28 02:05:12.162028 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 28 02:05:12.163363 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 28 02:05:12.163529 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 28 02:05:12.165107 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 28 02:05:12.165188 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 28 02:05:12.168812 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 28 02:05:12.168888 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 28 02:05:12.169616 systemd[1]: Stopped target network.target - Network. Jan 28 02:05:12.170185 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 28 02:05:12.170261 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 28 02:05:12.171780 systemd[1]: Stopped target paths.target - Path Units. Jan 28 02:05:12.173624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 28 02:05:12.179610 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 02:05:12.180938 systemd[1]: Stopped target slices.target - Slice Units. Jan 28 02:05:12.183452 systemd[1]: Stopped target sockets.target - Socket Units. Jan 28 02:05:12.185606 systemd[1]: iscsid.socket: Deactivated successfully. Jan 28 02:05:12.185692 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 28 02:05:12.186416 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 28 02:05:12.187212 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 28 02:05:12.188746 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 28 02:05:12.188841 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 28 02:05:12.190327 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 28 02:05:12.190396 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 28 02:05:12.192010 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 28 02:05:12.195455 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 28 02:05:12.198689 systemd-networkd[767]: eth0: DHCPv6 lease lost Jan 28 02:05:12.200031 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 28 02:05:12.202618 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 28 02:05:12.202791 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 28 02:05:12.205279 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 28 02:05:12.205461 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 28 02:05:12.210125 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 28 02:05:12.210236 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 28 02:05:12.225723 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 28 02:05:12.226868 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 28 02:05:12.226948 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 28 02:05:12.227750 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 28 02:05:12.227812 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:05:12.228539 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 28 02:05:12.228634 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 28 02:05:12.230084 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 28 02:05:12.230162 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:05:12.231996 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 02:05:12.243028 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 28 02:05:12.243295 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 02:05:12.246953 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 28 02:05:12.247035 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 28 02:05:12.248750 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 28 02:05:12.248804 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 02:05:12.250845 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 28 02:05:12.250931 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 28 02:05:12.252949 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 28 02:05:12.253020 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 28 02:05:12.254371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 28 02:05:12.254454 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 28 02:05:12.267758 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 28 02:05:12.269928 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 28 02:05:12.270000 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 02:05:12.271664 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 28 02:05:12.271731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:05:12.275429 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 28 02:05:12.275622 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 28 02:05:12.279723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 28 02:05:12.279842 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 28 02:05:12.329753 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 28 02:05:12.329915 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 28 02:05:12.331830 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 28 02:05:12.332762 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 28 02:05:12.332841 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 28 02:05:12.345770 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 28 02:05:12.355497 systemd[1]: Switching root. Jan 28 02:05:12.381932 systemd-journald[202]: Journal stopped Jan 28 02:05:13.807947 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Jan 28 02:05:13.808067 kernel: SELinux: policy capability network_peer_controls=1 Jan 28 02:05:13.808092 kernel: SELinux: policy capability open_perms=1 Jan 28 02:05:13.808117 kernel: SELinux: policy capability extended_socket_class=1 Jan 28 02:05:13.808151 kernel: SELinux: policy capability always_check_network=0 Jan 28 02:05:13.808171 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 28 02:05:13.808188 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 28 02:05:13.808229 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 28 02:05:13.808248 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 28 02:05:13.808271 kernel: audit: type=1403 audit(1769565912.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 28 02:05:13.808291 systemd[1]: Successfully loaded SELinux policy in 52.505ms. Jan 28 02:05:13.808336 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.682ms. Jan 28 02:05:13.808356 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 28 02:05:13.808387 systemd[1]: Detected virtualization kvm. Jan 28 02:05:13.808437 systemd[1]: Detected architecture x86-64. Jan 28 02:05:13.808465 systemd[1]: Detected first boot. Jan 28 02:05:13.808484 systemd[1]: Hostname set to <srv-rjxd2.gb1.brightbox.com>. Jan 28 02:05:13.808503 systemd[1]: Initializing machine ID from VM UUID. Jan 28 02:05:13.808528 zram_generator::config[1070]: No configuration found. Jan 28 02:05:13.808549 systemd[1]: Populated /etc with preset unit settings. Jan 28 02:05:13.811242 systemd[1]: Queued start job for default target multi-user.target. 
Jan 28 02:05:13.811304 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 28 02:05:13.811326 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 28 02:05:13.811344 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 28 02:05:13.811383 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 28 02:05:13.811402 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 28 02:05:13.811450 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 28 02:05:13.811478 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 28 02:05:13.811499 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 28 02:05:13.811530 systemd[1]: Created slice user.slice - User and Session Slice. Jan 28 02:05:13.813283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 28 02:05:13.813319 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 28 02:05:13.813339 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 28 02:05:13.813375 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 28 02:05:13.813395 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 28 02:05:13.813414 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 28 02:05:13.813453 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 28 02:05:13.813474 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 28 02:05:13.813507 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 28 02:05:13.813528 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 28 02:05:13.813549 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 28 02:05:13.813634 systemd[1]: Reached target slices.target - Slice Units. Jan 28 02:05:13.813658 systemd[1]: Reached target swap.target - Swaps. Jan 28 02:05:13.813678 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 28 02:05:13.813697 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 28 02:05:13.813731 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 28 02:05:13.813764 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 28 02:05:13.813782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 28 02:05:13.813825 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 28 02:05:13.813862 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 28 02:05:13.813883 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 28 02:05:13.813914 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 28 02:05:13.813935 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 28 02:05:13.813955 systemd[1]: Mounting media.mount - External Media Directory... Jan 28 02:05:13.813973 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 28 02:05:13.813993 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 28 02:05:13.814012 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 28 02:05:13.814031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 28 02:05:13.814051 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 28 02:05:13.814082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:05:13.814103 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 28 02:05:13.814123 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 28 02:05:13.814142 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:05:13.814162 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 02:05:13.814181 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:05:13.814200 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 28 02:05:13.814219 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:05:13.814239 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 28 02:05:13.814270 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 28 02:05:13.814291 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 28 02:05:13.814310 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 28 02:05:13.814329 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 28 02:05:13.814355 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 28 02:05:13.814374 kernel: loop: module loaded Jan 28 02:05:13.814392 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 28 02:05:13.814411 kernel: fuse: init (API version 7.39) Jan 28 02:05:13.814453 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 28 02:05:13.814487 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:05:13.814509 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 28 02:05:13.814528 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 28 02:05:13.814548 systemd[1]: Mounted media.mount - External Media Directory. Jan 28 02:05:13.817333 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 28 02:05:13.817360 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 28 02:05:13.817380 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 28 02:05:13.817399 kernel: ACPI: bus type drm_connector registered Jan 28 02:05:13.817474 systemd-journald[1180]: Collecting audit messages is disabled. Jan 28 02:05:13.817523 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 28 02:05:13.817544 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 28 02:05:13.819505 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Jan 28 02:05:13.819533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 28 02:05:13.819554 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:05:13.819602 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:05:13.819638 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 02:05:13.819666 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 02:05:13.819686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 02:05:13.819706 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 02:05:13.819727 systemd-journald[1180]: Journal started Jan 28 02:05:13.819768 systemd-journald[1180]: Runtime Journal (/run/log/journal/1540a2a1e7ff42b4a56b3fdb94ed3164) is 4.7M, max 38.0M, 33.2M free. Jan 28 02:05:13.824259 systemd[1]: Started systemd-journald.service - Journal Service. Jan 28 02:05:13.827530 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 28 02:05:13.827791 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 28 02:05:13.828892 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 02:05:13.829180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 02:05:13.830881 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 28 02:05:13.832163 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 28 02:05:13.833377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 28 02:05:13.846880 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 28 02:05:13.855687 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 28 02:05:13.857976 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 28 02:05:13.859628 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 28 02:05:13.870706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 28 02:05:13.888031 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 28 02:05:13.891415 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 02:05:13.904317 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 28 02:05:13.905671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 02:05:13.915152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 28 02:05:13.920411 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 28 02:05:13.945754 systemd-journald[1180]: Time spent on flushing to /var/log/journal/1540a2a1e7ff42b4a56b3fdb94ed3164 is 62.663ms for 1122 entries. Jan 28 02:05:13.945754 systemd-journald[1180]: System Journal (/var/log/journal/1540a2a1e7ff42b4a56b3fdb94ed3164) is 8.0M, max 584.8M, 576.8M free. Jan 28 02:05:14.020840 systemd-journald[1180]: Received client request to flush runtime journal. Jan 28 02:05:13.936634 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Jan 28 02:05:13.939747 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 28 02:05:13.940927 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 28 02:05:13.951881 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 28 02:05:13.984970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 28 02:05:13.995731 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 28 02:05:14.008578 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 28 02:05:14.017992 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 28 02:05:14.018012 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Jan 28 02:05:14.029027 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 28 02:05:14.035056 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 28 02:05:14.052746 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 28 02:05:14.057304 udevadm[1234]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 28 02:05:14.092794 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 28 02:05:14.101810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 28 02:05:14.127581 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 28 02:05:14.127610 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Jan 28 02:05:14.135033 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 28 02:05:14.601198 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 28 02:05:14.611834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 28 02:05:14.645184 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Jan 28 02:05:14.672851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 28 02:05:14.684417 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 28 02:05:14.713942 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 28 02:05:14.794535 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 28 02:05:14.823877 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 28 02:05:14.859597 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1257) Jan 28 02:05:14.905680 systemd-networkd[1258]: lo: Link UP Jan 28 02:05:14.906285 systemd-networkd[1258]: lo: Gained carrier Jan 28 02:05:14.908775 systemd-networkd[1258]: Enumeration completed Jan 28 02:05:14.909348 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 28 02:05:14.909973 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:05:14.909979 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 28 02:05:14.912062 systemd-networkd[1258]: eth0: Link UP Jan 28 02:05:14.912226 systemd-networkd[1258]: eth0: Gained carrier Jan 28 02:05:14.912320 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:05:14.918796 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 28 02:05:14.928650 systemd-networkd[1258]: eth0: DHCPv4 address 10.230.50.62/30, gateway 10.230.50.61 acquired from 10.230.50.61 Jan 28 02:05:14.973490 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 28 02:05:14.977582 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jan 28 02:05:14.990609 kernel: ACPI: button: Power Button [PWRF] Jan 28 02:05:14.995580 kernel: mousedev: PS/2 mouse device common for all mice Jan 28 02:05:14.997915 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 28 02:05:15.034617 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 28 02:05:15.034935 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jan 28 02:05:15.034969 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 28 02:05:15.037250 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 28 02:05:15.096945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 28 02:05:15.270124 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 28 02:05:15.297201 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 28 02:05:15.303800 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 28 02:05:15.324659 lvm[1293]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 02:05:15.361722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 28 02:05:15.363165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 28 02:05:15.375767 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 28 02:05:15.382871 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 28 02:05:15.415791 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 28 02:05:15.417447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 28 02:05:15.418707 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 28 02:05:15.418886 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 28 02:05:15.419883 systemd[1]: Reached target machines.target - Containers. Jan 28 02:05:15.422292 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 28 02:05:15.428778 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 28 02:05:15.431601 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 28 02:05:15.432622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 28 02:05:15.436749 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 28 02:05:15.443841 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 28 02:05:15.458731 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 28 02:05:15.460896 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 28 02:05:15.481108 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 28 02:05:15.486888 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 28 02:05:15.489835 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 28 02:05:15.501599 kernel: loop0: detected capacity change from 0 to 142488 Jan 28 02:05:15.535684 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 28 02:05:15.556629 kernel: loop1: detected capacity change from 0 to 8 Jan 28 02:05:15.581963 kernel: loop2: detected capacity change from 0 to 140768 Jan 28 02:05:15.624924 kernel: loop3: detected capacity change from 0 to 224512 Jan 28 02:05:15.660594 kernel: loop4: detected capacity change from 0 to 142488 Jan 28 02:05:15.684595 kernel: loop5: detected capacity change from 0 to 8 Jan 28 02:05:15.691589 kernel: loop6: detected capacity change from 0 to 140768 Jan 28 02:05:15.711074 kernel: loop7: detected capacity change from 0 to 224512 Jan 28 02:05:15.730247 (sd-merge)[1318]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jan 28 02:05:15.731157 (sd-merge)[1318]: Merged extensions into '/usr'. Jan 28 02:05:15.737429 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)... Jan 28 02:05:15.737458 systemd[1]: Reloading... Jan 28 02:05:15.831626 zram_generator::config[1349]: No configuration found. Jan 28 02:05:16.035590 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 28 02:05:16.055239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:05:16.144768 systemd[1]: Reloading finished in 406 ms. Jan 28 02:05:16.171360 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 28 02:05:16.172823 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 28 02:05:16.186862 systemd[1]: Starting ensure-sysext.service... Jan 28 02:05:16.189744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 28 02:05:16.206851 systemd[1]: Reloading requested from client PID 1409 ('systemctl') (unit ensure-sysext.service)... Jan 28 02:05:16.206880 systemd[1]: Reloading... Jan 28 02:05:16.237420 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 28 02:05:16.238063 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 28 02:05:16.239483 systemd-tmpfiles[1410]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 28 02:05:16.241962 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. 
Jan 28 02:05:16.242084 systemd-tmpfiles[1410]: ACLs are not supported, ignoring. Jan 28 02:05:16.246973 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:05:16.246990 systemd-tmpfiles[1410]: Skipping /boot Jan 28 02:05:16.269592 zram_generator::config[1438]: No configuration found. Jan 28 02:05:16.276722 systemd-tmpfiles[1410]: Detected autofs mount point /boot during canonicalization of boot. Jan 28 02:05:16.276740 systemd-tmpfiles[1410]: Skipping /boot Jan 28 02:05:16.472294 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:05:16.562677 systemd[1]: Reloading finished in 355 ms. Jan 28 02:05:16.585190 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 28 02:05:16.606850 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 02:05:16.610764 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 28 02:05:16.620806 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 28 02:05:16.630740 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 28 02:05:16.640630 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 28 02:05:16.655221 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:05:16.656118 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:05:16.662824 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:05:16.675666 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:05:16.681873 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:05:16.689831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:05:16.690036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:05:16.693547 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 28 02:05:16.700042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:05:16.700319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:05:16.705921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 02:05:16.706208 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 02:05:16.709514 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 02:05:16.710222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 02:05:16.716739 systemd-networkd[1258]: eth0: Gained IPv6LL Jan 28 02:05:16.724269 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 28 02:05:16.731211 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 28 02:05:16.736753 augenrules[1536]: No rules Jan 28 02:05:16.739375 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jan 28 02:05:16.741683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:05:16.742363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 28 02:05:16.748955 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 28 02:05:16.754386 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 28 02:05:16.767885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 28 02:05:16.779844 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 28 02:05:16.780743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 28 02:05:16.798835 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 28 02:05:16.801715 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 28 02:05:16.806885 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 28 02:05:16.807161 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 28 02:05:16.810835 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 28 02:05:16.811065 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 28 02:05:16.812918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 28 02:05:16.813182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 28 02:05:16.824735 systemd[1]: Finished ensure-sysext.service. Jan 28 02:05:16.827952 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 28 02:05:16.828206 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 28 02:05:16.845654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 28 02:05:16.848262 systemd-resolved[1512]: Positive Trust Anchors: Jan 28 02:05:16.848308 systemd-resolved[1512]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 28 02:05:16.848366 systemd-resolved[1512]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 28 02:05:16.851036 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 28 02:05:16.851175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 28 02:05:16.856088 systemd-resolved[1512]: Using system hostname 'srv-rjxd2.gb1.brightbox.com'. Jan 28 02:05:16.870815 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jan 28 02:05:16.871641 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 28 02:05:16.871948 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 28 02:05:16.873216 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 28 02:05:16.874937 systemd[1]: Reached target network.target - Network. Jan 28 02:05:16.875599 systemd[1]: Reached target network-online.target - Network is Online. Jan 28 02:05:16.876317 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 28 02:05:16.941248 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 28 02:05:16.942397 systemd[1]: Reached target sysinit.target - System Initialization. Jan 28 02:05:16.943311 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 28 02:05:16.944256 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 28 02:05:16.945063 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 28 02:05:16.945910 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 28 02:05:16.945982 systemd[1]: Reached target paths.target - Path Units. Jan 28 02:05:16.946651 systemd[1]: Reached target time-set.target - System Time Set. Jan 28 02:05:16.947532 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 28 02:05:16.948458 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 28 02:05:16.949227 systemd[1]: Reached target timers.target - Timer Units. Jan 28 02:05:16.950659 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 28 02:05:16.953591 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 28 02:05:16.956338 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 28 02:05:16.959720 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 28 02:05:16.960491 systemd[1]: Reached target sockets.target - Socket Units. Jan 28 02:05:16.961157 systemd[1]: Reached target basic.target - Basic System. Jan 28 02:05:16.962133 systemd[1]: System is tainted: cgroupsv1 Jan 28 02:05:16.962199 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 28 02:05:16.962249 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 28 02:05:16.964593 systemd[1]: Starting containerd.service - containerd container runtime... Jan 28 02:05:16.967772 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 28 02:05:16.974844 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 28 02:05:16.978697 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 28 02:05:16.983752 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 28 02:05:16.993664 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 28 02:05:17.001448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 28 02:05:17.008061 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 28 02:05:17.009590 jq[1575]: false Jan 28 02:05:17.018609 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 28 02:05:17.039609 extend-filesystems[1578]: Found loop4 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found loop5 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found loop6 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found loop7 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda1 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda2 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda3 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found usr Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda4 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda6 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda7 Jan 28 02:05:17.043407 extend-filesystems[1578]: Found vda9 Jan 28 02:05:17.043407 extend-filesystems[1578]: Checking size of /dev/vda9 Jan 28 02:05:17.048694 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 28 02:05:17.067115 dbus-daemon[1574]: [system] SELinux support is enabled Jan 28 02:05:17.058126 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 28 02:05:17.080852 dbus-daemon[1574]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1258 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 28 02:05:17.078774 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 28 02:05:17.094597 extend-filesystems[1578]: Resized partition /dev/vda9 Jan 28 02:05:17.095470 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 28 02:05:17.098447 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 28 02:05:17.109757 systemd[1]: Starting update-engine.service - Update Engine... Jan 28 02:05:17.112798 extend-filesystems[1603]: resize2fs 1.47.1 (20-May-2024) Jan 28 02:05:17.124222 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jan 28 02:05:17.123585 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 28 02:05:17.126074 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 28 02:05:17.147391 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 28 02:05:17.151617 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 28 02:05:17.153841 systemd[1]: motdgen.service: Deactivated successfully. Jan 28 02:05:17.154167 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 28 02:05:17.175951 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 28 02:05:17.176354 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 28 02:05:17.858002 systemd-timesyncd[1566]: Contacted time server 178.79.158.157:123 (0.flatcar.pool.ntp.org). Jan 28 02:05:17.858077 systemd-timesyncd[1566]: Initial clock synchronization to Wed 2026-01-28 02:05:17.857761 UTC. Jan 28 02:05:17.858167 systemd-resolved[1512]: Clock change detected. Flushing caches. 
Jan 28 02:05:17.877127 jq[1607]: true Jan 28 02:05:17.886360 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 28 02:05:17.906775 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 28 02:05:17.910107 (ntainerd)[1627]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 28 02:05:17.913285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 28 02:05:17.913344 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 28 02:05:17.926443 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 28 02:05:17.927283 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 28 02:05:17.927321 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 28 02:05:17.951611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1256) Jan 28 02:05:17.951744 tar[1614]: linux-amd64/LICENSE Jan 28 02:05:17.965040 tar[1614]: linux-amd64/helm Jan 28 02:05:17.976496 update_engine[1604]: I20260128 02:05:17.973699 1604 main.cc:92] Flatcar Update Engine starting Jan 28 02:05:17.990261 jq[1624]: true Jan 28 02:05:17.994992 systemd[1]: Started update-engine.service - Update Engine. Jan 28 02:05:17.997712 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 28 02:05:18.002031 update_engine[1604]: I20260128 02:05:17.999801 1604 update_check_scheduler.cc:74] Next update check in 8m29s Jan 28 02:05:18.002787 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 28 02:05:18.059779 systemd-logind[1601]: Watching system buttons on /dev/input/event2 (Power Button) Jan 28 02:05:18.061434 systemd-logind[1601]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 28 02:05:18.061954 systemd-logind[1601]: New seat seat0. Jan 28 02:05:18.085920 systemd[1]: Started systemd-logind.service - User Login Management. Jan 28 02:05:18.223154 bash[1651]: Updated "/home/core/.ssh/authorized_keys" Jan 28 02:05:18.228581 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 28 02:05:18.244577 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 28 02:05:18.255246 systemd[1]: Starting sshkeys.service... Jan 28 02:05:18.269900 locksmithd[1637]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 28 02:05:18.286303 extend-filesystems[1603]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 28 02:05:18.286303 extend-filesystems[1603]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 28 02:05:18.286303 extend-filesystems[1603]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 28 02:05:18.278438 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 28 02:05:18.300907 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Jan 28 02:05:18.279111 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 28 02:05:18.292542 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 28 02:05:18.303913 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 28 02:05:18.429447 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 28 02:05:18.430847 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 28 02:05:18.431217 dbus-daemon[1574]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1632 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 28 02:05:18.447975 systemd[1]: Starting polkit.service - Authorization Manager... Jan 28 02:05:18.495836 polkitd[1669]: Started polkitd version 121 Jan 28 02:05:18.527863 polkitd[1669]: Loading rules from directory /etc/polkit-1/rules.d Jan 28 02:05:18.527965 polkitd[1669]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 28 02:05:18.528815 polkitd[1669]: Finished loading, compiling and executing 2 rules Jan 28 02:05:18.529781 systemd[1]: Started polkit.service - Authorization Manager. Jan 28 02:05:18.529491 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 28 02:05:18.530010 polkitd[1669]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 28 02:05:18.547162 systemd-hostnamed[1632]: Hostname set to (static) Jan 28 02:05:18.556191 containerd[1627]: time="2026-01-28T02:05:18.551999247Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 28 02:05:18.559752 systemd-networkd[1258]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8c8f:24:19ff:fee6:323e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8c8f:24:19ff:fee6:323e/64 assigned by NDisc. Jan 28 02:05:18.559762 systemd-networkd[1258]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jan 28 02:05:18.663650 containerd[1627]: time="2026-01-28T02:05:18.661309369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.673922 containerd[1627]: time="2026-01-28T02:05:18.673872975Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 28 02:05:18.675694 containerd[1627]: time="2026-01-28T02:05:18.675666740Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 28 02:05:18.676574 containerd[1627]: time="2026-01-28T02:05:18.675847958Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 28 02:05:18.676574 containerd[1627]: time="2026-01-28T02:05:18.676261683Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 28 02:05:18.676574 containerd[1627]: time="2026-01-28T02:05:18.676308113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.676574 containerd[1627]: time="2026-01-28T02:05:18.676481964Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 02:05:18.676574 containerd[1627]: time="2026-01-28T02:05:18.676524668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.678368 containerd[1627]: time="2026-01-28T02:05:18.678336019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 02:05:18.680401 containerd[1627]: time="2026-01-28T02:05:18.680374239Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.680547 containerd[1627]: time="2026-01-28T02:05:18.680520282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 02:05:18.680688 containerd[1627]: time="2026-01-28T02:05:18.680652163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.681843 containerd[1627]: time="2026-01-28T02:05:18.680978893Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.681843 containerd[1627]: time="2026-01-28T02:05:18.681363016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 28 02:05:18.684918 containerd[1627]: time="2026-01-28T02:05:18.684447844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 28 02:05:18.684918 containerd[1627]: time="2026-01-28T02:05:18.684478179Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 28 02:05:18.684918 containerd[1627]: time="2026-01-28T02:05:18.684687939Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 28 02:05:18.684918 containerd[1627]: time="2026-01-28T02:05:18.684796844Z" level=info msg="metadata content store policy set" policy=shared Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693351049Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693444183Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693474109Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693498644Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693585844Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 28 02:05:18.693900 containerd[1627]: time="2026-01-28T02:05:18.693820520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jan 28 02:05:18.697087 containerd[1627]: time="2026-01-28T02:05:18.695579513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 28 02:05:18.701653 containerd[1627]: time="2026-01-28T02:05:18.701614965Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 28 02:05:18.701732 containerd[1627]: time="2026-01-28T02:05:18.701661940Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 28 02:05:18.701732 containerd[1627]: time="2026-01-28T02:05:18.701688518Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 28 02:05:18.701732 containerd[1627]: time="2026-01-28T02:05:18.701724530Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701752187Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701790348Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701813324Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701836253Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701870241Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.701906 containerd[1627]: time="2026-01-28T02:05:18.701900026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.701930533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.701974697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.701999328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.702028761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.702048496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.702071494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702134 containerd[1627]: time="2026-01-28T02:05:18.702120319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702140358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702171409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702188559Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702230545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702246581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702262739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702298151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702357461Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702407438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702431 containerd[1627]: time="2026-01-28T02:05:18.702430122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.702871 containerd[1627]: time="2026-01-28T02:05:18.702448396Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 28 02:05:18.702871 containerd[1627]: time="2026-01-28T02:05:18.702524922Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703605128Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703637921Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703659182Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703675074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703699740Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703721352Z" level=info msg="NRI interface is disabled by configuration." Jan 28 02:05:18.705996 containerd[1627]: time="2026-01-28T02:05:18.703738476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 28 02:05:18.706310 containerd[1627]: time="2026-01-28T02:05:18.704182958Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 28 02:05:18.706310 containerd[1627]: time="2026-01-28T02:05:18.704268241Z" level=info msg="Connect containerd service" Jan 28 02:05:18.706310 containerd[1627]: time="2026-01-28T02:05:18.704342885Z" level=info msg="using legacy CRI server" Jan 28 02:05:18.706310 containerd[1627]: time="2026-01-28T02:05:18.704365047Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 28 02:05:18.706310 containerd[1627]: time="2026-01-28T02:05:18.704521163Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 28 02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.713535985Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 28 
02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.718302014Z" level=info msg="Start subscribing containerd event" Jan 28 02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.718469875Z" level=info msg="Start recovering state" Jan 28 02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.718854659Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 28 02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.718946925Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 28 02:05:18.720719 containerd[1627]: time="2026-01-28T02:05:18.720668074Z" level=info msg="Start event monitor" Jan 28 02:05:18.723369 containerd[1627]: time="2026-01-28T02:05:18.721610396Z" level=info msg="Start snapshots syncer" Jan 28 02:05:18.723369 containerd[1627]: time="2026-01-28T02:05:18.721685954Z" level=info msg="Start cni network conf syncer for default" Jan 28 02:05:18.723369 containerd[1627]: time="2026-01-28T02:05:18.721710583Z" level=info msg="Start streaming server" Jan 28 02:05:18.724212 systemd[1]: Started containerd.service - containerd container runtime. Jan 28 02:05:18.725193 containerd[1627]: time="2026-01-28T02:05:18.725158439Z" level=info msg="containerd successfully booted in 0.174381s" Jan 28 02:05:18.997810 sshd_keygen[1615]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 28 02:05:19.048374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 28 02:05:19.064931 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 28 02:05:19.092119 systemd[1]: issuegen.service: Deactivated successfully. Jan 28 02:05:19.092522 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 28 02:05:19.103913 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 28 02:05:19.138953 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 28 02:05:19.154045 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 28 02:05:19.166736 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 28 02:05:19.167943 systemd[1]: Reached target getty.target - Login Prompts. Jan 28 02:05:19.200585 tar[1614]: linux-amd64/README.md Jan 28 02:05:19.231136 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 28 02:05:19.468804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:05:19.482141 (kubelet)[1717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:05:20.077070 kubelet[1717]: E0128 02:05:20.076939 1717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:05:20.079334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:05:20.079669 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:05:24.221739 login[1701]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 28 02:05:24.224289 login[1702]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 02:05:24.243524 systemd-logind[1601]: New session 1 of user core. Jan 28 02:05:24.246298 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jan 28 02:05:24.252015 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 28 02:05:24.280884 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 28 02:05:24.288038 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 28 02:05:24.297292 (systemd)[1735]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 28 02:05:24.428886 systemd[1735]: Queued start job for default target default.target. Jan 28 02:05:24.429941 systemd[1735]: Created slice app.slice - User Application Slice. Jan 28 02:05:24.429985 systemd[1735]: Reached target paths.target - Paths. Jan 28 02:05:24.430008 systemd[1735]: Reached target timers.target - Timers. Jan 28 02:05:24.440692 systemd[1735]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 28 02:05:24.448671 systemd[1735]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 28 02:05:24.448854 systemd[1735]: Reached target sockets.target - Sockets. Jan 28 02:05:24.448975 systemd[1735]: Reached target basic.target - Basic System. Jan 28 02:05:24.449162 systemd[1735]: Reached target default.target - Main User Target. Jan 28 02:05:24.449348 systemd[1735]: Startup finished in 143ms. Jan 28 02:05:24.451710 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 28 02:05:24.465026 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 28 02:05:24.762946 coreos-metadata[1572]: Jan 28 02:05:24.762 WARN failed to locate config-drive, using the metadata service API instead Jan 28 02:05:24.788780 coreos-metadata[1572]: Jan 28 02:05:24.788 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 28 02:05:24.804752 coreos-metadata[1572]: Jan 28 02:05:24.804 INFO Fetch failed with 404: resource not found Jan 28 02:05:24.804827 coreos-metadata[1572]: Jan 28 02:05:24.804 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 28 02:05:24.805452 coreos-metadata[1572]: Jan 28 02:05:24.805 INFO Fetch successful Jan 28 02:05:24.805598 coreos-metadata[1572]: Jan 28 02:05:24.805 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 28 02:05:24.834050 coreos-metadata[1572]: Jan 28 02:05:24.834 INFO Fetch successful Jan 28 02:05:24.834217 coreos-metadata[1572]: Jan 28 02:05:24.834 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 28 02:05:24.925459 coreos-metadata[1572]: Jan 28 02:05:24.925 INFO Fetch successful Jan 28 02:05:24.925642 coreos-metadata[1572]: Jan 28 02:05:24.925 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 28 02:05:24.945930 coreos-metadata[1572]: Jan 28 02:05:24.945 INFO Fetch successful Jan 28 02:05:24.946064 coreos-metadata[1572]: Jan 28 02:05:24.946 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 28 02:05:24.970426 coreos-metadata[1572]: Jan 28 02:05:24.970 INFO Fetch successful Jan 28 02:05:25.000035 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 28 02:05:25.001257 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 28 02:05:25.222873 login[1701]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 28 02:05:25.229591 systemd-logind[1601]: New session 2 of user core. Jan 28 02:05:25.237286 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jan 28 02:05:25.409602 coreos-metadata[1665]: Jan 28 02:05:25.409 WARN failed to locate config-drive, using the metadata service API instead Jan 28 02:05:25.433031 coreos-metadata[1665]: Jan 28 02:05:25.432 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 28 02:05:25.472050 coreos-metadata[1665]: Jan 28 02:05:25.472 INFO Fetch successful Jan 28 02:05:25.472267 coreos-metadata[1665]: Jan 28 02:05:25.472 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 28 02:05:25.507378 coreos-metadata[1665]: Jan 28 02:05:25.507 INFO Fetch successful Jan 28 02:05:25.516032 unknown[1665]: wrote ssh authorized keys file for user: core Jan 28 02:05:25.534691 update-ssh-keys[1780]: Updated "/home/core/.ssh/authorized_keys" Jan 28 02:05:25.535513 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 28 02:05:25.539507 systemd[1]: Finished sshkeys.service. Jan 28 02:05:25.547014 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 28 02:05:25.547519 systemd[1]: Startup finished in 15.331s (kernel) + 12.255s (userspace) = 27.586s. Jan 28 02:05:26.460166 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 28 02:05:26.474946 systemd[1]: Started sshd@0-10.230.50.62:22-68.220.241.50:41844.service - OpenSSH per-connection server daemon (68.220.241.50:41844). Jan 28 02:05:27.043246 sshd[1786]: Accepted publickey for core from 68.220.241.50 port 41844 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:27.045611 sshd[1786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:27.052917 systemd-logind[1601]: New session 3 of user core. Jan 28 02:05:27.071383 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 28 02:05:27.535000 systemd[1]: Started sshd@1-10.230.50.62:22-68.220.241.50:41852.service - OpenSSH per-connection server daemon (68.220.241.50:41852). Jan 28 02:05:28.108596 sshd[1791]: Accepted publickey for core from 68.220.241.50 port 41852 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:28.111280 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:28.120121 systemd-logind[1601]: New session 4 of user core. Jan 28 02:05:28.128253 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 28 02:05:28.511964 sshd[1791]: pam_unix(sshd:session): session closed for user core Jan 28 02:05:28.516961 systemd[1]: sshd@1-10.230.50.62:22-68.220.241.50:41852.service: Deactivated successfully. Jan 28 02:05:28.521540 systemd-logind[1601]: Session 4 logged out. Waiting for processes to exit. Jan 28 02:05:28.523296 systemd[1]: session-4.scope: Deactivated successfully. Jan 28 02:05:28.524708 systemd-logind[1601]: Removed session 4. Jan 28 02:05:28.614081 systemd[1]: Started sshd@2-10.230.50.62:22-68.220.241.50:41864.service - OpenSSH per-connection server daemon (68.220.241.50:41864). Jan 28 02:05:29.177319 sshd[1799]: Accepted publickey for core from 68.220.241.50 port 41864 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:29.179572 sshd[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:29.186700 systemd-logind[1601]: New session 5 of user core. Jan 28 02:05:29.198301 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 28 02:05:29.575066 sshd[1799]: pam_unix(sshd:session): session closed for user core Jan 28 02:05:29.579882 systemd[1]: sshd@2-10.230.50.62:22-68.220.241.50:41864.service: Deactivated successfully. Jan 28 02:05:29.583883 systemd-logind[1601]: Session 5 logged out. Waiting for processes to exit. Jan 28 02:05:29.585850 systemd[1]: session-5.scope: Deactivated successfully. Jan 28 02:05:29.589001 systemd-logind[1601]: Removed session 5. Jan 28 02:05:29.674979 systemd[1]: Started sshd@3-10.230.50.62:22-68.220.241.50:41880.service - OpenSSH per-connection server daemon (68.220.241.50:41880). Jan 28 02:05:30.135587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 28 02:05:30.142786 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:05:30.237845 sshd[1807]: Accepted publickey for core from 68.220.241.50 port 41880 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:30.240396 sshd[1807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:30.249447 systemd-logind[1601]: New session 6 of user core. Jan 28 02:05:30.260993 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 28 02:05:30.352785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:05:30.358063 (kubelet)[1823]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:05:30.452581 kubelet[1823]: E0128 02:05:30.452347 1823 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:05:30.459933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:05:30.460269 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:05:30.647963 sshd[1807]: pam_unix(sshd:session): session closed for user core Jan 28 02:05:30.652711 systemd[1]: sshd@3-10.230.50.62:22-68.220.241.50:41880.service: Deactivated successfully. Jan 28 02:05:30.657824 systemd-logind[1601]: Session 6 logged out. Waiting for processes to exit. Jan 28 02:05:30.659189 systemd[1]: session-6.scope: Deactivated successfully. Jan 28 02:05:30.660605 systemd-logind[1601]: Removed session 6. Jan 28 02:05:30.744941 systemd[1]: Started sshd@4-10.230.50.62:22-68.220.241.50:41896.service - OpenSSH per-connection server daemon (68.220.241.50:41896). Jan 28 02:05:31.348601 sshd[1835]: Accepted publickey for core from 68.220.241.50 port 41896 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:31.351154 sshd[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:31.358675 systemd-logind[1601]: New session 7 of user core. Jan 28 02:05:31.370905 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 28 02:05:31.717529 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 28 02:05:31.718700 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:05:31.732768 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 28 02:05:31.827199 sshd[1835]: pam_unix(sshd:session): session closed for user core Jan 28 02:05:31.831979 systemd[1]: sshd@4-10.230.50.62:22-68.220.241.50:41896.service: Deactivated successfully. Jan 28 02:05:31.835656 systemd-logind[1601]: Session 7 logged out. Waiting for processes to exit. Jan 28 02:05:31.836471 systemd[1]: session-7.scope: Deactivated successfully. Jan 28 02:05:31.838003 systemd-logind[1601]: Removed session 7. Jan 28 02:05:31.922918 systemd[1]: Started sshd@5-10.230.50.62:22-68.220.241.50:41906.service - OpenSSH per-connection server daemon (68.220.241.50:41906). Jan 28 02:05:32.542083 sshd[1844]: Accepted publickey for core from 68.220.241.50 port 41906 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:32.544191 sshd[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:32.551001 systemd-logind[1601]: New session 8 of user core. Jan 28 02:05:32.561012 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 28 02:05:32.898560 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 28 02:05:32.899643 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:05:32.905436 sudo[1849]: pam_unix(sudo:session): session closed for user root Jan 28 02:05:32.912915 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 28 02:05:32.913371 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:05:32.940940 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 28 02:05:32.943013 auditctl[1852]: No rules Jan 28 02:05:32.943803 systemd[1]: audit-rules.service: Deactivated successfully. Jan 28 02:05:32.944170 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 28 02:05:32.949008 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 28 02:05:32.988195 augenrules[1871]: No rules Jan 28 02:05:32.990047 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 28 02:05:32.991376 sudo[1848]: pam_unix(sudo:session): session closed for user root Jan 28 02:05:33.092984 sshd[1844]: pam_unix(sshd:session): session closed for user core Jan 28 02:05:33.096331 systemd[1]: sshd@5-10.230.50.62:22-68.220.241.50:41906.service: Deactivated successfully. Jan 28 02:05:33.100789 systemd-logind[1601]: Session 8 logged out. Waiting for processes to exit. Jan 28 02:05:33.101937 systemd[1]: session-8.scope: Deactivated successfully. Jan 28 02:05:33.103042 systemd-logind[1601]: Removed session 8. Jan 28 02:05:33.216932 systemd[1]: Started sshd@6-10.230.50.62:22-68.220.241.50:33426.service - OpenSSH per-connection server daemon (68.220.241.50:33426). Jan 28 02:05:33.841737 sshd[1880]: Accepted publickey for core from 68.220.241.50 port 33426 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:05:33.843901 sshd[1880]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:05:33.850372 systemd-logind[1601]: New session 9 of user core. 
Jan 28 02:05:33.859118 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 28 02:05:34.177371 sudo[1884]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 28 02:05:34.177977 sudo[1884]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 28 02:05:34.641870 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 28 02:05:34.642227 (dockerd)[1899]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 28 02:05:35.080018 dockerd[1899]: time="2026-01-28T02:05:35.078936787Z" level=info msg="Starting up" Jan 28 02:05:35.356765 dockerd[1899]: time="2026-01-28T02:05:35.356652824Z" level=info msg="Loading containers: start." Jan 28 02:05:35.482615 kernel: Initializing XFRM netlink socket Jan 28 02:05:35.587046 systemd-networkd[1258]: docker0: Link UP Jan 28 02:05:35.603414 dockerd[1899]: time="2026-01-28T02:05:35.603365822Z" level=info msg="Loading containers: done." Jan 28 02:05:35.622850 dockerd[1899]: time="2026-01-28T02:05:35.621408509Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 28 02:05:35.622850 dockerd[1899]: time="2026-01-28T02:05:35.621541467Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 28 02:05:35.622850 dockerd[1899]: time="2026-01-28T02:05:35.621743730Z" level=info msg="Daemon has completed initialization" Jan 28 02:05:35.657930 dockerd[1899]: time="2026-01-28T02:05:35.657794840Z" level=info msg="API listen on /run/docker.sock" Jan 28 02:05:35.658091 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 28 02:05:36.834810 containerd[1627]: time="2026-01-28T02:05:36.834669422Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 28 02:05:37.806712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3774168873.mount: Deactivated successfully. 
Jan 28 02:05:40.137615 containerd[1627]: time="2026-01-28T02:05:40.136792157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:40.141579 containerd[1627]: time="2026-01-28T02:05:40.140507914Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 28 02:05:40.141579 containerd[1627]: time="2026-01-28T02:05:40.140996775Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:40.146629 containerd[1627]: time="2026-01-28T02:05:40.146591206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:40.148082 containerd[1627]: time="2026-01-28T02:05:40.148045588Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.313264424s" Jan 28 02:05:40.148276 containerd[1627]: time="2026-01-28T02:05:40.148245607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 28 02:05:40.150531 containerd[1627]: time="2026-01-28T02:05:40.150479814Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 28 02:05:40.599228 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 28 02:05:40.611608 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:05:40.803842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:05:40.809801 (kubelet)[2112]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:05:40.867712 kubelet[2112]: E0128 02:05:40.866976 2112 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:05:40.871360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:05:40.872848 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 28 02:05:44.881688 containerd[1627]: time="2026-01-28T02:05:44.880523078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:44.883366 containerd[1627]: time="2026-01-28T02:05:44.882322887Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 28 02:05:44.883759 containerd[1627]: time="2026-01-28T02:05:44.883726167Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:44.889597 containerd[1627]: time="2026-01-28T02:05:44.887792843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:44.890302 containerd[1627]: time="2026-01-28T02:05:44.890237675Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 4.739702825s" Jan 28 02:05:44.890477 containerd[1627]: time="2026-01-28T02:05:44.890444871Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 28 02:05:44.891941 containerd[1627]: time="2026-01-28T02:05:44.891864536Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 28 02:05:48.107610 containerd[1627]: time="2026-01-28T02:05:48.107432860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:48.109519 containerd[1627]: time="2026-01-28T02:05:48.109273367Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 28 02:05:48.110611 containerd[1627]: time="2026-01-28T02:05:48.110444689Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:48.115602 containerd[1627]: time="2026-01-28T02:05:48.114707946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:48.117832 containerd[1627]: time="2026-01-28T02:05:48.116458478Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 3.224551051s" Jan 28 02:05:48.117832 containerd[1627]: time="2026-01-28T02:05:48.116585972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 28 02:05:48.118416 
containerd[1627]: time="2026-01-28T02:05:48.118375326Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 28 02:05:48.605434 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 28 02:05:50.183262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930771303.mount: Deactivated successfully. Jan 28 02:05:51.099288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 28 02:05:51.107798 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:05:51.321787 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:05:51.335061 (kubelet)[2153]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:05:51.425937 kubelet[2153]: E0128 02:05:51.418930 2153 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:05:51.423784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:05:51.424087 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:05:51.840364 containerd[1627]: time="2026-01-28T02:05:51.840202162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:51.841641 containerd[1627]: time="2026-01-28T02:05:51.840788593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 28 02:05:51.844527 containerd[1627]: time="2026-01-28T02:05:51.842760693Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:51.846252 containerd[1627]: time="2026-01-28T02:05:51.846137960Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:51.847660 containerd[1627]: time="2026-01-28T02:05:51.847600300Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 3.729080832s" Jan 28 02:05:51.847660 containerd[1627]: time="2026-01-28T02:05:51.847655889Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 28 02:05:51.848464 containerd[1627]: time="2026-01-28T02:05:51.848413195Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 28 02:05:52.496544 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632208458.mount: Deactivated successfully. 
Jan 28 02:05:54.021711 containerd[1627]: time="2026-01-28T02:05:54.021494470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.024334 containerd[1627]: time="2026-01-28T02:05:54.024161814Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 28 02:05:54.026675 containerd[1627]: time="2026-01-28T02:05:54.026613681Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.030029 containerd[1627]: time="2026-01-28T02:05:54.029962039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.033121 containerd[1627]: time="2026-01-28T02:05:54.032690133Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.184203203s" Jan 28 02:05:54.033121 containerd[1627]: time="2026-01-28T02:05:54.032760033Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 28 02:05:54.034719 containerd[1627]: time="2026-01-28T02:05:54.034392406Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 28 02:05:54.638154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2739544464.mount: Deactivated successfully. 
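[Editor's note: the pull sequence — kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns above, with pause and etcd following below — is the standard control-plane image set kubeadm derives from its cluster configuration. A hypothetical sketch of the relevant fields, with values inferred from the image tags in this log:

    # Hypothetical ClusterConfiguration fragment; values inferred from the log, not taken from this host.
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: v1.32.11      # yields the kube-* v1.32.11 tags pulled above
    imageRepository: registry.k8s.io

The "bytes read" figures in the stop-pulling entries track the transfer, while the "size" in each Pulled entry is the image's recorded size.]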
Jan 28 02:05:54.645964 containerd[1627]: time="2026-01-28T02:05:54.645884661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.647361 containerd[1627]: time="2026-01-28T02:05:54.647318162Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 28 02:05:54.648268 containerd[1627]: time="2026-01-28T02:05:54.647803832Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.651254 containerd[1627]: time="2026-01-28T02:05:54.651185452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:05:54.653060 containerd[1627]: time="2026-01-28T02:05:54.653008520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 618.562921ms" Jan 28 02:05:54.653206 containerd[1627]: time="2026-01-28T02:05:54.653075636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 28 02:05:54.655309 containerd[1627]: time="2026-01-28T02:05:54.655089577Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 28 02:05:55.387215 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581516749.mount: Deactivated successfully. Jan 28 02:06:01.599486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 28 02:06:01.622026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:06:01.845952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:01.847653 (kubelet)[2278]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 28 02:06:01.937678 kubelet[2278]: E0128 02:06:01.937196 2278 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 28 02:06:01.941921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 28 02:06:01.942543 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 28 02:06:02.964310 update_engine[1604]: I20260128 02:06:02.963992 1604 update_attempter.cc:509] Updating boot flags... 
Jan 28 02:06:03.064025 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2297) Jan 28 02:06:03.199578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2295) Jan 28 02:06:04.841496 containerd[1627]: time="2026-01-28T02:06:04.841394911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:04.842845 containerd[1627]: time="2026-01-28T02:06:04.842775126Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 28 02:06:04.844228 containerd[1627]: time="2026-01-28T02:06:04.844191788Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:04.848756 containerd[1627]: time="2026-01-28T02:06:04.848714987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:04.850406 containerd[1627]: time="2026-01-28T02:06:04.850371889Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 10.195230894s" Jan 28 02:06:04.850510 containerd[1627]: time="2026-01-28T02:06:04.850422001Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 28 02:06:08.683462 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:08.695107 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:06:08.730009 systemd[1]: Reloading requested from client PID 2334 ('systemctl') (unit session-9.scope)... Jan 28 02:06:08.730203 systemd[1]: Reloading... Jan 28 02:06:08.907605 zram_generator::config[2373]: No configuration found. Jan 28 02:06:09.091814 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:06:09.201485 systemd[1]: Reloading finished in 470 ms. Jan 28 02:06:09.265790 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:06:09.266163 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:09.273988 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:06:09.434784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:09.445228 (kubelet)[2453]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:06:09.567375 kubelet[2453]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:06:09.567375 kubelet[2453]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jan 28 02:06:09.567375 kubelet[2453]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:06:09.568064 kubelet[2453]: I0128 02:06:09.567474 2453 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:06:10.202329 kubelet[2453]: I0128 02:06:10.202204 2453 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 02:06:10.202329 kubelet[2453]: I0128 02:06:10.202286 2453 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:06:10.202903 kubelet[2453]: I0128 02:06:10.202857 2453 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 02:06:10.235310 kubelet[2453]: I0128 02:06:10.234665 2453 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:06:10.237517 kubelet[2453]: E0128 02:06:10.237475 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.50.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:10.253390 kubelet[2453]: E0128 02:06:10.253343 2453 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 02:06:10.253390 kubelet[2453]: I0128 02:06:10.253391 2453 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 02:06:10.261132 kubelet[2453]: I0128 02:06:10.261091 2453 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 02:06:10.265524 kubelet[2453]: I0128 02:06:10.265408 2453 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:06:10.265870 kubelet[2453]: I0128 02:06:10.265473 2453 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rjxd2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 02:06:10.267631 kubelet[2453]: I0128 02:06:10.267590 2453 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:06:10.267631 kubelet[2453]: I0128 02:06:10.267616 2453 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 02:06:10.268967 kubelet[2453]: I0128 02:06:10.268910 2453 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:06:10.272618 kubelet[2453]: I0128 02:06:10.272583 2453 kubelet.go:446] "Attempting to sync node with API server" Jan 28 02:06:10.272716 kubelet[2453]: I0128 02:06:10.272646 2453 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:06:10.272716 kubelet[2453]: I0128 02:06:10.272691 2453 kubelet.go:352] "Adding apiserver pod source" Jan 28 02:06:10.274168 kubelet[2453]: I0128 02:06:10.272731 2453 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:06:10.282018 kubelet[2453]: W0128 02:06:10.281663 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.50.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:10.282018 kubelet[2453]: E0128 02:06:10.281776 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.50.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:10.282194 kubelet[2453]: W0128 
02:06:10.282140 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.50.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rjxd2.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:10.282286 kubelet[2453]: E0128 02:06:10.282206 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.50.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rjxd2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:10.285733 kubelet[2453]: I0128 02:06:10.285706 2453 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 02:06:10.289188 kubelet[2453]: I0128 02:06:10.289166 2453 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 02:06:10.290005 kubelet[2453]: W0128 02:06:10.289983 2453 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 28 02:06:10.291199 kubelet[2453]: I0128 02:06:10.291179 2453 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 02:06:10.291359 kubelet[2453]: I0128 02:06:10.291340 2453 server.go:1287] "Started kubelet" Jan 28 02:06:10.294867 kubelet[2453]: I0128 02:06:10.294610 2453 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:06:10.295977 kubelet[2453]: I0128 02:06:10.295943 2453 server.go:479] "Adding debug handlers to kubelet server" Jan 28 02:06:10.299599 kubelet[2453]: I0128 02:06:10.299168 2453 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:06:10.299747 kubelet[2453]: I0128 02:06:10.299726 2453 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:06:10.300715 kubelet[2453]: I0128 02:06:10.300331 2453 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:06:10.308789 kubelet[2453]: I0128 02:06:10.308758 2453 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:06:10.311801 kubelet[2453]: E0128 02:06:10.304231 2453 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.50.62:6443/api/v1/namespaces/default/events\": dial tcp 10.230.50.62:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-rjxd2.gb1.brightbox.com.188ec2dfd9d541f4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-rjxd2.gb1.brightbox.com,UID:srv-rjxd2.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-rjxd2.gb1.brightbox.com,},FirstTimestamp:2026-01-28 02:06:10.291311092 +0000 UTC m=+0.839827819,LastTimestamp:2026-01-28 02:06:10.291311092 +0000 UTC m=+0.839827819,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-rjxd2.gb1.brightbox.com,}" Jan 28 02:06:10.312801 kubelet[2453]: I0128 02:06:10.312776 2453 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 02:06:10.313361 kubelet[2453]: E0128 
02:06:10.313075 2453 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" Jan 28 02:06:10.317602 kubelet[2453]: E0128 02:06:10.317521 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.50.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rjxd2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.50.62:6443: connect: connection refused" interval="200ms" Jan 28 02:06:10.318122 kubelet[2453]: I0128 02:06:10.318087 2453 factory.go:221] Registration of the systemd container factory successfully Jan 28 02:06:10.318236 kubelet[2453]: I0128 02:06:10.318211 2453 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:06:10.318912 kubelet[2453]: I0128 02:06:10.318867 2453 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 02:06:10.318980 kubelet[2453]: I0128 02:06:10.318963 2453 reconciler.go:26] "Reconciler: start to sync state" Jan 28 02:06:10.325056 kubelet[2453]: W0128 02:06:10.325000 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.50.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:10.325145 kubelet[2453]: E0128 02:06:10.325056 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.50.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:10.325970 kubelet[2453]: I0128 02:06:10.325939 2453 factory.go:221] Registration of the containerd container factory successfully Jan 28 02:06:10.337602 kubelet[2453]: I0128 02:06:10.335545 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 02:06:10.340422 kubelet[2453]: I0128 02:06:10.340382 2453 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 02:06:10.340476 kubelet[2453]: I0128 02:06:10.340435 2453 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 02:06:10.340547 kubelet[2453]: I0128 02:06:10.340466 2453 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
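[Editor's note: the burst of "connect: connection refused" errors against 10.230.50.62:6443 is the normal bootstrap chicken-and-egg: the kubelet must itself start the control plane from static pod manifests in /etc/kubernetes/manifests (the "Adding static pod path" entry above) before the API server it keeps dialing can exist. A skeleton of such a manifest, assuming kubeadm's usual layout — paths, flags, and the image tag are illustrative:

    # Skeleton static pod manifest under /etc/kubernetes/manifests; kubeadm writes the real one.
    apiVersion: v1
    kind: Pod
    metadata:
      name: kube-scheduler
      namespace: kube-system
    spec:
      hostNetwork: true
      priorityClassName: system-node-critical
      containers:
      - name: kube-scheduler
        image: registry.k8s.io/kube-scheduler:v1.32.11
        command:
        - kube-scheduler
        - --kubeconfig=/etc/kubernetes/scheduler.conf    # illustrative flag
        volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/scheduler.conf
          readOnly: true
      volumes:
      - name: kubeconfig
        hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: File

The RunPodSandbox / CreateContainer / StartContainer entries further down are the kubelet acting on exactly these manifests.]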
Jan 28 02:06:10.340547 kubelet[2453]: I0128 02:06:10.340501 2453 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 02:06:10.340664 kubelet[2453]: E0128 02:06:10.340609 2453 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:06:10.354299 kubelet[2453]: W0128 02:06:10.354236 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.50.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:10.354447 kubelet[2453]: E0128 02:06:10.354314 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.50.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:10.373106 kubelet[2453]: I0128 02:06:10.373078 2453 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:06:10.373337 kubelet[2453]: I0128 02:06:10.373274 2453 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:06:10.373476 kubelet[2453]: I0128 02:06:10.373413 2453 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:06:10.375598 kubelet[2453]: I0128 02:06:10.375393 2453 policy_none.go:49] "None policy: Start" Jan 28 02:06:10.375598 kubelet[2453]: I0128 02:06:10.375426 2453 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 02:06:10.375598 kubelet[2453]: I0128 02:06:10.375450 2453 state_mem.go:35] "Initializing new in-memory state store" Jan 28 02:06:10.382310 kubelet[2453]: I0128 02:06:10.382231 2453 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 02:06:10.382503 kubelet[2453]: I0128 02:06:10.382469 2453 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:06:10.382685 kubelet[2453]: I0128 02:06:10.382512 2453 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:06:10.384898 kubelet[2453]: I0128 02:06:10.384677 2453 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:06:10.386466 kubelet[2453]: E0128 02:06:10.386402 2453 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 02:06:10.386575 kubelet[2453]: E0128 02:06:10.386470 2453 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-rjxd2.gb1.brightbox.com\" not found" Jan 28 02:06:10.450228 kubelet[2453]: E0128 02:06:10.450171 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.456604 kubelet[2453]: E0128 02:06:10.454710 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.464100 kubelet[2453]: E0128 02:06:10.464074 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.485348 kubelet[2453]: I0128 02:06:10.485310 2453 kubelet_node_status.go:75] "Attempting to register node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.485884 kubelet[2453]: E0128 02:06:10.485828 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.50.62:6443/api/v1/nodes\": dial tcp 10.230.50.62:6443: connect: connection refused" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.518776 kubelet[2453]: E0128 02:06:10.518738 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.50.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rjxd2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.50.62:6443: connect: connection refused" interval="400ms" Jan 28 02:06:10.520574 kubelet[2453]: I0128 02:06:10.520202 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-kubeconfig\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520574 kubelet[2453]: I0128 02:06:10.520258 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520574 kubelet[2453]: I0128 02:06:10.520299 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a256014e0a0f80e7b48a33d6bee237ad-kubeconfig\") pod \"kube-scheduler-srv-rjxd2.gb1.brightbox.com\" (UID: \"a256014e0a0f80e7b48a33d6bee237ad\") " pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520574 kubelet[2453]: I0128 02:06:10.520324 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-ca-certs\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 
02:06:10.520574 kubelet[2453]: I0128 02:06:10.520352 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-flexvolume-dir\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520856 kubelet[2453]: I0128 02:06:10.520405 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-ca-certs\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520856 kubelet[2453]: I0128 02:06:10.520438 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-k8s-certs\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520856 kubelet[2453]: I0128 02:06:10.520462 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-k8s-certs\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.520856 kubelet[2453]: I0128 02:06:10.520505 2453 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.690220 kubelet[2453]: I0128 02:06:10.689849 2453 kubelet_node_status.go:75] "Attempting to register node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.691007 kubelet[2453]: E0128 02:06:10.690964 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.50.62:6443/api/v1/nodes\": dial tcp 10.230.50.62:6443: connect: connection refused" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:10.754230 containerd[1627]: time="2026-01-28T02:06:10.754042356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rjxd2.gb1.brightbox.com,Uid:92edf9d3639b04b154c7a8a894618efb,Namespace:kube-system,Attempt:0,}" Jan 28 02:06:10.759633 containerd[1627]: time="2026-01-28T02:06:10.759600563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rjxd2.gb1.brightbox.com,Uid:413f12ea42e45f3e010878a6e0ac7d14,Namespace:kube-system,Attempt:0,}" Jan 28 02:06:10.765683 containerd[1627]: time="2026-01-28T02:06:10.765642661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rjxd2.gb1.brightbox.com,Uid:a256014e0a0f80e7b48a33d6bee237ad,Namespace:kube-system,Attempt:0,}" Jan 28 02:06:10.920203 kubelet[2453]: E0128 02:06:10.920060 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.230.50.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rjxd2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.50.62:6443: connect: connection refused" interval="800ms" Jan 28 02:06:11.094668 kubelet[2453]: I0128 02:06:11.094489 2453 kubelet_node_status.go:75] "Attempting to register node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:11.095018 kubelet[2453]: E0128 02:06:11.094978 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.50.62:6443/api/v1/nodes\": dial tcp 10.230.50.62:6443: connect: connection refused" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:11.213185 kubelet[2453]: W0128 02:06:11.213018 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.50.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:11.213185 kubelet[2453]: E0128 02:06:11.213124 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.50.62:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:11.405140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1753777439.mount: Deactivated successfully. Jan 28 02:06:11.408538 containerd[1627]: time="2026-01-28T02:06:11.408479786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:06:11.410061 containerd[1627]: time="2026-01-28T02:06:11.409974633Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 02:06:11.410773 containerd[1627]: time="2026-01-28T02:06:11.410739715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:06:11.411919 containerd[1627]: time="2026-01-28T02:06:11.411823730Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:06:11.413178 containerd[1627]: time="2026-01-28T02:06:11.413128119Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:06:11.414584 containerd[1627]: time="2026-01-28T02:06:11.414444182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 28 02:06:11.414584 containerd[1627]: time="2026-01-28T02:06:11.414518264Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 28 02:06:11.418620 containerd[1627]: time="2026-01-28T02:06:11.418528400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 28 02:06:11.421580 containerd[1627]: 
time="2026-01-28T02:06:11.419917557Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 660.135311ms" Jan 28 02:06:11.423539 containerd[1627]: time="2026-01-28T02:06:11.423504352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 669.285564ms" Jan 28 02:06:11.429962 containerd[1627]: time="2026-01-28T02:06:11.429916958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 664.185058ms" Jan 28 02:06:11.660262 kubelet[2453]: W0128 02:06:11.659818 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.50.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:11.660262 kubelet[2453]: E0128 02:06:11.659900 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.50.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:11.663569 kubelet[2453]: W0128 02:06:11.663425 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.50.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:11.663569 kubelet[2453]: E0128 02:06:11.663501 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.50.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:11.721302 kubelet[2453]: E0128 02:06:11.721241 2453 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.50.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-rjxd2.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.50.62:6443: connect: connection refused" interval="1.6s" Jan 28 02:06:11.783254 containerd[1627]: time="2026-01-28T02:06:11.782226442Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:11.783254 containerd[1627]: time="2026-01-28T02:06:11.782329765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:11.783254 containerd[1627]: time="2026-01-28T02:06:11.782359062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.783254 containerd[1627]: time="2026-01-28T02:06:11.782512956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.791043 containerd[1627]: time="2026-01-28T02:06:11.790315009Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:11.791043 containerd[1627]: time="2026-01-28T02:06:11.790412301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:11.791043 containerd[1627]: time="2026-01-28T02:06:11.790473835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.791043 containerd[1627]: time="2026-01-28T02:06:11.790639890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.797150 containerd[1627]: time="2026-01-28T02:06:11.796933961Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:11.797150 containerd[1627]: time="2026-01-28T02:06:11.797086342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:11.797630 containerd[1627]: time="2026-01-28T02:06:11.797376784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.798137 containerd[1627]: time="2026-01-28T02:06:11.798049646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:11.855708 kubelet[2453]: W0128 02:06:11.853234 2453 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.50.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rjxd2.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.50.62:6443: connect: connection refused Jan 28 02:06:11.857075 kubelet[2453]: E0128 02:06:11.855671 2453 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.50.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-rjxd2.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:11.935525 kubelet[2453]: I0128 02:06:11.934515 2453 kubelet_node_status.go:75] "Attempting to register node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:11.935525 kubelet[2453]: E0128 02:06:11.935509 2453 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.50.62:6443/api/v1/nodes\": dial tcp 10.230.50.62:6443: connect: connection refused" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:11.940606 containerd[1627]: time="2026-01-28T02:06:11.937959597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-rjxd2.gb1.brightbox.com,Uid:413f12ea42e45f3e010878a6e0ac7d14,Namespace:kube-system,Attempt:0,} returns sandbox id \"e85e00bb9b4fbe6cecc26d31f4ea74b544af1beb59f73cba4f851678cc71089c\"" Jan 28 02:06:11.956227 containerd[1627]: time="2026-01-28T02:06:11.956056218Z" level=info msg="CreateContainer within sandbox \"e85e00bb9b4fbe6cecc26d31f4ea74b544af1beb59f73cba4f851678cc71089c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 28 02:06:11.966495 containerd[1627]: time="2026-01-28T02:06:11.966430774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-rjxd2.gb1.brightbox.com,Uid:92edf9d3639b04b154c7a8a894618efb,Namespace:kube-system,Attempt:0,} returns sandbox id \"a1bd2a63ddb70361f45773ee61c656cd5cda26777d1f4b2327f2932b1f9411a8\"" Jan 28 02:06:11.971655 containerd[1627]: time="2026-01-28T02:06:11.971519624Z" level=info msg="CreateContainer within sandbox \"a1bd2a63ddb70361f45773ee61c656cd5cda26777d1f4b2327f2932b1f9411a8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 28 02:06:11.977182 containerd[1627]: time="2026-01-28T02:06:11.976941088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-rjxd2.gb1.brightbox.com,Uid:a256014e0a0f80e7b48a33d6bee237ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"c2c1bee176fe4ee6aa5e9c7abb93d4fd953ffb1e8bfa2e155813dd5843eb59dd\"" Jan 28 02:06:11.978911 containerd[1627]: time="2026-01-28T02:06:11.978774588Z" level=info msg="CreateContainer within sandbox \"e85e00bb9b4fbe6cecc26d31f4ea74b544af1beb59f73cba4f851678cc71089c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"6444d81a7e70bacd81aba7a9975386d9a28bf89ed91fec1665fa1fcd33b3cfb3\"" Jan 28 02:06:11.979439 containerd[1627]: time="2026-01-28T02:06:11.979364787Z" level=info msg="StartContainer for \"6444d81a7e70bacd81aba7a9975386d9a28bf89ed91fec1665fa1fcd33b3cfb3\"" Jan 28 02:06:11.984922 containerd[1627]: time="2026-01-28T02:06:11.984825798Z" level=info msg="CreateContainer within sandbox \"c2c1bee176fe4ee6aa5e9c7abb93d4fd953ffb1e8bfa2e155813dd5843eb59dd\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 28 02:06:11.993749 containerd[1627]: time="2026-01-28T02:06:11.993656348Z" level=info msg="CreateContainer within sandbox \"a1bd2a63ddb70361f45773ee61c656cd5cda26777d1f4b2327f2932b1f9411a8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dcc9341eb56f910b3729d5298168ce6db8622817c5c1274ead889cbab719be86\"" Jan 28 02:06:11.995666 containerd[1627]: time="2026-01-28T02:06:11.995595199Z" level=info msg="StartContainer for \"dcc9341eb56f910b3729d5298168ce6db8622817c5c1274ead889cbab719be86\"" Jan 28 02:06:12.002151 containerd[1627]: time="2026-01-28T02:06:12.002015168Z" level=info msg="CreateContainer within sandbox \"c2c1bee176fe4ee6aa5e9c7abb93d4fd953ffb1e8bfa2e155813dd5843eb59dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"57d9024cccc9e464baa548048584ab7d072db5e301a396fa42a4516d195f829d\"" Jan 28 02:06:12.004587 containerd[1627]: time="2026-01-28T02:06:12.004492480Z" level=info msg="StartContainer for \"57d9024cccc9e464baa548048584ab7d072db5e301a396fa42a4516d195f829d\"" Jan 28 02:06:12.148204 containerd[1627]: time="2026-01-28T02:06:12.148095801Z" level=info msg="StartContainer for \"6444d81a7e70bacd81aba7a9975386d9a28bf89ed91fec1665fa1fcd33b3cfb3\" returns successfully" Jan 28 02:06:12.166667 containerd[1627]: time="2026-01-28T02:06:12.166605414Z" level=info msg="StartContainer for \"57d9024cccc9e464baa548048584ab7d072db5e301a396fa42a4516d195f829d\" returns successfully" Jan 28 02:06:12.212811 containerd[1627]: time="2026-01-28T02:06:12.212692439Z" level=info msg="StartContainer for \"dcc9341eb56f910b3729d5298168ce6db8622817c5c1274ead889cbab719be86\" returns successfully" Jan 28 02:06:12.383276 kubelet[2453]: E0128 02:06:12.383206 2453 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.50.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.50.62:6443: connect: connection refused" logger="UnhandledError" Jan 28 02:06:12.385982 kubelet[2453]: E0128 02:06:12.385951 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:12.405343 kubelet[2453]: E0128 02:06:12.404960 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:12.409586 kubelet[2453]: E0128 02:06:12.405858 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:13.404774 kubelet[2453]: E0128 02:06:13.403995 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:13.404774 kubelet[2453]: E0128 02:06:13.404392 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:13.542642 kubelet[2453]: I0128 02:06:13.540475 2453 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:14.412651 kubelet[2453]: E0128 02:06:14.410630 2453 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.149711 kubelet[2453]: E0128 02:06:15.149642 2453 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-rjxd2.gb1.brightbox.com\" not found" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.231591 kubelet[2453]: I0128 02:06:15.231460 2453 kubelet_node_status.go:78] "Successfully registered node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.231591 kubelet[2453]: E0128 02:06:15.231535 2453 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-rjxd2.gb1.brightbox.com\": node \"srv-rjxd2.gb1.brightbox.com\" not found" Jan 28 02:06:15.279986 kubelet[2453]: I0128 02:06:15.279146 2453 apiserver.go:52] "Watching apiserver" Jan 28 02:06:15.313786 kubelet[2453]: I0128 02:06:15.313649 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.319768 kubelet[2453]: I0128 02:06:15.319730 2453 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 02:06:15.357247 kubelet[2453]: E0128 02:06:15.357200 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rjxd2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.357247 kubelet[2453]: I0128 02:06:15.357231 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.360994 kubelet[2453]: E0128 02:06:15.360777 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.360994 kubelet[2453]: I0128 02:06:15.360806 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.363610 kubelet[2453]: E0128 02:06:15.363538 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.432726 kubelet[2453]: I0128 02:06:15.431738 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:15.438021 kubelet[2453]: E0128 02:06:15.437771 2453 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-rjxd2.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:16.449168 kubelet[2453]: I0128 02:06:16.448775 2453 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:16.458035 kubelet[2453]: W0128 02:06:16.457725 2453 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must 
not contain dots] Jan 28 02:06:17.505263 systemd[1]: Reloading requested from client PID 2730 ('systemctl') (unit session-9.scope)... Jan 28 02:06:17.505319 systemd[1]: Reloading... Jan 28 02:06:17.629594 zram_generator::config[2769]: No configuration found. Jan 28 02:06:17.813187 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 28 02:06:17.931429 systemd[1]: Reloading finished in 425 ms. Jan 28 02:06:17.980708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:06:17.997114 systemd[1]: kubelet.service: Deactivated successfully. Jan 28 02:06:17.997779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:18.004294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 28 02:06:18.238876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 28 02:06:18.239634 (kubelet)[2843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 28 02:06:18.358590 kubelet[2843]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:06:18.358590 kubelet[2843]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 28 02:06:18.358590 kubelet[2843]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 28 02:06:18.358590 kubelet[2843]: I0128 02:06:18.357829 2843 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 28 02:06:18.370590 kubelet[2843]: I0128 02:06:18.370521 2843 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 28 02:06:18.370938 kubelet[2843]: I0128 02:06:18.370895 2843 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 02:06:18.371607 kubelet[2843]: I0128 02:06:18.371575 2843 server.go:954] "Client rotation is on, will bootstrap in background" Jan 28 02:06:18.373631 kubelet[2843]: I0128 02:06:18.373610 2843 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 28 02:06:18.380579 kubelet[2843]: I0128 02:06:18.380541 2843 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 28 02:06:18.384678 kubelet[2843]: E0128 02:06:18.384638 2843 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 28 02:06:18.384913 kubelet[2843]: I0128 02:06:18.384892 2843 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 28 02:06:18.390492 kubelet[2843]: I0128 02:06:18.390460 2843 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 28 02:06:18.392020 kubelet[2843]: I0128 02:06:18.391984 2843 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 28 02:06:18.392770 kubelet[2843]: I0128 02:06:18.392129 2843 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-rjxd2.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 28 02:06:18.392770 kubelet[2843]: I0128 02:06:18.392538 2843 topology_manager.go:138] "Creating topology manager with none policy" Jan 28 02:06:18.392770 kubelet[2843]: I0128 02:06:18.392607 2843 container_manager_linux.go:304] "Creating device plugin manager" Jan 28 02:06:18.395228 kubelet[2843]: I0128 02:06:18.395196 2843 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:06:18.395800 kubelet[2843]: I0128 02:06:18.395780 2843 kubelet.go:446] "Attempting to sync node with API server" Jan 28 02:06:18.396619 kubelet[2843]: I0128 02:06:18.396598 2843 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 28 02:06:18.396830 kubelet[2843]: I0128 02:06:18.396801 2843 kubelet.go:352] "Adding apiserver pod source" Jan 28 02:06:18.396943 kubelet[2843]: I0128 02:06:18.396925 2843 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 28 02:06:18.407509 kubelet[2843]: I0128 02:06:18.405293 2843 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 28 02:06:18.414215 kubelet[2843]: I0128 02:06:18.412924 2843 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 28 02:06:18.414359 kubelet[2843]: I0128 02:06:18.414320 2843 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 28 02:06:18.414420 kubelet[2843]: I0128 02:06:18.414373 2843 server.go:1287] "Started kubelet" Jan 28 02:06:18.419863 kubelet[2843]: I0128 02:06:18.419526 2843 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 28 02:06:18.419863 kubelet[2843]: I0128 02:06:18.419702 2843 
ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 28 02:06:18.420827 kubelet[2843]: I0128 02:06:18.420725 2843 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 28 02:06:18.423441 kubelet[2843]: I0128 02:06:18.423420 2843 server.go:479] "Adding debug handlers to kubelet server" Jan 28 02:06:18.425300 kubelet[2843]: I0128 02:06:18.425248 2843 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 28 02:06:18.427155 kubelet[2843]: E0128 02:06:18.427131 2843 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 28 02:06:18.427838 kubelet[2843]: I0128 02:06:18.427772 2843 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 28 02:06:18.433039 kubelet[2843]: I0128 02:06:18.433015 2843 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 28 02:06:18.433497 kubelet[2843]: I0128 02:06:18.433474 2843 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 28 02:06:18.433868 kubelet[2843]: I0128 02:06:18.433849 2843 reconciler.go:26] "Reconciler: start to sync state" Jan 28 02:06:18.440341 kubelet[2843]: I0128 02:06:18.439945 2843 factory.go:221] Registration of the containerd container factory successfully Jan 28 02:06:18.440472 kubelet[2843]: I0128 02:06:18.440454 2843 factory.go:221] Registration of the systemd container factory successfully Jan 28 02:06:18.440735 kubelet[2843]: I0128 02:06:18.440684 2843 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 28 02:06:18.446541 kubelet[2843]: I0128 02:06:18.446508 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 02:06:18.448427 kubelet[2843]: I0128 02:06:18.448002 2843 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 28 02:06:18.448427 kubelet[2843]: I0128 02:06:18.448054 2843 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 28 02:06:18.448427 kubelet[2843]: I0128 02:06:18.448085 2843 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
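The "Setting rate limiting for endpoint" entry above shows the kubelet fencing its podresources gRPC service with a token bucket: a refill rate of qps=100 and a capacity of burstTokens=10. As a hedged illustration of that policy (a minimal sketch using golang.org/x/time/rate, not the kubelet's actual wiring):

```go
// Minimal sketch of the qps=100 / burstTokens=10 policy logged above.
// Illustrative only; the kubelet installs an equivalent limiter in front
// of its podresources endpoint.
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// Refill 100 tokens per second, hold at most 10 at once
	// (matching qps=100 burstTokens=10 from the log line).
	limiter := rate.NewLimiter(rate.Limit(100), 10)

	served, throttled := 0, 0
	for i := 0; i < 1000; i++ {
		if limiter.Allow() {
			served++ // request would be handled
		} else {
			throttled++ // request would be rejected with a rate-limit error
		}
	}
	fmt.Printf("served=%d throttled=%d\n", served, throttled)
}
```

Run back to back, only the 10-token burst (plus whatever refills during the loop) is served; the rest are throttled, which is the behavior the log line configures.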
Jan 28 02:06:18.448427 kubelet[2843]: I0128 02:06:18.448101 2843 kubelet.go:2382] "Starting kubelet main sync loop" Jan 28 02:06:18.448427 kubelet[2843]: E0128 02:06:18.448165 2843 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 02:06:18.548398 kubelet[2843]: E0128 02:06:18.548274 2843 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 02:06:18.552412 kubelet[2843]: I0128 02:06:18.552283 2843 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 28 02:06:18.552598 kubelet[2843]: I0128 02:06:18.552518 2843 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.552703 2843 state_mem.go:36] "Initialized new in-memory state store" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.552956 2843 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.552988 2843 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.553024 2843 policy_none.go:49] "None policy: Start" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.553061 2843 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.553089 2843 state_mem.go:35] "Initializing new in-memory state store" Jan 28 02:06:18.553606 kubelet[2843]: I0128 02:06:18.553263 2843 state_mem.go:75] "Updated machine memory state" Jan 28 02:06:18.558598 kubelet[2843]: I0128 02:06:18.557124 2843 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 02:06:18.558598 kubelet[2843]: I0128 02:06:18.557400 2843 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 02:06:18.558598 kubelet[2843]: I0128 02:06:18.557415 2843 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 02:06:18.559659 kubelet[2843]: I0128 02:06:18.559640 2843 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 02:06:18.561870 kubelet[2843]: E0128 02:06:18.561806 2843 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 28 02:06:18.682399 kubelet[2843]: I0128 02:06:18.682361 2843 kubelet_node_status.go:75] "Attempting to register node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.697358 kubelet[2843]: I0128 02:06:18.696345 2843 kubelet_node_status.go:124] "Node was previously registered" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.697358 kubelet[2843]: I0128 02:06:18.697082 2843 kubelet_node_status.go:78] "Successfully registered node" node="srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.751439 kubelet[2843]: I0128 02:06:18.750696 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.751439 kubelet[2843]: I0128 02:06:18.751240 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.751439 kubelet[2843]: I0128 02:06:18.751386 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.760072 kubelet[2843]: W0128 02:06:18.759566 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:06:18.760843 kubelet[2843]: W0128 02:06:18.760741 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:06:18.763003 kubelet[2843]: W0128 02:06:18.762971 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:06:18.763207 kubelet[2843]: E0128 02:06:18.763183 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.837832 kubelet[2843]: I0128 02:06:18.837684 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.838153 kubelet[2843]: I0128 02:06:18.838125 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-flexvolume-dir\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.838404 kubelet[2843]: I0128 02:06:18.838383 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a256014e0a0f80e7b48a33d6bee237ad-kubeconfig\") pod \"kube-scheduler-srv-rjxd2.gb1.brightbox.com\" (UID: \"a256014e0a0f80e7b48a33d6bee237ad\") " pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.838596 kubelet[2843]: I0128 02:06:18.838529 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-ca-certs\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.838874 kubelet[2843]: I0128 02:06:18.838734 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92edf9d3639b04b154c7a8a894618efb-k8s-certs\") pod \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" (UID: \"92edf9d3639b04b154c7a8a894618efb\") " pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.838874 kubelet[2843]: I0128 02:06:18.838839 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-ca-certs\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.839240 kubelet[2843]: I0128 02:06:18.839053 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-k8s-certs\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.839240 kubelet[2843]: I0128 02:06:18.839116 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-kubeconfig\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:18.839240 kubelet[2843]: I0128 02:06:18.839191 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/413f12ea42e45f3e010878a6e0ac7d14-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-rjxd2.gb1.brightbox.com\" (UID: \"413f12ea42e45f3e010878a6e0ac7d14\") " pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:19.398260 kubelet[2843]: I0128 02:06:19.397963 2843 apiserver.go:52] "Watching apiserver" Jan 28 02:06:19.433917 kubelet[2843]: I0128 02:06:19.433875 2843 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 28 02:06:19.501765 kubelet[2843]: I0128 02:06:19.501633 2843 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:19.512964 kubelet[2843]: W0128 02:06:19.512639 2843 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 28 02:06:19.512964 kubelet[2843]: E0128 02:06:19.512773 2843 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-rjxd2.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" Jan 28 02:06:19.552722 kubelet[2843]: I0128 02:06:19.552166 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-rjxd2.gb1.brightbox.com" 
podStartSLOduration=1.552135937 podStartE2EDuration="1.552135937s" podCreationTimestamp="2026-01-28 02:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:06:19.551152516 +0000 UTC m=+1.279053147" watchObservedRunningTime="2026-01-28 02:06:19.552135937 +0000 UTC m=+1.280036559" Jan 28 02:06:19.552722 kubelet[2843]: I0128 02:06:19.552298 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-rjxd2.gb1.brightbox.com" podStartSLOduration=1.552290308 podStartE2EDuration="1.552290308s" podCreationTimestamp="2026-01-28 02:06:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:06:19.537780298 +0000 UTC m=+1.265680933" watchObservedRunningTime="2026-01-28 02:06:19.552290308 +0000 UTC m=+1.280190931" Jan 28 02:06:19.579280 kubelet[2843]: I0128 02:06:19.579204 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-rjxd2.gb1.brightbox.com" podStartSLOduration=3.579184192 podStartE2EDuration="3.579184192s" podCreationTimestamp="2026-01-28 02:06:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:06:19.565025068 +0000 UTC m=+1.292925701" watchObservedRunningTime="2026-01-28 02:06:19.579184192 +0000 UTC m=+1.307084816" Jan 28 02:06:23.870914 kubelet[2843]: I0128 02:06:23.870783 2843 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 28 02:06:23.872427 containerd[1627]: time="2026-01-28T02:06:23.872246751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
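In the pod_startup_latency_tracker entries above, firstStartedPulling and lastFinishedPulling sit at Go's zero time (0001-01-01) because these static pods pulled no images, so podStartSLOduration equals podStartE2EDuration: the observed running time minus podCreationTimestamp. A small Go sketch of that arithmetic, using the kube-apiserver pod's values from the log (illustrative, not kubelet code):

```go
// Reproduces podStartSLOduration for the kube-apiserver static pod above:
// with zero pull timestamps, SLO duration == E2E duration
// == watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	const layout = "2006-01-02 15:04:05 -0700 MST" // matches the log's timestamp format
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-28 02:06:18 +0000 UTC")           // podCreationTimestamp
	running := mustParse("2026-01-28 02:06:19.552135937 +0000 UTC") // watchObservedRunningTime

	// Prints 1.552135937s, matching podStartSLOduration in the log.
	fmt.Println(running.Sub(created))
}
```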
Jan 28 02:06:23.873278 kubelet[2843]: I0128 02:06:23.872677 2843 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 28 02:06:24.572090 kubelet[2843]: I0128 02:06:24.571799 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73211136-2a61-4cc0-b9f4-be5ca4c92ca7-xtables-lock\") pod \"kube-proxy-lppb5\" (UID: \"73211136-2a61-4cc0-b9f4-be5ca4c92ca7\") " pod="kube-system/kube-proxy-lppb5" Jan 28 02:06:24.572090 kubelet[2843]: I0128 02:06:24.571868 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62qw\" (UniqueName: \"kubernetes.io/projected/73211136-2a61-4cc0-b9f4-be5ca4c92ca7-kube-api-access-h62qw\") pod \"kube-proxy-lppb5\" (UID: \"73211136-2a61-4cc0-b9f4-be5ca4c92ca7\") " pod="kube-system/kube-proxy-lppb5" Jan 28 02:06:24.572090 kubelet[2843]: I0128 02:06:24.571955 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73211136-2a61-4cc0-b9f4-be5ca4c92ca7-kube-proxy\") pod \"kube-proxy-lppb5\" (UID: \"73211136-2a61-4cc0-b9f4-be5ca4c92ca7\") " pod="kube-system/kube-proxy-lppb5" Jan 28 02:06:24.572090 kubelet[2843]: I0128 02:06:24.571985 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73211136-2a61-4cc0-b9f4-be5ca4c92ca7-lib-modules\") pod \"kube-proxy-lppb5\" (UID: \"73211136-2a61-4cc0-b9f4-be5ca4c92ca7\") " pod="kube-system/kube-proxy-lppb5" Jan 28 02:06:24.836386 containerd[1627]: time="2026-01-28T02:06:24.834076700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lppb5,Uid:73211136-2a61-4cc0-b9f4-be5ca4c92ca7,Namespace:kube-system,Attempt:0,}" Jan 28 02:06:24.912597 containerd[1627]: time="2026-01-28T02:06:24.911909242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:24.912597 containerd[1627]: time="2026-01-28T02:06:24.912033597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:24.912597 containerd[1627]: time="2026-01-28T02:06:24.912051802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:24.912597 containerd[1627]: time="2026-01-28T02:06:24.912243244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:25.045842 containerd[1627]: time="2026-01-28T02:06:25.045765863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lppb5,Uid:73211136-2a61-4cc0-b9f4-be5ca4c92ca7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2deaaa67b4515e6920c72cce006059d35802fa12508e3ef0175fdbc14fbc9b9\"" Jan 28 02:06:25.055527 containerd[1627]: time="2026-01-28T02:06:25.055487687Z" level=info msg="CreateContainer within sandbox \"b2deaaa67b4515e6920c72cce006059d35802fa12508e3ef0175fdbc14fbc9b9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 28 02:06:25.075525 kubelet[2843]: I0128 02:06:25.075454 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmqb9\" (UniqueName: \"kubernetes.io/projected/0ccf52d7-2867-4a5a-8472-a2417fc13117-kube-api-access-vmqb9\") pod \"tigera-operator-7dcd859c48-cndwt\" (UID: \"0ccf52d7-2867-4a5a-8472-a2417fc13117\") " pod="tigera-operator/tigera-operator-7dcd859c48-cndwt" Jan 28 02:06:25.076054 kubelet[2843]: I0128 02:06:25.075561 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0ccf52d7-2867-4a5a-8472-a2417fc13117-var-lib-calico\") pod \"tigera-operator-7dcd859c48-cndwt\" (UID: \"0ccf52d7-2867-4a5a-8472-a2417fc13117\") " pod="tigera-operator/tigera-operator-7dcd859c48-cndwt" Jan 28 02:06:25.091961 containerd[1627]: time="2026-01-28T02:06:25.091860660Z" level=info msg="CreateContainer within sandbox \"b2deaaa67b4515e6920c72cce006059d35802fa12508e3ef0175fdbc14fbc9b9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81d83c2cc66cbb400a29b00f34b987e288844df4b33b42fa160706f6f6460e98\"" Jan 28 02:06:25.094788 containerd[1627]: time="2026-01-28T02:06:25.093634834Z" level=info msg="StartContainer for \"81d83c2cc66cbb400a29b00f34b987e288844df4b33b42fa160706f6f6460e98\"" Jan 28 02:06:25.178313 containerd[1627]: time="2026-01-28T02:06:25.177047781Z" level=info msg="StartContainer for \"81d83c2cc66cbb400a29b00f34b987e288844df4b33b42fa160706f6f6460e98\" returns successfully" Jan 28 02:06:25.336370 containerd[1627]: time="2026-01-28T02:06:25.336272581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cndwt,Uid:0ccf52d7-2867-4a5a-8472-a2417fc13117,Namespace:tigera-operator,Attempt:0,}" Jan 28 02:06:25.379288 containerd[1627]: time="2026-01-28T02:06:25.379118968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:25.381547 containerd[1627]: time="2026-01-28T02:06:25.381251673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:25.381547 containerd[1627]: time="2026-01-28T02:06:25.381283144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:25.381547 containerd[1627]: time="2026-01-28T02:06:25.381435180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:25.509494 containerd[1627]: time="2026-01-28T02:06:25.509035126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-cndwt,Uid:0ccf52d7-2867-4a5a-8472-a2417fc13117,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"555cae1e65e41fb578151485a940610543981568f8bcd0d9de69b19d831a1d77\"" Jan 28 02:06:25.512958 containerd[1627]: time="2026-01-28T02:06:25.512919084Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 28 02:06:25.706146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4230311715.mount: Deactivated successfully. Jan 28 02:06:27.697077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4157264384.mount: Deactivated successfully. Jan 28 02:06:28.386444 kubelet[2843]: I0128 02:06:28.386302 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lppb5" podStartSLOduration=4.386221778 podStartE2EDuration="4.386221778s" podCreationTimestamp="2026-01-28 02:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:06:25.548436408 +0000 UTC m=+7.276337044" watchObservedRunningTime="2026-01-28 02:06:28.386221778 +0000 UTC m=+10.114122400" Jan 28 02:06:28.780447 containerd[1627]: time="2026-01-28T02:06:28.780323005Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:28.782527 containerd[1627]: time="2026-01-28T02:06:28.782478131Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 28 02:06:28.784698 containerd[1627]: time="2026-01-28T02:06:28.784626340Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:28.787673 containerd[1627]: time="2026-01-28T02:06:28.787605570Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:06:28.790149 containerd[1627]: time="2026-01-28T02:06:28.789180098Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.27619827s" Jan 28 02:06:28.790149 containerd[1627]: time="2026-01-28T02:06:28.789218986Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 28 02:06:28.798223 containerd[1627]: time="2026-01-28T02:06:28.797372185Z" level=info msg="CreateContainer within sandbox \"555cae1e65e41fb578151485a940610543981568f8bcd0d9de69b19d831a1d77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 28 02:06:28.813544 containerd[1627]: time="2026-01-28T02:06:28.812846650Z" level=info msg="CreateContainer within sandbox \"555cae1e65e41fb578151485a940610543981568f8bcd0d9de69b19d831a1d77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a27a89003158d008bd969e4cff5d9e7f81d57402223d09ec11587ed6d1617ef3\"" 
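The pull above fetched quay.io/tigera/operator:v1.38.7 (about 25 MB, per "bytes read=25061691") in roughly 3.28 s, i.e. around 7.6 MB/s. The kubelet drives this through the CRI, but the same pull can be reproduced against containerd's Go client (v1.7, matching the v1.7.21 runtime in the log). In this sketch the socket path and the "k8s.io" namespace are assumptions typical of a Kubernetes node, not values read from this log:

```go
// Hedged sketch: replay the PullImage step against containerd's Go client.
// The kubelet itself goes through the CRI API; this is only an equivalent
// standalone reproduction.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumed default containerd socket path.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// The log reports ~25 MB pulled in ~3.28 s for this image.
	fmt.Printf("pulled %s (%s) in %s\n",
		img.Name(), img.Target().Digest, time.Since(start))
}
```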
Jan 28 02:06:28.815606 containerd[1627]: time="2026-01-28T02:06:28.814538621Z" level=info msg="StartContainer for \"a27a89003158d008bd969e4cff5d9e7f81d57402223d09ec11587ed6d1617ef3\"" Jan 28 02:06:28.864081 systemd[1]: run-containerd-runc-k8s.io-a27a89003158d008bd969e4cff5d9e7f81d57402223d09ec11587ed6d1617ef3-runc.gZjdy2.mount: Deactivated successfully. Jan 28 02:06:28.903631 containerd[1627]: time="2026-01-28T02:06:28.902805901Z" level=info msg="StartContainer for \"a27a89003158d008bd969e4cff5d9e7f81d57402223d09ec11587ed6d1617ef3\" returns successfully" Jan 28 02:06:31.113317 kubelet[2843]: I0128 02:06:31.112961 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-cndwt" podStartSLOduration=3.831793439 podStartE2EDuration="7.112924464s" podCreationTimestamp="2026-01-28 02:06:24 +0000 UTC" firstStartedPulling="2026-01-28 02:06:25.510746448 +0000 UTC m=+7.238647083" lastFinishedPulling="2026-01-28 02:06:28.791877498 +0000 UTC m=+10.519778108" observedRunningTime="2026-01-28 02:06:29.552311246 +0000 UTC m=+11.280211873" watchObservedRunningTime="2026-01-28 02:06:31.112924464 +0000 UTC m=+12.840825088" Jan 28 02:06:36.252444 sudo[1884]: pam_unix(sudo:session): session closed for user root Jan 28 02:06:36.351985 sshd[1880]: pam_unix(sshd:session): session closed for user core Jan 28 02:06:36.369157 systemd[1]: sshd@6-10.230.50.62:22-68.220.241.50:33426.service: Deactivated successfully. Jan 28 02:06:36.389450 systemd-logind[1601]: Session 9 logged out. Waiting for processes to exit. Jan 28 02:06:36.391840 systemd[1]: session-9.scope: Deactivated successfully. Jan 28 02:06:36.398033 systemd-logind[1601]: Removed session 9. Jan 28 02:06:43.400503 kubelet[2843]: I0128 02:06:43.399925 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4b8e718-150e-4bd9-8ae7-859dc217c1d0-tigera-ca-bundle\") pod \"calico-typha-6b777cdc69-pqjxf\" (UID: \"b4b8e718-150e-4bd9-8ae7-859dc217c1d0\") " pod="calico-system/calico-typha-6b777cdc69-pqjxf" Jan 28 02:06:43.400503 kubelet[2843]: I0128 02:06:43.400008 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b4b8e718-150e-4bd9-8ae7-859dc217c1d0-typha-certs\") pod \"calico-typha-6b777cdc69-pqjxf\" (UID: \"b4b8e718-150e-4bd9-8ae7-859dc217c1d0\") " pod="calico-system/calico-typha-6b777cdc69-pqjxf" Jan 28 02:06:43.400503 kubelet[2843]: I0128 02:06:43.400057 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2td\" (UniqueName: \"kubernetes.io/projected/b4b8e718-150e-4bd9-8ae7-859dc217c1d0-kube-api-access-tl2td\") pod \"calico-typha-6b777cdc69-pqjxf\" (UID: \"b4b8e718-150e-4bd9-8ae7-859dc217c1d0\") " pod="calico-system/calico-typha-6b777cdc69-pqjxf" Jan 28 02:06:43.502602 kubelet[2843]: I0128 02:06:43.501773 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-cni-bin-dir\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.502602 kubelet[2843]: I0128 02:06:43.501820 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-lib-modules\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.502602 kubelet[2843]: I0128 02:06:43.501849 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-var-run-calico\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.503874 kubelet[2843]: I0128 02:06:43.502976 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-cni-log-dir\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.503874 kubelet[2843]: I0128 02:06:43.503639 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0693e23d-897d-45a8-b233-0d8b48fcba69-tigera-ca-bundle\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.503874 kubelet[2843]: I0128 02:06:43.503829 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrfcq\" (UniqueName: \"kubernetes.io/projected/0693e23d-897d-45a8-b233-0d8b48fcba69-kube-api-access-hrfcq\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.506572 kubelet[2843]: I0128 02:06:43.504410 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-policysync\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.506572 kubelet[2843]: I0128 02:06:43.504477 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-cni-net-dir\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.506572 kubelet[2843]: I0128 02:06:43.504506 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0693e23d-897d-45a8-b233-0d8b48fcba69-node-certs\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.506572 kubelet[2843]: I0128 02:06:43.504533 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-var-lib-calico\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.506969 kubelet[2843]: I0128 02:06:43.506944 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-xtables-lock\") pod 
\"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.507133 kubelet[2843]: I0128 02:06:43.507099 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0693e23d-897d-45a8-b233-0d8b48fcba69-flexvol-driver-host\") pod \"calico-node-ghssf\" (UID: \"0693e23d-897d-45a8-b233-0d8b48fcba69\") " pod="calico-system/calico-node-ghssf" Jan 28 02:06:43.625703 containerd[1627]: time="2026-01-28T02:06:43.620986956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b777cdc69-pqjxf,Uid:b4b8e718-150e-4bd9-8ae7-859dc217c1d0,Namespace:calico-system,Attempt:0,}" Jan 28 02:06:43.639613 kubelet[2843]: E0128 02:06:43.635210 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.639613 kubelet[2843]: W0128 02:06:43.635298 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.639613 kubelet[2843]: E0128 02:06:43.635394 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.645272 kubelet[2843]: E0128 02:06:43.642733 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.645272 kubelet[2843]: W0128 02:06:43.642761 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.645272 kubelet[2843]: E0128 02:06:43.642788 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.657020 kubelet[2843]: E0128 02:06:43.655180 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.657020 kubelet[2843]: W0128 02:06:43.655284 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.657020 kubelet[2843]: E0128 02:06:43.655323 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.715266 containerd[1627]: time="2026-01-28T02:06:43.714867801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:43.715598 containerd[1627]: time="2026-01-28T02:06:43.715245724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:43.715847 containerd[1627]: time="2026-01-28T02:06:43.715766527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:43.716390 containerd[1627]: time="2026-01-28T02:06:43.716306599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:43.759453 containerd[1627]: time="2026-01-28T02:06:43.759207435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ghssf,Uid:0693e23d-897d-45a8-b233-0d8b48fcba69,Namespace:calico-system,Attempt:0,}" Jan 28 02:06:43.805716 kubelet[2843]: E0128 02:06:43.804786 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:06:43.819583 containerd[1627]: time="2026-01-28T02:06:43.819028852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:06:43.819583 containerd[1627]: time="2026-01-28T02:06:43.819117849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:06:43.819583 containerd[1627]: time="2026-01-28T02:06:43.819148673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:43.822375 containerd[1627]: time="2026-01-28T02:06:43.820490368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:06:43.883305 kubelet[2843]: E0128 02:06:43.883217 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.883305 kubelet[2843]: W0128 02:06:43.883280 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.883536 kubelet[2843]: E0128 02:06:43.883321 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.884683 kubelet[2843]: E0128 02:06:43.883781 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.884683 kubelet[2843]: W0128 02:06:43.884105 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.884683 kubelet[2843]: E0128 02:06:43.884129 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" [the three-message FlexVolume probe failure above (driver-call.go:262 "Failed to unmarshal output", driver-call.go:149 executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds not found in $PATH, plugins.go:695 "Error dynamically probing plugins") repeats 19 more times between 02:06:43.884 and 02:06:43.920; duplicates elided] Jan 28 02:06:43.921383 kubelet[2843]: E0128 02:06:43.920865 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.921383 kubelet[2843]: W0128 02:06:43.920891 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.921383 kubelet[2843]: E0128 02:06:43.920908 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 28 02:06:43.921383 kubelet[2843]: I0128 02:06:43.921214 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11c042ea-f3ed-451b-a4e5-0f06212804a3-socket-dir\") pod \"csi-node-driver-dgqzm\" (UID: \"11c042ea-f3ed-451b-a4e5-0f06212804a3\") " pod="calico-system/csi-node-driver-dgqzm" Jan 28 02:06:43.921383 kubelet[2843]: E0128 02:06:43.921302 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.921383 kubelet[2843]: W0128 02:06:43.921315 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.921383 kubelet[2843]: E0128 02:06:43.921330 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.922815 kubelet[2843]: E0128 02:06:43.922406 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.922815 kubelet[2843]: W0128 02:06:43.922426 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.922815 kubelet[2843]: E0128 02:06:43.922441 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 28 02:06:43.922815 kubelet[2843]: I0128 02:06:43.922490 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/11c042ea-f3ed-451b-a4e5-0f06212804a3-varrun\") pod \"csi-node-driver-dgqzm\" (UID: \"11c042ea-f3ed-451b-a4e5-0f06212804a3\") " pod="calico-system/csi-node-driver-dgqzm" Jan 28 02:06:43.923955 kubelet[2843]: E0128 02:06:43.923518 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 28 02:06:43.923955 kubelet[2843]: W0128 02:06:43.923537 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 28 02:06:43.923955 kubelet[2843]: E0128 02:06:43.923584 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 28 02:06:43.923955 kubelet[2843]: I0128 02:06:43.923624 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11c042ea-f3ed-451b-a4e5-0f06212804a3-registration-dir\") pod \"csi-node-driver-dgqzm\" (UID: \"11c042ea-f3ed-451b-a4e5-0f06212804a3\") " pod="calico-system/csi-node-driver-dgqzm"
Jan 28 02:06:43.924966 kubelet[2843]: E0128 02:06:43.924594 2843 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 28 02:06:43.924966 kubelet[2843]: W0128 02:06:43.924611 2843 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 28 02:06:43.924966 kubelet[2843]: E0128 02:06:43.924634 2843 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the three FlexVolume messages above repeat essentially verbatim about thirty-five more times between 02:06:43.925 and 02:06:44.105, differing only in timestamps and occasionally in message interleaving; the unique entries from that window are kept below]
Jan 28 02:06:43.927105 kubelet[2843]: I0128 02:06:43.927052 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11c042ea-f3ed-451b-a4e5-0f06212804a3-kubelet-dir\") pod \"csi-node-driver-dgqzm\" (UID: \"11c042ea-f3ed-451b-a4e5-0f06212804a3\") " pod="calico-system/csi-node-driver-dgqzm"
Jan 28 02:06:43.931330 kubelet[2843]: I0128 02:06:43.930972 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l69f\" (UniqueName: \"kubernetes.io/projected/11c042ea-f3ed-451b-a4e5-0f06212804a3-kube-api-access-9l69f\") pod \"csi-node-driver-dgqzm\" (UID: \"11c042ea-f3ed-451b-a4e5-0f06212804a3\") " pod="calico-system/csi-node-driver-dgqzm"
Jan 28 02:06:43.937242 containerd[1627]: time="2026-01-28T02:06:43.937005341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ghssf,Uid:0693e23d-897d-45a8-b233-0d8b48fcba69,Namespace:calico-system,Attempt:0,} returns sandbox id \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\""
Jan 28 02:06:43.957898 containerd[1627]: time="2026-01-28T02:06:43.957739571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 28 02:06:44.079645 containerd[1627]: time="2026-01-28T02:06:44.079526029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b777cdc69-pqjxf,Uid:b4b8e718-150e-4bd9-8ae7-859dc217c1d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"25cab1431199a10a81751854a54af130c2104c17251891c989e5c97f810a3b59\""
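The three FlexVolume messages form a single causal chain: on each plugin re-probe the kubelet execs `<driver> init` for every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and expects a JSON status document on stdout. The nodeagent~uds driver binary is not installed yet, so the exec fails ("executable file not found in $PATH"), stdout stays empty, and decoding the empty output produces "unexpected end of JSON input". A minimal Go sketch of that failure mode (illustrative only, not the kubelet's actual code; the bare "uds" command name stands in for a binary that is not on $PATH):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Exec the driver the way the kubelet would; with the binary
        // missing this reproduces the first error in the log.
        out, err := exec.Command("uds", "init").Output()
        if err != nil {
            fmt.Println("driver call failed:", err) // executable file not found in $PATH
        }
        // The kubelet then tries to decode whatever the driver printed;
        // an empty buffer reproduces the second error.
        var status map[string]interface{}
        if err := json.Unmarshal(out, &status); err != nil {
            fmt.Println("unmarshal failed:", err) // unexpected end of JSON input
        }
    }

The noise is self-curing here: the flexvol-driver init container pulled just below (pod2daemon-flexvol) is what installs that uds binary, which is consistent with the probe failures stopping shortly after it runs.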
Jan 28 02:06:45.452052 kubelet[2843]: E0128 02:06:45.451076 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:45.625522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3251518103.mount: Deactivated successfully.
Jan 28 02:06:45.771330 containerd[1627]: time="2026-01-28T02:06:45.771098281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:45.782155 containerd[1627]: time="2026-01-28T02:06:45.782071809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492"
Jan 28 02:06:45.783282 containerd[1627]: time="2026-01-28T02:06:45.783104086Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:45.789404 containerd[1627]: time="2026-01-28T02:06:45.789368895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:45.792001 containerd[1627]: time="2026-01-28T02:06:45.791954706Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.834130535s"
Jan 28 02:06:45.792118 containerd[1627]: time="2026-01-28T02:06:45.792092139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 28 02:06:45.795496 containerd[1627]: time="2026-01-28T02:06:45.795467254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 28 02:06:45.809263 containerd[1627]: time="2026-01-28T02:06:45.809203466Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 28 02:06:45.832723 containerd[1627]: time="2026-01-28T02:06:45.832668367Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b\""
Jan 28 02:06:45.842179 containerd[1627]: time="2026-01-28T02:06:45.839880480Z" level=info msg="StartContainer for \"795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b\""
Jan 28 02:06:45.942925 containerd[1627]: time="2026-01-28T02:06:45.942851721Z" level=info msg="StartContainer for \"795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b\" returns successfully"
Jan 28 02:06:46.088438 containerd[1627]: time="2026-01-28T02:06:46.030930790Z" level=info msg="shim disconnected" id=795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b namespace=k8s.io
Jan 28 02:06:46.089153 containerd[1627]: time="2026-01-28T02:06:46.088869182Z" level=warning msg="cleaning up after shim disconnected" id=795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b namespace=k8s.io
Jan 28 02:06:46.089153 containerd[1627]: time="2026-01-28T02:06:46.088910344Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:06:46.561392 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-795795e2aff44336e2321cbb7917492a06c506dfe16e805bb084a4387f207e3b-rootfs.mount: Deactivated successfully.
Jan 28 02:06:47.449712 kubelet[2843]: E0128 02:06:47.449629 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:48.937852 containerd[1627]: time="2026-01-28T02:06:48.937763585Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:48.939398 containerd[1627]: time="2026-01-28T02:06:48.938610567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Jan 28 02:06:48.939819 containerd[1627]: time="2026-01-28T02:06:48.939790916Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:48.943809 containerd[1627]: time="2026-01-28T02:06:48.943778196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:48.944947 containerd[1627]: time="2026-01-28T02:06:48.944898770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.14795844s"
Jan 28 02:06:48.945082 containerd[1627]: time="2026-01-28T02:06:48.945056187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 28 02:06:48.946917 containerd[1627]: time="2026-01-28T02:06:48.946878723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 28 02:06:48.969547 containerd[1627]: time="2026-01-28T02:06:48.969355311Z" level=info msg="CreateContainer within sandbox \"25cab1431199a10a81751854a54af130c2104c17251891c989e5c97f810a3b59\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 28 02:06:48.987149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3673512054.mount: Deactivated successfully.
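Two things are worth noting in the stretch above. First, the "shim disconnected" / "cleaning up dead shim" warnings record the flexvol-driver init container exiting after it completed; for a run-to-completion container this is the normal lifecycle, not a crash. Second, the recurring pod_workers error for csi-node-driver-dgqzm ("cni plugin not initialized") means the kubelet will not sync pods that need pod networking until a network plugin publishes a CNI configuration; Calico's install-cni container, prepared below, is what eventually writes one. A rough way to watch for that condition (a sketch assuming containerd's default CNI config directory, /etc/cni/net.d):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // "cni plugin not initialized" clears once a config file shows up
        // in the CNI conf dir; /etc/cni/net.d is containerd's default.
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil || len(entries) == 0 {
            fmt.Println("no CNI network config yet; pods needing pod networking stay pending")
            return
        }
        for _, e := range entries {
            fmt.Println("found CNI config:", e.Name())
        }
    }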
Jan 28 02:06:48.998770 containerd[1627]: time="2026-01-28T02:06:48.998712829Z" level=info msg="CreateContainer within sandbox \"25cab1431199a10a81751854a54af130c2104c17251891c989e5c97f810a3b59\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9844a01e7b1c545e62b72dad6b3d40e8ce61db217674558da857ece3261743b7\""
Jan 28 02:06:48.999849 containerd[1627]: time="2026-01-28T02:06:48.999697972Z" level=info msg="StartContainer for \"9844a01e7b1c545e62b72dad6b3d40e8ce61db217674558da857ece3261743b7\""
Jan 28 02:06:49.191651 containerd[1627]: time="2026-01-28T02:06:49.191515577Z" level=info msg="StartContainer for \"9844a01e7b1c545e62b72dad6b3d40e8ce61db217674558da857ece3261743b7\" returns successfully"
Jan 28 02:06:49.449339 kubelet[2843]: E0128 02:06:49.449152 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:49.683799 kubelet[2843]: I0128 02:06:49.683683 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b777cdc69-pqjxf" podStartSLOduration=1.818632365 podStartE2EDuration="6.682245556s" podCreationTimestamp="2026-01-28 02:06:43 +0000 UTC" firstStartedPulling="2026-01-28 02:06:44.082740133 +0000 UTC m=+25.810640744" lastFinishedPulling="2026-01-28 02:06:48.946353325 +0000 UTC m=+30.674253935" observedRunningTime="2026-01-28 02:06:49.681779349 +0000 UTC m=+31.409679976" watchObservedRunningTime="2026-01-28 02:06:49.682245556 +0000 UTC m=+31.410146180"
Jan 28 02:06:50.641935 kubelet[2843]: I0128 02:06:50.641807 2843 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 02:06:51.449611 kubelet[2843]: E0128 02:06:51.448964 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:53.449498 kubelet[2843]: E0128 02:06:53.449354 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:54.920896 containerd[1627]: time="2026-01-28T02:06:54.920819101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:54.923071 containerd[1627]: time="2026-01-28T02:06:54.922996337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 28 02:06:54.923407 containerd[1627]: time="2026-01-28T02:06:54.923355489Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:54.926877 containerd[1627]: time="2026-01-28T02:06:54.926325784Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 28 02:06:54.928186 containerd[1627]: time="2026-01-28T02:06:54.927529745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.980596928s"
Jan 28 02:06:54.928186 containerd[1627]: time="2026-01-28T02:06:54.927599147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 28 02:06:54.933268 containerd[1627]: time="2026-01-28T02:06:54.933231422Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 28 02:06:55.082844 containerd[1627]: time="2026-01-28T02:06:55.082784184Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52\""
Jan 28 02:06:55.086775 containerd[1627]: time="2026-01-28T02:06:55.084776473Z" level=info msg="StartContainer for \"97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52\""
Jan 28 02:06:55.163222 systemd[1]: run-containerd-runc-k8s.io-97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52-runc.eAObgi.mount: Deactivated successfully.
Jan 28 02:06:55.217009 containerd[1627]: time="2026-01-28T02:06:55.216379642Z" level=info msg="StartContainer for \"97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52\" returns successfully"
Jan 28 02:06:55.451400 kubelet[2843]: E0128 02:06:55.451216 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:06:56.415025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52-rootfs.mount: Deactivated successfully.
Jan 28 02:06:56.419212 containerd[1627]: time="2026-01-28T02:06:56.418351725Z" level=info msg="shim disconnected" id=97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52 namespace=k8s.io
Jan 28 02:06:56.419212 containerd[1627]: time="2026-01-28T02:06:56.418516967Z" level=warning msg="cleaning up after shim disconnected" id=97092f29c478d6df1a676fb7f802b54e4e32a590a7b3d44d5f50328be2839e52 namespace=k8s.io
Jan 28 02:06:56.419212 containerd[1627]: time="2026-01-28T02:06:56.418553027Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 28 02:06:56.462421 kubelet[2843]: I0128 02:06:56.462351 2843 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 28 02:06:56.653222 kubelet[2843]: I0128 02:06:56.653161 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhrfq\" (UniqueName: \"kubernetes.io/projected/9d134562-e8ff-432f-bfe2-7f69c1332017-kube-api-access-lhrfq\") pod \"coredns-668d6bf9bc-drn4k\" (UID: \"9d134562-e8ff-432f-bfe2-7f69c1332017\") " pod="kube-system/coredns-668d6bf9bc-drn4k"
Jan 28 02:06:56.653462 kubelet[2843]: I0128 02:06:56.653427 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8b2fed1f-0989-4c8f-98b5-dfc06958e7db-goldmane-key-pair\") pod \"goldmane-666569f655-qjnq7\" (UID: \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\") " pod="calico-system/goldmane-666569f655-qjnq7"
Jan 28 02:06:56.654049 kubelet[2843]: I0128 02:06:56.653604 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91d0c55e-7a98-404e-a4c2-3c6f8edba99c-calico-apiserver-certs\") pod \"calico-apiserver-66dfb7f7f9-w9b4h\" (UID: \"91d0c55e-7a98-404e-a4c2-3c6f8edba99c\") " pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h"
Jan 28 02:06:56.654049 kubelet[2843]: I0128 02:06:56.653982 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsks6\" (UniqueName: \"kubernetes.io/projected/785cc68f-05e2-4db5-9c76-e2a7381de538-kube-api-access-dsks6\") pod \"whisker-8f88cb457-rs5wl\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " pod="calico-system/whisker-8f88cb457-rs5wl"
Jan 28 02:06:56.654251 kubelet[2843]: I0128 02:06:56.654109 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d134562-e8ff-432f-bfe2-7f69c1332017-config-volume\") pod \"coredns-668d6bf9bc-drn4k\" (UID: \"9d134562-e8ff-432f-bfe2-7f69c1332017\") " pod="kube-system/coredns-668d6bf9bc-drn4k"
Jan 28 02:06:56.654457 kubelet[2843]: I0128 02:06:56.654270 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdd2c\" (UniqueName: \"kubernetes.io/projected/8b2fed1f-0989-4c8f-98b5-dfc06958e7db-kube-api-access-wdd2c\") pod \"goldmane-666569f655-qjnq7\" (UID: \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\") " pod="calico-system/goldmane-666569f655-qjnq7"
Jan 28 02:06:56.654959 kubelet[2843]: I0128 02:06:56.654489 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-ca-bundle\") pod \"whisker-8f88cb457-rs5wl\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " pod="calico-system/whisker-8f88cb457-rs5wl"
Jan 28 02:06:56.654959 kubelet[2843]: I0128 02:06:56.654693 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngvzc\" (UniqueName: \"kubernetes.io/projected/8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4-kube-api-access-ngvzc\") pod \"coredns-668d6bf9bc-clcr9\" (UID: \"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4\") " pod="kube-system/coredns-668d6bf9bc-clcr9"
Jan 28 02:06:56.654959 kubelet[2843]: I0128 02:06:56.654871 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b2fed1f-0989-4c8f-98b5-dfc06958e7db-goldmane-ca-bundle\") pod \"goldmane-666569f655-qjnq7\" (UID: \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\") " pod="calico-system/goldmane-666569f655-qjnq7"
Jan 28 02:06:56.655303 kubelet[2843]: I0128 02:06:56.655123 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1159a5fb-a1ac-4f76-832d-c5be127c9405-tigera-ca-bundle\") pod \"calico-kube-controllers-5c95698587-q576f\" (UID: \"1159a5fb-a1ac-4f76-832d-c5be127c9405\") " pod="calico-system/calico-kube-controllers-5c95698587-q576f"
Jan 28 02:06:56.655430 kubelet[2843]: I0128 02:06:56.655352 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhs92\" (UniqueName: \"kubernetes.io/projected/91d0c55e-7a98-404e-a4c2-3c6f8edba99c-kube-api-access-jhs92\") pod \"calico-apiserver-66dfb7f7f9-w9b4h\" (UID: \"91d0c55e-7a98-404e-a4c2-3c6f8edba99c\") " pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h"
Jan 28 02:06:56.655583 kubelet[2843]: I0128 02:06:56.655526 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb9wz\" (UniqueName: \"kubernetes.io/projected/1159a5fb-a1ac-4f76-832d-c5be127c9405-kube-api-access-zb9wz\") pod \"calico-kube-controllers-5c95698587-q576f\" (UID: \"1159a5fb-a1ac-4f76-832d-c5be127c9405\") " pod="calico-system/calico-kube-controllers-5c95698587-q576f"
Jan 28 02:06:56.655887 kubelet[2843]: I0128 02:06:56.655772 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b2fed1f-0989-4c8f-98b5-dfc06958e7db-config\") pod \"goldmane-666569f655-qjnq7\" (UID: \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\") " pod="calico-system/goldmane-666569f655-qjnq7"
Jan 28 02:06:56.655981 kubelet[2843]: I0128 02:06:56.655930 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a4d006c-455e-43f1-8c29-a9bee0e4e963-calico-apiserver-certs\") pod \"calico-apiserver-66dfb7f7f9-v4czv\" (UID: \"9a4d006c-455e-43f1-8c29-a9bee0e4e963\") " pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv"
Jan 28 02:06:56.656282 kubelet[2843]: I0128 02:06:56.656195 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znfnm\" (UniqueName: \"kubernetes.io/projected/9a4d006c-455e-43f1-8c29-a9bee0e4e963-kube-api-access-znfnm\") pod \"calico-apiserver-66dfb7f7f9-v4czv\" (UID: \"9a4d006c-455e-43f1-8c29-a9bee0e4e963\") " pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv"
Jan 28 02:06:56.656744 kubelet[2843]: I0128 02:06:56.656236 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4-config-volume\") pod \"coredns-668d6bf9bc-clcr9\" (UID: \"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4\") " pod="kube-system/coredns-668d6bf9bc-clcr9"
Jan 28 02:06:56.656744 kubelet[2843]: I0128 02:06:56.656591 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-backend-key-pair\") pod \"whisker-8f88cb457-rs5wl\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " pod="calico-system/whisker-8f88cb457-rs5wl"
Jan 28 02:06:56.698800 containerd[1627]: time="2026-01-28T02:06:56.698039019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 28 02:06:56.850438 containerd[1627]: time="2026-01-28T02:06:56.850365988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drn4k,Uid:9d134562-e8ff-432f-bfe2-7f69c1332017,Namespace:kube-system,Attempt:0,}"
Jan 28 02:06:56.857182 containerd[1627]: time="2026-01-28T02:06:56.856539149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f88cb457-rs5wl,Uid:785cc68f-05e2-4db5-9c76-e2a7381de538,Namespace:calico-system,Attempt:0,}"
Jan 28 02:06:56.866814 containerd[1627]: time="2026-01-28T02:06:56.865590499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-w9b4h,Uid:91d0c55e-7a98-404e-a4c2-3c6f8edba99c,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 02:06:56.888497 containerd[1627]: time="2026-01-28T02:06:56.888130805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-v4czv,Uid:9a4d006c-455e-43f1-8c29-a9bee0e4e963,Namespace:calico-apiserver,Attempt:0,}"
Jan 28 02:06:56.890799 containerd[1627]: time="2026-01-28T02:06:56.889726897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95698587-q576f,Uid:1159a5fb-a1ac-4f76-832d-c5be127c9405,Namespace:calico-system,Attempt:0,}"
Jan 28 02:06:56.890799 containerd[1627]: time="2026-01-28T02:06:56.890100005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clcr9,Uid:8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4,Namespace:kube-system,Attempt:0,}"
Jan 28 02:06:56.890799 containerd[1627]: time="2026-01-28T02:06:56.890313763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qjnq7,Uid:8b2fed1f-0989-4c8f-98b5-dfc06958e7db,Namespace:calico-system,Attempt:0,}"
Jan 28 02:06:57.357603 containerd[1627]: time="2026-01-28T02:06:57.357212415Z" level=error msg="Failed to destroy network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 28 02:06:57.381512 containerd[1627]: time="2026-01-28T02:06:57.381449710Z" level=error msg="Failed to destroy network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
\"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.387089 containerd[1627]: time="2026-01-28T02:06:57.386288765Z" level=error msg="Failed to destroy network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.388573 containerd[1627]: time="2026-01-28T02:06:57.387540369Z" level=error msg="encountered an error cleaning up failed sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.392189 containerd[1627]: time="2026-01-28T02:06:57.392136182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95698587-q576f,Uid:1159a5fb-a1ac-4f76-832d-c5be127c9405,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.395867 containerd[1627]: time="2026-01-28T02:06:57.395821852Z" level=error msg="Failed to destroy network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.398941 containerd[1627]: time="2026-01-28T02:06:57.398902713Z" level=error msg="encountered an error cleaning up failed sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.399686 containerd[1627]: time="2026-01-28T02:06:57.399650574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clcr9,Uid:8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.399913 containerd[1627]: time="2026-01-28T02:06:57.399864474Z" level=error msg="encountered an error cleaning up failed sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.400119 
containerd[1627]: time="2026-01-28T02:06:57.400048842Z" level=error msg="Failed to destroy network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.400705 kubelet[2843]: E0128 02:06:57.400621 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.400941 kubelet[2843]: E0128 02:06:57.400784 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95698587-q576f" Jan 28 02:06:57.400941 kubelet[2843]: E0128 02:06:57.400835 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c95698587-q576f" Jan 28 02:06:57.401527 kubelet[2843]: E0128 02:06:57.400957 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:06:57.401527 kubelet[2843]: E0128 02:06:57.401241 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.401527 kubelet[2843]: E0128 02:06:57.401301 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-clcr9" Jan 28 02:06:57.402388 kubelet[2843]: E0128 02:06:57.401328 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-clcr9" Jan 28 02:06:57.402388 kubelet[2843]: E0128 02:06:57.401368 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-clcr9_kube-system(8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-clcr9_kube-system(8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-clcr9" podUID="8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4" Jan 28 02:06:57.402804 containerd[1627]: time="2026-01-28T02:06:57.400063555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qjnq7,Uid:8b2fed1f-0989-4c8f-98b5-dfc06958e7db,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.402804 containerd[1627]: time="2026-01-28T02:06:57.400098561Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-v4czv,Uid:9a4d006c-455e-43f1-8c29-a9bee0e4e963,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.404621 kubelet[2843]: E0128 02:06:57.404181 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.404621 kubelet[2843]: E0128 02:06:57.404232 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qjnq7" Jan 28 02:06:57.404621 kubelet[2843]: E0128 02:06:57.404256 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qjnq7" Jan 28 02:06:57.405144 kubelet[2843]: E0128 02:06:57.404293 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:06:57.405144 kubelet[2843]: E0128 02:06:57.404353 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.405144 kubelet[2843]: E0128 02:06:57.404383 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" Jan 28 02:06:57.405333 kubelet[2843]: E0128 02:06:57.404428 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" Jan 28 02:06:57.405333 kubelet[2843]: E0128 02:06:57.404487 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:06:57.406325 containerd[1627]: time="2026-01-28T02:06:57.406012103Z" level=error msg="encountered an error cleaning 
up failed sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.406325 containerd[1627]: time="2026-01-28T02:06:57.406110806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-w9b4h,Uid:91d0c55e-7a98-404e-a4c2-3c6f8edba99c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.407493 kubelet[2843]: E0128 02:06:57.407453 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.407603 kubelet[2843]: E0128 02:06:57.407502 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" Jan 28 02:06:57.407603 kubelet[2843]: E0128 02:06:57.407530 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" Jan 28 02:06:57.408567 kubelet[2843]: E0128 02:06:57.407695 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:06:57.425082 containerd[1627]: time="2026-01-28T02:06:57.424108216Z" level=error msg="Failed to destroy network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 28 02:06:57.429540 containerd[1627]: time="2026-01-28T02:06:57.429130359Z" level=error msg="encountered an error cleaning up failed sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.429540 containerd[1627]: time="2026-01-28T02:06:57.429218865Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drn4k,Uid:9d134562-e8ff-432f-bfe2-7f69c1332017,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.432380 kubelet[2843]: E0128 02:06:57.430740 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.432380 kubelet[2843]: E0128 02:06:57.430838 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-drn4k" Jan 28 02:06:57.432380 kubelet[2843]: E0128 02:06:57.430869 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-drn4k" Jan 28 02:06:57.432597 kubelet[2843]: E0128 02:06:57.430931 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-drn4k_kube-system(9d134562-e8ff-432f-bfe2-7f69c1332017)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-drn4k_kube-system(9d134562-e8ff-432f-bfe2-7f69c1332017)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-drn4k" podUID="9d134562-e8ff-432f-bfe2-7f69c1332017" Jan 28 02:06:57.436389 containerd[1627]: time="2026-01-28T02:06:57.436333644Z" level=error msg="Failed to destroy network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.446261 containerd[1627]: time="2026-01-28T02:06:57.446068330Z" level=error msg="encountered an error cleaning up failed sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.446601 containerd[1627]: time="2026-01-28T02:06:57.446475706Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8f88cb457-rs5wl,Uid:785cc68f-05e2-4db5-9c76-e2a7381de538,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.447516 kubelet[2843]: E0128 02:06:57.447039 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.447516 kubelet[2843]: E0128 02:06:57.447371 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8f88cb457-rs5wl" Jan 28 02:06:57.447516 kubelet[2843]: E0128 02:06:57.447456 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-8f88cb457-rs5wl" Jan 28 02:06:57.448824 kubelet[2843]: E0128 02:06:57.447849 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-8f88cb457-rs5wl_calico-system(785cc68f-05e2-4db5-9c76-e2a7381de538)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-8f88cb457-rs5wl_calico-system(785cc68f-05e2-4db5-9c76-e2a7381de538)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8f88cb457-rs5wl" podUID="785cc68f-05e2-4db5-9c76-e2a7381de538" Jan 28 02:06:57.456750 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5-shm.mount: Deactivated successfully. 
Jan 28 02:06:57.457015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0-shm.mount: Deactivated successfully. Jan 28 02:06:57.468330 containerd[1627]: time="2026-01-28T02:06:57.468271576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqzm,Uid:11c042ea-f3ed-451b-a4e5-0f06212804a3,Namespace:calico-system,Attempt:0,}" Jan 28 02:06:57.578658 containerd[1627]: time="2026-01-28T02:06:57.578516954Z" level=error msg="Failed to destroy network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.581332 containerd[1627]: time="2026-01-28T02:06:57.581040252Z" level=error msg="encountered an error cleaning up failed sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.581332 containerd[1627]: time="2026-01-28T02:06:57.581185425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqzm,Uid:11c042ea-f3ed-451b-a4e5-0f06212804a3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.584304 kubelet[2843]: E0128 02:06:57.581833 2843 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.584304 kubelet[2843]: E0128 02:06:57.581966 2843 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dgqzm" Jan 28 02:06:57.584304 kubelet[2843]: E0128 02:06:57.582012 2843 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dgqzm" Jan 28 02:06:57.584957 kubelet[2843]: E0128 02:06:57.582116 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:06:57.585514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791-shm.mount: Deactivated successfully. Jan 28 02:06:57.684737 kubelet[2843]: I0128 02:06:57.683849 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:06:57.688151 kubelet[2843]: I0128 02:06:57.688118 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:06:57.692494 kubelet[2843]: I0128 02:06:57.692463 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:06:57.693895 containerd[1627]: time="2026-01-28T02:06:57.693302326Z" level=info msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" Jan 28 02:06:57.695176 containerd[1627]: time="2026-01-28T02:06:57.695129951Z" level=info msg="Ensure that sandbox 653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4 in task-service has been cleanup successfully" Jan 28 02:06:57.700857 containerd[1627]: time="2026-01-28T02:06:57.700684103Z" level=info msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" Jan 28 02:06:57.702151 containerd[1627]: time="2026-01-28T02:06:57.701837551Z" level=info msg="Ensure that sandbox 39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467 in task-service has been cleanup successfully" Jan 28 02:06:57.702151 containerd[1627]: time="2026-01-28T02:06:57.701936544Z" level=info msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" Jan 28 02:06:57.702826 containerd[1627]: time="2026-01-28T02:06:57.702179767Z" level=info msg="Ensure that sandbox 478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0 in task-service has been cleanup successfully" Jan 28 02:06:57.705996 kubelet[2843]: I0128 02:06:57.705870 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:06:57.710577 containerd[1627]: time="2026-01-28T02:06:57.707707931Z" level=info msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" Jan 28 02:06:57.712609 kubelet[2843]: I0128 02:06:57.712579 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:06:57.724145 containerd[1627]: time="2026-01-28T02:06:57.722430268Z" level=info msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" Jan 28 02:06:57.724145 containerd[1627]: time="2026-01-28T02:06:57.722822876Z" level=info msg="Ensure that sandbox dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd in 
task-service has been cleanup successfully" Jan 28 02:06:57.730821 containerd[1627]: time="2026-01-28T02:06:57.730786933Z" level=info msg="Ensure that sandbox 26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d in task-service has been cleanup successfully" Jan 28 02:06:57.731848 kubelet[2843]: I0128 02:06:57.731815 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:06:57.734531 containerd[1627]: time="2026-01-28T02:06:57.734501740Z" level=info msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" Jan 28 02:06:57.735537 containerd[1627]: time="2026-01-28T02:06:57.735501449Z" level=info msg="Ensure that sandbox 4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6 in task-service has been cleanup successfully" Jan 28 02:06:57.737976 kubelet[2843]: I0128 02:06:57.737943 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:06:57.740609 containerd[1627]: time="2026-01-28T02:06:57.740383289Z" level=info msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" Jan 28 02:06:57.742415 kubelet[2843]: I0128 02:06:57.742390 2843 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:06:57.743944 containerd[1627]: time="2026-01-28T02:06:57.742018846Z" level=info msg="Ensure that sandbox b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791 in task-service has been cleanup successfully" Jan 28 02:06:57.745354 containerd[1627]: time="2026-01-28T02:06:57.745314815Z" level=info msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" Jan 28 02:06:57.747126 containerd[1627]: time="2026-01-28T02:06:57.746736871Z" level=info msg="Ensure that sandbox ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5 in task-service has been cleanup successfully" Jan 28 02:06:57.900788 containerd[1627]: time="2026-01-28T02:06:57.900602096Z" level=error msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" failed" error="failed to destroy network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.901684 kubelet[2843]: E0128 02:06:57.901408 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:06:57.901684 kubelet[2843]: E0128 02:06:57.901530 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467"} Jan 28 02:06:57.901684 kubelet[2843]: E0128 02:06:57.901670 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.902010 kubelet[2843]: E0128 02:06:57.901710 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-clcr9" podUID="8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4" Jan 28 02:06:57.911402 containerd[1627]: time="2026-01-28T02:06:57.910753742Z" level=error msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" failed" error="failed to destroy network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.911402 containerd[1627]: time="2026-01-28T02:06:57.911386410Z" level=error msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" failed" error="failed to destroy network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.911727 kubelet[2843]: E0128 02:06:57.911050 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:06:57.911727 kubelet[2843]: E0128 02:06:57.911128 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4"} Jan 28 02:06:57.911727 kubelet[2843]: E0128 02:06:57.911176 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.911727 kubelet[2843]: E0128 02:06:57.911208 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b2fed1f-0989-4c8f-98b5-dfc06958e7db\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:06:57.912067 kubelet[2843]: E0128 02:06:57.911586 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:06:57.912067 kubelet[2843]: E0128 02:06:57.911634 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0"} Jan 28 02:06:57.912067 kubelet[2843]: E0128 02:06:57.911678 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9d134562-e8ff-432f-bfe2-7f69c1332017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.912067 kubelet[2843]: E0128 02:06:57.911704 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9d134562-e8ff-432f-bfe2-7f69c1332017\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-drn4k" podUID="9d134562-e8ff-432f-bfe2-7f69c1332017" Jan 28 02:06:57.931358 containerd[1627]: time="2026-01-28T02:06:57.930381864Z" level=error msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" failed" error="failed to destroy network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.931358 containerd[1627]: time="2026-01-28T02:06:57.931123819Z" level=error msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" failed" error="failed to destroy network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.931808 kubelet[2843]: E0128 02:06:57.931183 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:06:57.931808 kubelet[2843]: E0128 02:06:57.931278 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791"} Jan 28 02:06:57.931808 kubelet[2843]: E0128 02:06:57.931443 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11c042ea-f3ed-451b-a4e5-0f06212804a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.931808 kubelet[2843]: E0128 02:06:57.931608 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11c042ea-f3ed-451b-a4e5-0f06212804a3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:06:57.932170 kubelet[2843]: E0128 02:06:57.931745 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:06:57.932170 kubelet[2843]: E0128 02:06:57.931943 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd"} Jan 28 02:06:57.932170 kubelet[2843]: E0128 02:06:57.932012 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a4d006c-455e-43f1-8c29-a9bee0e4e963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.932350 kubelet[2843]: E0128 02:06:57.932039 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a4d006c-455e-43f1-8c29-a9bee0e4e963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:06:57.951454 containerd[1627]: time="2026-01-28T02:06:57.947192744Z" level=error msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" failed" error="failed to destroy network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.951685 kubelet[2843]: E0128 02:06:57.949005 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:06:57.951685 kubelet[2843]: E0128 02:06:57.949200 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d"} Jan 28 02:06:57.951685 kubelet[2843]: E0128 02:06:57.949284 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1159a5fb-a1ac-4f76-832d-c5be127c9405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.951685 kubelet[2843]: E0128 02:06:57.949394 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1159a5fb-a1ac-4f76-832d-c5be127c9405\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:06:57.966991 containerd[1627]: time="2026-01-28T02:06:57.966813941Z" level=error msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" failed" error="failed to destroy network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.967442 kubelet[2843]: E0128 02:06:57.967185 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:06:57.967442 kubelet[2843]: E0128 02:06:57.967254 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5"} Jan 28 02:06:57.967442 kubelet[2843]: E0128 02:06:57.967300 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"785cc68f-05e2-4db5-9c76-e2a7381de538\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.967442 kubelet[2843]: E0128 02:06:57.967332 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"785cc68f-05e2-4db5-9c76-e2a7381de538\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-8f88cb457-rs5wl" podUID="785cc68f-05e2-4db5-9c76-e2a7381de538" Jan 28 02:06:57.970287 containerd[1627]: time="2026-01-28T02:06:57.970243338Z" level=error msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" failed" error="failed to destroy network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 28 02:06:57.970740 kubelet[2843]: E0128 02:06:57.970444 2843 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:06:57.970740 kubelet[2843]: E0128 02:06:57.970519 2843 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6"} Jan 28 02:06:57.970740 kubelet[2843]: E0128 02:06:57.970550 2843 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91d0c55e-7a98-404e-a4c2-3c6f8edba99c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 28 02:06:57.970740 kubelet[2843]: E0128 02:06:57.970613 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"91d0c55e-7a98-404e-a4c2-3c6f8edba99c\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:07:06.514119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1118075495.mount: Deactivated successfully. Jan 28 02:07:06.712749 containerd[1627]: time="2026-01-28T02:07:06.711534196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 28 02:07:06.734897 containerd[1627]: time="2026-01-28T02:07:06.732771494Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:07:06.803391 containerd[1627]: time="2026-01-28T02:07:06.802776316Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:07:06.806069 containerd[1627]: time="2026-01-28T02:07:06.805420918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 28 02:07:06.810097 containerd[1627]: time="2026-01-28T02:07:06.809112587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.106633258s" Jan 28 02:07:06.810097 containerd[1627]: time="2026-01-28T02:07:06.809168354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 28 02:07:06.880932 containerd[1627]: time="2026-01-28T02:07:06.880871367Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 28 02:07:06.952338 containerd[1627]: time="2026-01-28T02:07:06.952275212Z" level=info msg="CreateContainer within sandbox \"b829d06423181e0a357ec365dadafc36277160a6bc70d8bd36258dcaf326043f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9ffd77077e3a2d99fef42bfe4e1d18e57add42ba677aac6c41ce36732cc1519e\"" Jan 28 02:07:06.954406 containerd[1627]: time="2026-01-28T02:07:06.954269120Z" level=info msg="StartContainer for \"9ffd77077e3a2d99fef42bfe4e1d18e57add42ba677aac6c41ce36732cc1519e\"" Jan 28 02:07:07.243452 containerd[1627]: time="2026-01-28T02:07:07.242609915Z" level=info msg="StartContainer for \"9ffd77077e3a2d99fef42bfe4e1d18e57add42ba677aac6c41ce36732cc1519e\" returns successfully" Jan 28 02:07:07.416764 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 28 02:07:07.417688 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 28 02:07:07.681016 containerd[1627]: time="2026-01-28T02:07:07.680870420Z" level=info msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" Jan 28 02:07:07.852135 systemd-resolved[1512]: Under memory pressure, flushing caches. Jan 28 02:07:07.862323 systemd-journald[1180]: Under memory pressure, flushing caches. Jan 28 02:07:07.852330 systemd-resolved[1512]: Flushed all caches. Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.958 [INFO][4010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.962 [INFO][4010] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" iface="eth0" netns="/var/run/netns/cni-8356f383-aa76-9400-dfed-96f3cab4e1ef" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.963 [INFO][4010] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" iface="eth0" netns="/var/run/netns/cni-8356f383-aa76-9400-dfed-96f3cab4e1ef" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.964 [INFO][4010] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" iface="eth0" netns="/var/run/netns/cni-8356f383-aa76-9400-dfed-96f3cab4e1ef" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.964 [INFO][4010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:07.965 [INFO][4010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.260 [INFO][4019] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.262 [INFO][4019] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.263 [INFO][4019] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.278 [WARNING][4019] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.278 [INFO][4019] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.280 [INFO][4019] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 28 02:07:08.285527 containerd[1627]: 2026-01-28 02:07:08.282 [INFO][4010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:08.289802 containerd[1627]: time="2026-01-28T02:07:08.285940821Z" level=info msg="TearDown network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" successfully" Jan 28 02:07:08.289802 containerd[1627]: time="2026-01-28T02:07:08.286005865Z" level=info msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" returns successfully" Jan 28 02:07:08.296077 systemd[1]: run-netns-cni\x2d8356f383\x2daa76\x2d9400\x2ddfed\x2d96f3cab4e1ef.mount: Deactivated successfully. Jan 28 02:07:08.408433 kubelet[2843]: I0128 02:07:08.407917 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-backend-key-pair\") pod \"785cc68f-05e2-4db5-9c76-e2a7381de538\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " Jan 28 02:07:08.413849 kubelet[2843]: I0128 02:07:08.413201 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-ca-bundle\") pod \"785cc68f-05e2-4db5-9c76-e2a7381de538\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " Jan 28 02:07:08.413849 kubelet[2843]: I0128 02:07:08.413274 2843 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsks6\" (UniqueName: \"kubernetes.io/projected/785cc68f-05e2-4db5-9c76-e2a7381de538-kube-api-access-dsks6\") pod \"785cc68f-05e2-4db5-9c76-e2a7381de538\" (UID: \"785cc68f-05e2-4db5-9c76-e2a7381de538\") " Jan 28 02:07:08.429698 kubelet[2843]: I0128 02:07:08.427449 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "785cc68f-05e2-4db5-9c76-e2a7381de538" (UID: "785cc68f-05e2-4db5-9c76-e2a7381de538"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 28 02:07:08.439812 kubelet[2843]: I0128 02:07:08.439762 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "785cc68f-05e2-4db5-9c76-e2a7381de538" (UID: "785cc68f-05e2-4db5-9c76-e2a7381de538"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 28 02:07:08.440590 kubelet[2843]: I0128 02:07:08.439778 2843 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/785cc68f-05e2-4db5-9c76-e2a7381de538-kube-api-access-dsks6" (OuterVolumeSpecName: "kube-api-access-dsks6") pod "785cc68f-05e2-4db5-9c76-e2a7381de538" (UID: "785cc68f-05e2-4db5-9c76-e2a7381de538"). InnerVolumeSpecName "kube-api-access-dsks6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 28 02:07:08.440494 systemd[1]: var-lib-kubelet-pods-785cc68f\x2d05e2\x2d4db5\x2d9c76\x2de2a7381de538-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddsks6.mount: Deactivated successfully. 
Jan 28 02:07:08.441217 systemd[1]: var-lib-kubelet-pods-785cc68f\x2d05e2\x2d4db5\x2d9c76\x2de2a7381de538-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 28 02:07:08.514197 kubelet[2843]: I0128 02:07:08.514100 2843 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dsks6\" (UniqueName: \"kubernetes.io/projected/785cc68f-05e2-4db5-9c76-e2a7381de538-kube-api-access-dsks6\") on node \"srv-rjxd2.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:07:08.514197 kubelet[2843]: I0128 02:07:08.514144 2843 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-backend-key-pair\") on node \"srv-rjxd2.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:07:08.514197 kubelet[2843]: I0128 02:07:08.514165 2843 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/785cc68f-05e2-4db5-9c76-e2a7381de538-whisker-ca-bundle\") on node \"srv-rjxd2.gb1.brightbox.com\" DevicePath \"\"" Jan 28 02:07:08.997585 kubelet[2843]: I0128 02:07:08.988975 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ghssf" podStartSLOduration=3.103341748 podStartE2EDuration="25.958376675s" podCreationTimestamp="2026-01-28 02:06:43 +0000 UTC" firstStartedPulling="2026-01-28 02:06:43.957140952 +0000 UTC m=+25.685041575" lastFinishedPulling="2026-01-28 02:07:06.812175885 +0000 UTC m=+48.540076502" observedRunningTime="2026-01-28 02:07:07.986342291 +0000 UTC m=+49.714242919" watchObservedRunningTime="2026-01-28 02:07:08.958376675 +0000 UTC m=+50.686277290" Jan 28 02:07:09.118315 kubelet[2843]: I0128 02:07:09.118118 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n6cj\" (UniqueName: \"kubernetes.io/projected/39b2c588-693e-480c-a4f1-3808ca50200d-kube-api-access-7n6cj\") pod \"whisker-77894fccbf-hf9dn\" (UID: \"39b2c588-693e-480c-a4f1-3808ca50200d\") " pod="calico-system/whisker-77894fccbf-hf9dn" Jan 28 02:07:09.118315 kubelet[2843]: I0128 02:07:09.118182 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/39b2c588-693e-480c-a4f1-3808ca50200d-whisker-ca-bundle\") pod \"whisker-77894fccbf-hf9dn\" (UID: \"39b2c588-693e-480c-a4f1-3808ca50200d\") " pod="calico-system/whisker-77894fccbf-hf9dn" Jan 28 02:07:09.118315 kubelet[2843]: I0128 02:07:09.118214 2843 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/39b2c588-693e-480c-a4f1-3808ca50200d-whisker-backend-key-pair\") pod \"whisker-77894fccbf-hf9dn\" (UID: \"39b2c588-693e-480c-a4f1-3808ca50200d\") " pod="calico-system/whisker-77894fccbf-hf9dn" Jan 28 02:07:09.138390 systemd[1]: run-containerd-runc-k8s.io-9ffd77077e3a2d99fef42bfe4e1d18e57add42ba677aac6c41ce36732cc1519e-runc.cgaGGi.mount: Deactivated successfully. 
Jan 28 02:07:09.414156 containerd[1627]: time="2026-01-28T02:07:09.413215436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77894fccbf-hf9dn,Uid:39b2c588-693e-480c-a4f1-3808ca50200d,Namespace:calico-system,Attempt:0,}" Jan 28 02:07:09.458706 containerd[1627]: time="2026-01-28T02:07:09.458275139Z" level=info msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" Jan 28 02:07:09.462598 containerd[1627]: time="2026-01-28T02:07:09.462530561Z" level=info msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.784 [INFO][4131] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.786 [INFO][4131] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" iface="eth0" netns="/var/run/netns/cni-b9c1a7ae-51a7-31ea-598c-ed4a8c41b8b2" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.787 [INFO][4131] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" iface="eth0" netns="/var/run/netns/cni-b9c1a7ae-51a7-31ea-598c-ed4a8c41b8b2" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.787 [INFO][4131] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" iface="eth0" netns="/var/run/netns/cni-b9c1a7ae-51a7-31ea-598c-ed4a8c41b8b2" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.788 [INFO][4131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.788 [INFO][4131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.922 [INFO][4182] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.922 [INFO][4182] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.967 [INFO][4182] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.984 [WARNING][4182] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.984 [INFO][4182] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.988 [INFO][4182] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:10.009082 containerd[1627]: 2026-01-28 02:07:09.995 [INFO][4131] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:10.033120 containerd[1627]: time="2026-01-28T02:07:10.032705093Z" level=info msg="TearDown network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" successfully" Jan 28 02:07:10.033473 containerd[1627]: time="2026-01-28T02:07:10.033434116Z" level=info msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" returns successfully" Jan 28 02:07:10.035184 containerd[1627]: time="2026-01-28T02:07:10.035155159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95698587-q576f,Uid:1159a5fb-a1ac-4f76-832d-c5be127c9405,Namespace:calico-system,Attempt:1,}" Jan 28 02:07:10.060256 systemd-networkd[1258]: cali9540b3b1c5c: Link UP Jan 28 02:07:10.062248 systemd-networkd[1258]: cali9540b3b1c5c: Gained carrier Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.802 [INFO][4137] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.803 [INFO][4137] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" iface="eth0" netns="/var/run/netns/cni-5934a2db-d43c-17a1-9f82-215b574211a3" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.814 [INFO][4137] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" iface="eth0" netns="/var/run/netns/cni-5934a2db-d43c-17a1-9f82-215b574211a3" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.815 [INFO][4137] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" iface="eth0" netns="/var/run/netns/cni-5934a2db-d43c-17a1-9f82-215b574211a3" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.815 [INFO][4137] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.816 [INFO][4137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.963 [INFO][4187] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.964 [INFO][4187] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:09.988 [INFO][4187] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:10.022 [WARNING][4187] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:10.024 [INFO][4187] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:10.041 [INFO][4187] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:10.118284 containerd[1627]: 2026-01-28 02:07:10.078 [INFO][4137] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:10.127913 containerd[1627]: time="2026-01-28T02:07:10.119708711Z" level=info msg="TearDown network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" successfully" Jan 28 02:07:10.127913 containerd[1627]: time="2026-01-28T02:07:10.120980083Z" level=info msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" returns successfully" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.669 [INFO][4102] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.729 [INFO][4102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0 whisker-77894fccbf- calico-system 39b2c588-693e-480c-a4f1-3808ca50200d 905 0 2026-01-28 02:07:09 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77894fccbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com whisker-77894fccbf-hf9dn eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9540b3b1c5c [] [] }} ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.729 [INFO][4102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.889 [INFO][4175] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" HandleID="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.891 [INFO][4175] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" HandleID="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f050), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"whisker-77894fccbf-hf9dn", "timestamp":"2026-01-28 02:07:09.889941023 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.891 [INFO][4175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.891 [INFO][4175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.891 [INFO][4175] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.910 [INFO][4175] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.929 [INFO][4175] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.939 [INFO][4175] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.943 [INFO][4175] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.948 [INFO][4175] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.948 [INFO][4175] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.951 [INFO][4175] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47 Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.957 [INFO][4175] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.967 [INFO][4175] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.1/26] block=192.168.115.0/26 handle="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.967 [INFO][4175] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.1/26] handle="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.967 [INFO][4175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
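The whisker pod's ADD above spells out the allocation order: acquire the host-wide IPAM lock, look up the host's block affinities, try the affine block 192.168.115.0/26, load it, claim one ordinal, and write the block back. A toy sketch of that claim step under a single lock, assuming an in-memory block rather than Calico's real datastore-backed one:

```go
package main

import (
	"fmt"
	"net"
	"sync"
)

// hostLock models the "host-wide IPAM lock" acquired before touching blocks.
var hostLock sync.Mutex

// block is a toy stand-in for a Calico IPAM block: a /26 plus a set of
// allocated ordinals. The real block also tracks handles and attributes.
type block struct {
	cidr *net.IPNet
	used map[int]bool
}

// assignFromBlock claims the first free ordinal, mirroring the logged
// "Attempting to assign 1 addresses from block" -> "Writing block" sequence.
func assignFromBlock(b *block) (net.IP, bool) {
	hostLock.Lock()
	defer hostLock.Unlock()
	ones, bits := b.cidr.Mask.Size()
	for ord := 1; ord < 1<<(bits-ones); ord++ { // skip .0, the network address
		if !b.used[ord] {
			b.used[ord] = true // the real write is a compare-and-swap on the block
			ip := make(net.IP, 4)
			copy(ip, b.cidr.IP.To4())
			ip[3] += byte(ord)
			return ip, true
		}
	}
	return nil, false
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.115.0/26")
	b := &block{cidr: cidr, used: map[int]bool{}}
	ip, _ := assignFromBlock(b)
	fmt.Println(ip) // 192.168.115.1, matching the first claim in the log
}
```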
Jan 28 02:07:10.138024 containerd[1627]: 2026-01-28 02:07:09.967 [INFO][4175] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.1/26] IPv6=[] ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" HandleID="k8s-pod-network.aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:09.979 [INFO][4102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0", GenerateName:"whisker-77894fccbf-", Namespace:"calico-system", SelfLink:"", UID:"39b2c588-693e-480c-a4f1-3808ca50200d", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77894fccbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"whisker-77894fccbf-hf9dn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9540b3b1c5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:09.980 [INFO][4102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.1/32] ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:09.981 [INFO][4102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9540b3b1c5c ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:10.065 [INFO][4102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:10.065 [INFO][4102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" 
Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0", GenerateName:"whisker-77894fccbf-", Namespace:"calico-system", SelfLink:"", UID:"39b2c588-693e-480c-a4f1-3808ca50200d", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 7, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77894fccbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47", Pod:"whisker-77894fccbf-hf9dn", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.115.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9540b3b1c5c", MAC:"a6:b4:4a:53:37:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:10.148117 containerd[1627]: 2026-01-28 02:07:10.095 [INFO][4102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47" Namespace="calico-system" Pod="whisker-77894fccbf-hf9dn" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--77894fccbf--hf9dn-eth0" Jan 28 02:07:10.148117 containerd[1627]: time="2026-01-28T02:07:10.136603755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-w9b4h,Uid:91d0c55e-7a98-404e-a4c2-3c6f8edba99c,Namespace:calico-apiserver,Attempt:1,}" Jan 28 02:07:10.145331 systemd[1]: run-netns-cni\x2db9c1a7ae\x2d51a7\x2d31ea\x2d598c\x2ded4a8c41b8b2.mount: Deactivated successfully. Jan 28 02:07:10.147370 systemd[1]: run-netns-cni\x2d5934a2db\x2dd43c\x2d17a1\x2d9f82\x2d215b574211a3.mount: Deactivated successfully. Jan 28 02:07:10.339852 containerd[1627]: time="2026-01-28T02:07:10.338666663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:10.339852 containerd[1627]: time="2026-01-28T02:07:10.338824338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:10.339852 containerd[1627]: time="2026-01-28T02:07:10.338850161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:10.339852 containerd[1627]: time="2026-01-28T02:07:10.339114547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:10.471056 containerd[1627]: time="2026-01-28T02:07:10.463040516Z" level=info msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" Jan 28 02:07:10.473767 containerd[1627]: time="2026-01-28T02:07:10.473136379Z" level=info msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" Jan 28 02:07:10.499775 kubelet[2843]: I0128 02:07:10.499705 2843 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="785cc68f-05e2-4db5-9c76-e2a7381de538" path="/var/lib/kubelet/pods/785cc68f-05e2-4db5-9c76-e2a7381de538/volumes" Jan 28 02:07:10.930016 containerd[1627]: time="2026-01-28T02:07:10.929476020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77894fccbf-hf9dn,Uid:39b2c588-693e-480c-a4f1-3808ca50200d,Namespace:calico-system,Attempt:0,} returns sandbox id \"aef5c86c0beae78527ecafd9bb5ff5d242a7b1ba5eaadf67075475cf3e9b5d47\"" Jan 28 02:07:10.953178 containerd[1627]: time="2026-01-28T02:07:10.953043770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:07:11.053043 systemd-networkd[1258]: calide44890f632: Link UP Jan 28 02:07:11.063459 systemd-networkd[1258]: calide44890f632: Gained carrier Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.790 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.792 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" iface="eth0" netns="/var/run/netns/cni-48e4914e-26a9-310d-b168-bb26227bc06f" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.799 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" iface="eth0" netns="/var/run/netns/cni-48e4914e-26a9-310d-b168-bb26227bc06f" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.800 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" iface="eth0" netns="/var/run/netns/cni-48e4914e-26a9-310d-b168-bb26227bc06f" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.801 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.803 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.980 [INFO][4328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.983 [INFO][4328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:10.983 [INFO][4328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
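The dataplane lines just above ("Entered netns, deleting veth" / "Workload's veth was already gone. Nothing to do.") describe entering the sandbox's network namespace and removing the workload-side interface, tolerating its absence. A hedged sketch of that sequence using the vishvananda netlink/netns libraries the Calico Linux dataplane builds on; this is an illustration with simplified error handling, not Calico's actual code, and it needs CAP_SYS_ADMIN to run:

```go
package main

import (
	"fmt"
	"runtime"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

func deleteWorkloadVeth(nsPath, ifName string) error {
	runtime.LockOSThread() // network namespaces are per-OS-thread; pin before switching
	defer runtime.UnlockOSThread()

	origin, err := netns.Get()
	if err != nil {
		return err
	}
	defer netns.Set(origin) // switch back on the way out

	target, err := netns.GetFromPath(nsPath)
	if err != nil {
		return err
	}
	defer target.Close()
	if err := netns.Set(target); err != nil {
		return err
	}

	link, err := netlink.LinkByName(ifName)
	if err != nil {
		// "Workload's veth was already gone. Nothing to do."
		if _, ok := err.(netlink.LinkNotFoundError); ok {
			return nil
		}
		return err
	}
	return netlink.LinkDel(link)
}

func main() {
	fmt.Println(deleteWorkloadVeth("/var/run/netns/cni-48e4914e-26a9-310d-b168-bb26227bc06f", "eth0"))
}
```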
Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:11.035 [WARNING][4328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:11.040 [INFO][4328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:11.057 [INFO][4328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:11.123186 containerd[1627]: 2026-01-28 02:07:11.090 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:11.134882 systemd[1]: run-netns-cni\x2d48e4914e\x2d26a9\x2d310d\x2db168\x2dbb26227bc06f.mount: Deactivated successfully. Jan 28 02:07:11.136132 containerd[1627]: time="2026-01-28T02:07:11.135140163Z" level=info msg="TearDown network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" successfully" Jan 28 02:07:11.136132 containerd[1627]: time="2026-01-28T02:07:11.135180877Z" level=info msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" returns successfully" Jan 28 02:07:11.140372 containerd[1627]: time="2026-01-28T02:07:11.139438147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clcr9,Uid:8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4,Namespace:kube-system,Attempt:1,}" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.250 [INFO][4214] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.320 [INFO][4214] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0 calico-kube-controllers-5c95698587- calico-system 1159a5fb-a1ac-4f76-832d-c5be127c9405 910 0 2026-01-28 02:06:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c95698587 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com calico-kube-controllers-5c95698587-q576f eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calide44890f632 [] [] }} ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.321 [INFO][4214] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" 
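The `run-netns-cni\x2d...mount: Deactivated successfully` units above show systemd's unit-name escaping: "/" in a path becomes "-", and literal "-" (and other special bytes) become `\xNN` hex escapes. A small stdlib-only decoder for reading these unit names back into paths; it is a simplified sketch, not a full systemd-escape(1) implementation:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's mount-unit naming: `\xNN` escapes are
// decoded to bytes and remaining '-' characters separate path elements.
func unescapeUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	for i := 0; i < len(name); i++ {
		switch {
		case strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name):
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 3
				continue
			}
			b.WriteByte(name[i])
		case name[i] == '-':
			b.WriteByte('/') // a plain '-' in a unit name encodes a path separator
		default:
			b.WriteByte(name[i])
		}
	}
	return "/" + b.String()
}

func main() {
	// Decodes to /run/netns/cni-48e4914e-26a9-310d-b168-bb26227bc06f
	fmt.Println(unescapeUnit(`run-netns-cni\x2d48e4914e\x2d26a9\x2d310d\x2db168\x2dbb26227bc06f.mount`))
}
```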
Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.763 [INFO][4261] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" HandleID="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.763 [INFO][4261] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" HandleID="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325380), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"calico-kube-controllers-5c95698587-q576f", "timestamp":"2026-01-28 02:07:10.763267167 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.769 [INFO][4261] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.770 [INFO][4261] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.770 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.828 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.855 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.879 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.897 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.926 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.927 [INFO][4261] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.935 [INFO][4261] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285 Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.962 [INFO][4261] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 
2026-01-28 02:07:10.980 [INFO][4261] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.2/26] block=192.168.115.0/26 handle="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.981 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.2/26] handle="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.984 [INFO][4261] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:11.158982 containerd[1627]: 2026-01-28 02:07:10.984 [INFO][4261] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.2/26] IPv6=[] ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" HandleID="k8s-pod-network.cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:10.997 [INFO][4214] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0", GenerateName:"calico-kube-controllers-5c95698587-", Namespace:"calico-system", SelfLink:"", UID:"1159a5fb-a1ac-4f76-832d-c5be127c9405", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95698587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5c95698587-q576f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide44890f632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:10.997 [INFO][4214] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.2/32] ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:10.997 [INFO][4214] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to calide44890f632 ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:11.070 [INFO][4214] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:11.075 [INFO][4214] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0", GenerateName:"calico-kube-controllers-5c95698587-", Namespace:"calico-system", SelfLink:"", UID:"1159a5fb-a1ac-4f76-832d-c5be127c9405", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95698587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285", Pod:"calico-kube-controllers-5c95698587-q576f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide44890f632", MAC:"1e:53:15:00:a9:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:11.164994 containerd[1627]: 2026-01-28 02:07:11.132 [INFO][4214] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285" Namespace="calico-system" Pod="calico-kube-controllers-5c95698587-q576f" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:11.301746 systemd-networkd[1258]: calid639c675f2c: Link UP Jan 28 02:07:11.303859 systemd-networkd[1258]: calid639c675f2c: Gained carrier Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.397 [INFO][4228] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.516 [INFO][4228] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0 calico-apiserver-66dfb7f7f9- calico-apiserver 91d0c55e-7a98-404e-a4c2-3c6f8edba99c 911 0 2026-01-28 02:06:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66dfb7f7f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com calico-apiserver-66dfb7f7f9-w9b4h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid639c675f2c [] [] }} ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.516 [INFO][4228] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.819 [INFO][4302] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" HandleID="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.819 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" HandleID="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001237c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"calico-apiserver-66dfb7f7f9-w9b4h", "timestamp":"2026-01-28 02:07:10.819235047 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:10.820 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.058 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.064 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.115 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.161 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.184 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.192 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.199 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.200 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.204 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62 Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.245 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.262 [INFO][4302] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.3/26] block=192.168.115.0/26 handle="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.262 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.3/26] handle="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.263 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
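Each endpoint above gets a stable host-side interface name (cali9540b3b1c5c, calide44890f632, calid639c675f2c): a fixed "cali" prefix plus eleven hash characters, fitting the kernel's 15-byte IFNAMSIZ limit. The sketch below shows the general prefix-plus-truncated-hash technique; the exact input Calico hashes is an implementation detail not visible in this log, so the workload ID used here is an assumption:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a stable interface name from a workload identity:
// a fixed prefix plus a truncated hash, capped at the kernel's 15-byte
// IFNAMSIZ limit. Illustrative only; not Calico's exact algorithm.
func hostVethName(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	const prefix = "cali"
	return prefix + hex.EncodeToString(sum[:])[:15-len(prefix)]
}

func main() {
	// Same workload identity -> same name on every CNI ADD, which is what
	// lets systemd-networkd keep tracking a link like cali9540b3b1c5c.
	fmt.Println(hostVethName("calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h/eth0"))
}
```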
Jan 28 02:07:11.350359 containerd[1627]: 2026-01-28 02:07:11.263 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.3/26] IPv6=[] ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" HandleID="k8s-pod-network.05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.283 [INFO][4228] cni-plugin/k8s.go 418: Populated endpoint ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d0c55e-7a98-404e-a4c2-3c6f8edba99c", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66dfb7f7f9-w9b4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639c675f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.288 [INFO][4228] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.3/32] ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.288 [INFO][4228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid639c675f2c ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.307 [INFO][4228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.313 
[INFO][4228] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d0c55e-7a98-404e-a4c2-3c6f8edba99c", ResourceVersion:"911", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62", Pod:"calico-apiserver-66dfb7f7f9-w9b4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639c675f2c", MAC:"6a:69:53:0c:77:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:11.352087 containerd[1627]: 2026-01-28 02:07:11.345 [INFO][4228] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-w9b4h" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:11.386760 kernel: bpftool[4411]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.045 [INFO][4296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.052 [INFO][4296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" iface="eth0" netns="/var/run/netns/cni-a19e9554-645d-600d-4d00-74c335f4a42f" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.060 [INFO][4296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" iface="eth0" netns="/var/run/netns/cni-a19e9554-645d-600d-4d00-74c335f4a42f" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.061 [INFO][4296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" iface="eth0" netns="/var/run/netns/cni-a19e9554-645d-600d-4d00-74c335f4a42f" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.061 [INFO][4296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.061 [INFO][4296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.301 [INFO][4364] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.306 [INFO][4364] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.306 [INFO][4364] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.362 [WARNING][4364] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.363 [INFO][4364] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.368 [INFO][4364] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:11.410663 containerd[1627]: 2026-01-28 02:07:11.382 [INFO][4296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:11.416250 containerd[1627]: time="2026-01-28T02:07:11.414190137Z" level=info msg="TearDown network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" successfully" Jan 28 02:07:11.416250 containerd[1627]: time="2026-01-28T02:07:11.414233383Z" level=info msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" returns successfully" Jan 28 02:07:11.417444 systemd[1]: run-netns-cni\x2da19e9554\x2d645d\x2d600d\x2d4d00\x2d74c335f4a42f.mount: Deactivated successfully. Jan 28 02:07:11.420280 containerd[1627]: time="2026-01-28T02:07:11.420246065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drn4k,Uid:9d134562-e8ff-432f-bfe2-7f69c1332017,Namespace:kube-system,Attempt:1,}" Jan 28 02:07:11.437788 containerd[1627]: time="2026-01-28T02:07:11.437694967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:11.501606 systemd-networkd[1258]: cali9540b3b1c5c: Gained IPv6LL Jan 28 02:07:11.569685 containerd[1627]: time="2026-01-28T02:07:11.447234060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:11.569685 containerd[1627]: time="2026-01-28T02:07:11.447326006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:11.569685 containerd[1627]: time="2026-01-28T02:07:11.447576024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:11.569685 containerd[1627]: time="2026-01-28T02:07:11.448152225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:11.569685 containerd[1627]: time="2026-01-28T02:07:11.443654446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:07:11.595574 containerd[1627]: time="2026-01-28T02:07:11.444095221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:07:11.595574 containerd[1627]: time="2026-01-28T02:07:11.569343216Z" level=info msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" Jan 28 02:07:11.602181 kubelet[2843]: E0128 02:07:11.601610 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:07:11.603156 kubelet[2843]: E0128 02:07:11.603110 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:07:11.603520 containerd[1627]: time="2026-01-28T02:07:11.569474424Z" level=info msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" Jan 28 02:07:11.621538 kubelet[2843]: E0128 02:07:11.621341 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb1d52d8e569481480e80c2c7a6f1cce,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:11.644597 containerd[1627]: time="2026-01-28T02:07:11.602966128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:11.644597 containerd[1627]: time="2026-01-28T02:07:11.603037451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:11.644597 containerd[1627]: time="2026-01-28T02:07:11.603080279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:11.644597 containerd[1627]: time="2026-01-28T02:07:11.604741754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:11.687245 containerd[1627]: time="2026-01-28T02:07:11.687110314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:07:11.697663 containerd[1627]: time="2026-01-28T02:07:11.696380024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-w9b4h,Uid:91d0c55e-7a98-404e-a4c2-3c6f8edba99c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62\"" Jan 28 02:07:11.963039 containerd[1627]: time="2026-01-28T02:07:11.962328428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c95698587-q576f,Uid:1159a5fb-a1ac-4f76-832d-c5be127c9405,Namespace:calico-system,Attempt:1,} returns sandbox id \"cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285\"" Jan 28 02:07:12.118026 systemd-networkd[1258]: calic2bb499d0e6: Link UP Jan 28 02:07:12.126191 systemd-networkd[1258]: calic2bb499d0e6: Gained carrier Jan 28 02:07:12.155129 containerd[1627]: time="2026-01-28T02:07:12.154951361Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:12.157590 containerd[1627]: time="2026-01-28T02:07:12.157272930Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:07:12.157590 containerd[1627]: time="2026-01-28T02:07:12.157470989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:07:12.161218 kubelet[2843]: E0128 02:07:12.158831 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:07:12.161218 kubelet[2843]: E0128 02:07:12.158894 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:07:12.161364 containerd[1627]: time="2026-01-28T02:07:12.159809081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:07:12.161442 kubelet[2843]: E0128 02:07:12.159183 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.631 [INFO][4379] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0 coredns-668d6bf9bc- kube-system 8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4 918 0 2026-01-28 02:06:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com coredns-668d6bf9bc-clcr9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic2bb499d0e6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.632 [INFO][4379] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 
02:07:11.974 [INFO][4516] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" HandleID="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.975 [INFO][4516] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" HandleID="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcb0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-clcr9", "timestamp":"2026-01-28 02:07:11.974017617 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.975 [INFO][4516] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.975 [INFO][4516] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.975 [INFO][4516] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:11.997 [INFO][4516] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.009 [INFO][4516] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.020 [INFO][4516] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.031 [INFO][4516] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.044 [INFO][4516] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.044 [INFO][4516] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.052 [INFO][4516] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6 Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.061 [INFO][4516] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.075 [INFO][4516] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.4/26] 
block=192.168.115.0/26 handle="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.075 [INFO][4516] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.4/26] handle="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.075 [INFO][4516] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:12.178184 containerd[1627]: 2026-01-28 02:07:12.077 [INFO][4516] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.4/26] IPv6=[] ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" HandleID="k8s-pod-network.27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.094 [INFO][4379] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-clcr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2bb499d0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.095 [INFO][4379] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.4/32] ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.095 [INFO][4379] 
cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2bb499d0e6 ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.144 [INFO][4379] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.146 [INFO][4379] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6", Pod:"coredns-668d6bf9bc-clcr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2bb499d0e6", MAC:"22:af:f7:fd:91:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:12.181971 containerd[1627]: 2026-01-28 02:07:12.164 [INFO][4379] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6" Namespace="kube-system" Pod="coredns-668d6bf9bc-clcr9" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:12.184210 kubelet[2843]: E0128 02:07:12.181498 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:07:12.261798 systemd-networkd[1258]: vxlan.calico: Link UP Jan 28 02:07:12.262115 systemd-networkd[1258]: vxlan.calico: Gained carrier Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:11.995 [INFO][4527] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.010 [INFO][4527] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" iface="eth0" netns="/var/run/netns/cni-4f6a19f4-5503-ff1f-9d59-43aee872d8dd" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.010 [INFO][4527] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" iface="eth0" netns="/var/run/netns/cni-4f6a19f4-5503-ff1f-9d59-43aee872d8dd" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.013 [INFO][4527] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" iface="eth0" netns="/var/run/netns/cni-4f6a19f4-5503-ff1f-9d59-43aee872d8dd" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.013 [INFO][4527] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.013 [INFO][4527] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.270 [INFO][4578] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.272 [INFO][4578] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.272 [INFO][4578] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.309 [WARNING][4578] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.309 [INFO][4578] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.312 [INFO][4578] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:12.340166 containerd[1627]: 2026-01-28 02:07:12.324 [INFO][4527] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:12.343291 containerd[1627]: time="2026-01-28T02:07:12.341056231Z" level=info msg="TearDown network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" successfully" Jan 28 02:07:12.343291 containerd[1627]: time="2026-01-28T02:07:12.342685654Z" level=info msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" returns successfully" Jan 28 02:07:12.347846 systemd[1]: run-netns-cni\x2d4f6a19f4\x2d5503\x2dff1f\x2d9d59\x2d43aee872d8dd.mount: Deactivated successfully. Jan 28 02:07:12.358958 containerd[1627]: time="2026-01-28T02:07:12.358639706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqzm,Uid:11c042ea-f3ed-451b-a4e5-0f06212804a3,Namespace:calico-system,Attempt:1,}" Jan 28 02:07:12.462258 containerd[1627]: time="2026-01-28T02:07:12.461050343Z" level=info msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" Jan 28 02:07:12.478879 containerd[1627]: time="2026-01-28T02:07:12.449302411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:12.478879 containerd[1627]: time="2026-01-28T02:07:12.449409078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:12.478879 containerd[1627]: time="2026-01-28T02:07:12.449439900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:12.478879 containerd[1627]: time="2026-01-28T02:07:12.450675864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.036 [INFO][4523] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.043 [INFO][4523] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" iface="eth0" netns="/var/run/netns/cni-c8a0c74d-eaf0-bb80-37cd-f65f71415ed7" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.043 [INFO][4523] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" iface="eth0" netns="/var/run/netns/cni-c8a0c74d-eaf0-bb80-37cd-f65f71415ed7" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.051 [INFO][4523] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" iface="eth0" netns="/var/run/netns/cni-c8a0c74d-eaf0-bb80-37cd-f65f71415ed7" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.052 [INFO][4523] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.052 [INFO][4523] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.403 [INFO][4584] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.433 [INFO][4584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.539 [INFO][4584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.556 [WARNING][4584] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.556 [INFO][4584] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.561 [INFO][4584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:12.595950 containerd[1627]: 2026-01-28 02:07:12.575 [INFO][4523] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:12.595950 containerd[1627]: time="2026-01-28T02:07:12.594904881Z" level=info msg="TearDown network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" successfully" Jan 28 02:07:12.595950 containerd[1627]: time="2026-01-28T02:07:12.594949628Z" level=info msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" returns successfully" Jan 28 02:07:12.596835 systemd-networkd[1258]: cali26dc98cc3fe: Link UP Jan 28 02:07:12.609753 containerd[1627]: time="2026-01-28T02:07:12.608699176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qjnq7,Uid:8b2fed1f-0989-4c8f-98b5-dfc06958e7db,Namespace:calico-system,Attempt:1,}" Jan 28 02:07:12.610396 systemd-networkd[1258]: cali26dc98cc3fe: Gained carrier Jan 28 02:07:12.617702 containerd[1627]: time="2026-01-28T02:07:12.612223850Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:12.630398 containerd[1627]: time="2026-01-28T02:07:12.629884475Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:07:12.631926 containerd[1627]: time="2026-01-28T02:07:12.631137726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:12.633470 kubelet[2843]: E0128 02:07:12.632804 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:12.633470 kubelet[2843]: E0128 02:07:12.632940 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:12.633470 kubelet[2843]: E0128 02:07:12.633381 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhs92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:12.636698 kubelet[2843]: E0128 02:07:12.634808 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:07:12.640803 containerd[1627]: time="2026-01-28T02:07:12.640748089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.146 [INFO][4541] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0 coredns-668d6bf9bc- kube-system 9d134562-e8ff-432f-bfe2-7f69c1332017 922 0 2026-01-28 02:06:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com coredns-668d6bf9bc-drn4k eth0 coredns [] [] 
[kns.kube-system ksa.kube-system.coredns] cali26dc98cc3fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.149 [INFO][4541] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.360 [INFO][4595] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" HandleID="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.360 [INFO][4595] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" HandleID="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001029a0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-drn4k", "timestamp":"2026-01-28 02:07:12.360030756 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.360 [INFO][4595] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.360 [INFO][4595] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.360 [INFO][4595] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.441 [INFO][4595] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.479 [INFO][4595] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.497 [INFO][4595] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.500 [INFO][4595] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.504 [INFO][4595] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.504 [INFO][4595] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.506 [INFO][4595] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1 Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.521 [INFO][4595] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.539 [INFO][4595] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.5/26] block=192.168.115.0/26 handle="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.539 [INFO][4595] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.5/26] handle="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.539 [INFO][4595] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
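
The ipam/ipam.go steps above are Calico's block-affinity allocator at work: the node srv-rjxd2.gb1.brightbox.com holds an affinity for the /26 block 192.168.115.0/26, so each new pod receives the next free address out of that 64-address block (here .5, after .4 went to coredns-668d6bf9bc-clcr9 just before), and the block is written back to the datastore to claim the IP, all under the host-wide IPAM lock. A minimal Go sketch of that claim step follows; the Block type and its bitmap are invented for illustration and are not Calico's actual data structures.

// ipam_sketch.go: illustrative model of the per-block claim step logged above.
// Assumption: a /26 block is a 64-slot bitmap; Calico's real block document
// also carries handles, attributes, and a compare-and-swap revision, omitted here.
package main

import (
	"fmt"
	"net"
	"sync"
)

// Block models one /26 affinity block owned by a node.
type Block struct {
	cidr *net.IPNet
	used [64]bool   // allocation bitmap; index = offset within the /26
	mu   sync.Mutex // stands in for the host-wide IPAM lock in the log
}

// claimNext mirrors "Attempting to assign 1 addresses from block":
// find a free slot, mark it used, and return the concrete address.
func (b *Block) claimNext() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for i := range b.used {
		if !b.used[i] {
			b.used[i] = true
			ip := make(net.IP, 4)
			copy(ip, b.cidr.IP.To4())
			ip[3] += byte(i) // offset from 192.168.115.0
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.115.0/26") // the block from the log
	b := &Block{cidr: cidr}
	for i := 0; i < 6; i++ { // .0 through .5; .5 is the claim logged above
		ip, _ := b.claimNext()
		fmt.Println(ip)
	}
}
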
Jan 28 02:07:12.693738 containerd[1627]: 2026-01-28 02:07:12.539 [INFO][4595] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.5/26] IPv6=[] ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" HandleID="k8s-pod-network.352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.559 [INFO][4541] cni-plugin/k8s.go 418: Populated endpoint ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d134562-e8ff-432f-bfe2-7f69c1332017", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-drn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dc98cc3fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.563 [INFO][4541] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.5/32] ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.563 [INFO][4541] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26dc98cc3fe ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.628 [INFO][4541] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.654 [INFO][4541] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d134562-e8ff-432f-bfe2-7f69c1332017", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1", Pod:"coredns-668d6bf9bc-drn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dc98cc3fe", MAC:"0a:ad:a2:79:5f:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:12.696863 containerd[1627]: 2026-01-28 02:07:12.683 [INFO][4541] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1" Namespace="kube-system" Pod="coredns-668d6bf9bc-drn4k" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:12.713990 systemd-networkd[1258]: calid639c675f2c: Gained IPv6LL Jan 28 02:07:12.739812 containerd[1627]: time="2026-01-28T02:07:12.739769218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-clcr9,Uid:8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4,Namespace:kube-system,Attempt:1,} returns sandbox id \"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6\"" Jan 28 02:07:12.768056 containerd[1627]: time="2026-01-28T02:07:12.766666138Z" level=info msg="CreateContainer within sandbox \"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:07:12.857288 containerd[1627]: time="2026-01-28T02:07:12.857244964Z" level=info msg="CreateContainer within sandbox 
\"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bee7030daad834cece2452f2146943d8dd8830d2c48a70321933ecb4c7cdeb49\"" Jan 28 02:07:12.861515 containerd[1627]: time="2026-01-28T02:07:12.860870375Z" level=info msg="StartContainer for \"bee7030daad834cece2452f2146943d8dd8830d2c48a70321933ecb4c7cdeb49\"" Jan 28 02:07:12.917017 containerd[1627]: time="2026-01-28T02:07:12.911281645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:12.918708 containerd[1627]: time="2026-01-28T02:07:12.916547938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:12.919420 containerd[1627]: time="2026-01-28T02:07:12.919358274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:12.920689 containerd[1627]: time="2026-01-28T02:07:12.920486030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:13.005582 kubelet[2843]: E0128 02:07:12.998525 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:07:13.005582 kubelet[2843]: E0128 02:07:12.999257 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:07:13.033786 systemd-networkd[1258]: calide44890f632: Gained IPv6LL Jan 28 02:07:13.042370 containerd[1627]: time="2026-01-28T02:07:13.041296489Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:13.065449 containerd[1627]: time="2026-01-28T02:07:13.064734823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:07:13.068891 containerd[1627]: time="2026-01-28T02:07:13.065372467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:07:13.071232 kubelet[2843]: E0128 02:07:13.070447 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:07:13.071232 kubelet[2843]: E0128 02:07:13.070522 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:07:13.071232 kubelet[2843]: E0128 02:07:13.070776 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:13.072445 kubelet[2843]: E0128 02:07:13.072154 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:07:13.146455 systemd[1]: run-netns-cni\x2dc8a0c74d\x2deaf0\x2dbb80\x2d37cd\x2df65f71415ed7.mount: Deactivated successfully. 
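
Every image pull in this log fails the same way: containerd resolves the reference through the OCI distribution API, ghcr.io answers 404 for the v3.30.4 tag ("trying next host - response was http.StatusNotFound"), the pull surfaces as ErrImagePull, and on the next sync kubelet demotes the pod to ImagePullBackOff. The resolve step can be reproduced with a plain HTTP request; the sketch below assumes ghcr.io's anonymous-token flow (a pull token from /token, then a HEAD against /v2/<name>/manifests/<tag>), which is how the registry behaves at the time of writing.

// tagcheck.go: reproduce the failed resolve for one of the images above.
// Sketch only; error handling is minimal, and the token endpoint is an
// assumption about ghcr.io's current anonymous-pull flow.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/whisker", "v3.30.4" // image from the log

	// 1. Anonymous pull token; ghcr.io rejects bare /v2/ requests.
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	var tok struct {
		Token string `json:"token"`
	}
	json.NewDecoder(resp.Body).Decode(&tok)
	resp.Body.Close()

	// 2. HEAD the manifest; a 404 here is exactly the "not found" in the log.
	req, _ := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // expect "404 Not Found"
}
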
Jan 28 02:07:13.407264 containerd[1627]: time="2026-01-28T02:07:13.406722676Z" level=info msg="StartContainer for \"bee7030daad834cece2452f2146943d8dd8830d2c48a70321933ecb4c7cdeb49\" returns successfully" Jan 28 02:07:13.423399 containerd[1627]: time="2026-01-28T02:07:13.423251569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-drn4k,Uid:9d134562-e8ff-432f-bfe2-7f69c1332017,Namespace:kube-system,Attempt:1,} returns sandbox id \"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1\"" Jan 28 02:07:13.438065 containerd[1627]: time="2026-01-28T02:07:13.437323120Z" level=info msg="CreateContainer within sandbox \"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 28 02:07:13.483510 containerd[1627]: time="2026-01-28T02:07:13.483111168Z" level=info msg="CreateContainer within sandbox \"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7a872eb1f980c417549e16b6344f1e3da330b971d96028b2bdf8c1445396353f\"" Jan 28 02:07:13.487046 containerd[1627]: time="2026-01-28T02:07:13.486839096Z" level=info msg="StartContainer for \"7a872eb1f980c417549e16b6344f1e3da330b971d96028b2bdf8c1445396353f\"" Jan 28 02:07:13.519745 systemd-networkd[1258]: cali38057c275ad: Link UP Jan 28 02:07:13.537431 systemd-networkd[1258]: cali38057c275ad: Gained carrier Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.906 [INFO][4676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.908 [INFO][4676] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" iface="eth0" netns="/var/run/netns/cni-2a9431a8-2ad8-0168-dbb4-56e895842329" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.910 [INFO][4676] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" iface="eth0" netns="/var/run/netns/cni-2a9431a8-2ad8-0168-dbb4-56e895842329" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.911 [INFO][4676] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" iface="eth0" netns="/var/run/netns/cni-2a9431a8-2ad8-0168-dbb4-56e895842329" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.911 [INFO][4676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:12.911 [INFO][4676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.355 [INFO][4747] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.369 [INFO][4747] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.447 [INFO][4747] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.476 [WARNING][4747] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.478 [INFO][4747] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.487 [INFO][4747] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:13.565657 containerd[1627]: 2026-01-28 02:07:13.547 [INFO][4676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:13.569837 containerd[1627]: time="2026-01-28T02:07:13.568256295Z" level=info msg="TearDown network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" successfully" Jan 28 02:07:13.569837 containerd[1627]: time="2026-01-28T02:07:13.568336255Z" level=info msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" returns successfully" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:12.801 [INFO][4657] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0 csi-node-driver- calico-system 11c042ea-f3ed-451b-a4e5-0f06212804a3 939 0 2026-01-28 02:06:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com csi-node-driver-dgqzm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38057c275ad [] [] }} ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:12.810 [INFO][4657] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.195 [INFO][4732] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" HandleID="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.196 [INFO][4732] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" HandleID="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332400), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"csi-node-driver-dgqzm", "timestamp":"2026-01-28 02:07:13.195040586 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.197 [INFO][4732] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.197 [INFO][4732] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.197 [INFO][4732] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.268 [INFO][4732] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.322 [INFO][4732] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.365 [INFO][4732] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.371 [INFO][4732] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.405 [INFO][4732] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.405 [INFO][4732] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.410 [INFO][4732] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.430 [INFO][4732] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.447 [INFO][4732] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.6/26] block=192.168.115.0/26 handle="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.447 [INFO][4732] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.6/26] handle="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" host="srv-rjxd2.gb1.brightbox.com" Jan 28 
02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.447 [INFO][4732] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:13.574093 containerd[1627]: 2026-01-28 02:07:13.447 [INFO][4732] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.6/26] IPv6=[] ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" HandleID="k8s-pod-network.9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.461 [INFO][4657] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11c042ea-f3ed-451b-a4e5-0f06212804a3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-dgqzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38057c275ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.469 [INFO][4657] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.6/32] ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.470 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38057c275ad ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.532 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.541 
[INFO][4657] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11c042ea-f3ed-451b-a4e5-0f06212804a3", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db", Pod:"csi-node-driver-dgqzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38057c275ad", MAC:"ee:c4:09:61:99:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:13.575199 containerd[1627]: 2026-01-28 02:07:13.559 [INFO][4657] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db" Namespace="calico-system" Pod="csi-node-driver-dgqzm" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:13.576589 containerd[1627]: time="2026-01-28T02:07:13.576528101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-v4czv,Uid:9a4d006c-455e-43f1-8c29-a9bee0e4e963,Namespace:calico-apiserver,Attempt:1,}" Jan 28 02:07:13.579760 systemd[1]: run-netns-cni\x2d2a9431a8\x2d2ad8\x2d0168\x2ddbb4\x2d56e895842329.mount: Deactivated successfully. Jan 28 02:07:13.745312 systemd-networkd[1258]: calic2bb499d0e6: Gained IPv6LL Jan 28 02:07:13.745860 systemd-networkd[1258]: cali26dc98cc3fe: Gained IPv6LL Jan 28 02:07:13.807584 systemd-journald[1180]: Under memory pressure, flushing caches. Jan 28 02:07:13.801982 systemd-networkd[1258]: vxlan.calico: Gained IPv6LL Jan 28 02:07:13.804127 systemd-resolved[1512]: Under memory pressure, flushing caches. Jan 28 02:07:13.804162 systemd-resolved[1512]: Flushed all caches. Jan 28 02:07:13.841814 systemd-networkd[1258]: cali67092e05a7b: Link UP Jan 28 02:07:13.847761 systemd-networkd[1258]: cali67092e05a7b: Gained carrier Jan 28 02:07:13.919913 containerd[1627]: time="2026-01-28T02:07:13.914959075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:13.919913 containerd[1627]: time="2026-01-28T02:07:13.915055903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:13.919913 containerd[1627]: time="2026-01-28T02:07:13.915079531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:13.919913 containerd[1627]: time="2026-01-28T02:07:13.915266213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.090 [INFO][4693] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0 goldmane-666569f655- calico-system 8b2fed1f-0989-4c8f-98b5-dfc06958e7db 940 0 2026-01-28 02:06:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com goldmane-666569f655-qjnq7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali67092e05a7b [] [] }} ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.090 [INFO][4693] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.364 [INFO][4792] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" HandleID="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.369 [INFO][4792] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" HandleID="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004eb50), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"goldmane-666569f655-qjnq7", "timestamp":"2026-01-28 02:07:13.364046393 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.370 [INFO][4792] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.499 [INFO][4792] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
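One cost of that host-wide lock is visible directly in the timestamps above: the goldmane request [4792] logged "About to acquire host-wide IPAM lock." at 02:07:13.370 but "Acquired" only at 02:07:13.499, so it queued for roughly 130 ms — plausibly behind the concurrent [4732]/[4747] operations, though that contention reading is an inference; only the timestamps are in the log. A quick check of the arithmetic:

```go
package main

import (
	"fmt"
	"time"
)

// Subtract the two timestamps logged by ipam_plugin.go for request
// [4792]: "About to acquire" at .370, "Acquired" at .499.
func main() {
	const layout = "2006-01-02 15:04:05.000"
	asked, _ := time.Parse(layout, "2026-01-28 02:07:13.370")
	got, _ := time.Parse(layout, "2026-01-28 02:07:13.499")
	fmt.Println("lock wait:", got.Sub(asked)) // lock wait: 129ms
}
```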
Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.500 [INFO][4792] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.555 [INFO][4792] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.589 [INFO][4792] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.646 [INFO][4792] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.660 [INFO][4792] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.700 [INFO][4792] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.709 [INFO][4792] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.715 [INFO][4792] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519 Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.765 [INFO][4792] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.779 [INFO][4792] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.7/26] block=192.168.115.0/26 handle="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.782 [INFO][4792] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.7/26] handle="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.782 [INFO][4792] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
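Note the two views of the same address in this sequence: IPAM reports the claim as 192.168.115.7/26, relative to the affinity block, while the WorkloadEndpoint written just below records IPNetworks 192.168.115.7/32 — the block is the unit of allocation, the /32 is the per-pod route. A small check of the relationship, assuming nothing beyond the two prefixes shown in the log:

```go
package main

import (
	"fmt"
	"net/netip"
)

// A /26 affinity block spans 2^(32-26) = 64 addresses, and the /32
// claimed for the pod must fall inside it.
func main() {
	block := netip.MustParsePrefix("192.168.115.0/26")
	pod := netip.MustParsePrefix("192.168.115.7/32")

	fmt.Println("block holds", 1<<(32-block.Bits()), "addresses") // 64
	fmt.Println("pod IP inside block:", block.Contains(pod.Addr())) // true
}
```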
Jan 28 02:07:13.975613 containerd[1627]: 2026-01-28 02:07:13.782 [INFO][4792] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.7/26] IPv6=[] ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" HandleID="k8s-pod-network.0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.805 [INFO][4693] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b2fed1f-0989-4c8f-98b5-dfc06958e7db", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-qjnq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67092e05a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.806 [INFO][4693] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.7/32] ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.808 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67092e05a7b ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.856 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.863 [INFO][4693] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" 
Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b2fed1f-0989-4c8f-98b5-dfc06958e7db", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519", Pod:"goldmane-666569f655-qjnq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67092e05a7b", MAC:"2e:7e:2a:55:1c:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:13.976791 containerd[1627]: 2026-01-28 02:07:13.935 [INFO][4693] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519" Namespace="calico-system" Pod="goldmane-666569f655-qjnq7" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:14.003871 containerd[1627]: time="2026-01-28T02:07:14.003638262Z" level=info msg="StartContainer for \"7a872eb1f980c417549e16b6344f1e3da330b971d96028b2bdf8c1445396353f\" returns successfully" Jan 28 02:07:14.044424 kubelet[2843]: E0128 02:07:14.043863 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:07:14.106345 kubelet[2843]: I0128 02:07:14.099062 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-clcr9" podStartSLOduration=50.099022864 podStartE2EDuration="50.099022864s" podCreationTimestamp="2026-01-28 02:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:07:14.098189617 +0000 UTC m=+55.826090265" watchObservedRunningTime="2026-01-28 02:07:14.099022864 +0000 UTC m=+55.826923488" Jan 28 02:07:14.239028 containerd[1627]: time="2026-01-28T02:07:14.238673688Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:14.240428 containerd[1627]: time="2026-01-28T02:07:14.240082423Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:14.240428 containerd[1627]: time="2026-01-28T02:07:14.240125321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:14.242791 containerd[1627]: time="2026-01-28T02:07:14.241844686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:14.315669 containerd[1627]: time="2026-01-28T02:07:14.315391288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dgqzm,Uid:11c042ea-f3ed-451b-a4e5-0f06212804a3,Namespace:calico-system,Attempt:1,} returns sandbox id \"9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db\"" Jan 28 02:07:14.324309 containerd[1627]: time="2026-01-28T02:07:14.324113552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:07:14.486949 containerd[1627]: time="2026-01-28T02:07:14.486750170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qjnq7,Uid:8b2fed1f-0989-4c8f-98b5-dfc06958e7db,Namespace:calico-system,Attempt:1,} returns sandbox id \"0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519\"" Jan 28 02:07:14.562664 systemd-networkd[1258]: cali6fb43c128f9: Link UP Jan 28 02:07:14.563011 systemd-networkd[1258]: cali6fb43c128f9: Gained carrier Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.171 [INFO][4888] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0 calico-apiserver-66dfb7f7f9- calico-apiserver 9a4d006c-455e-43f1-8c29-a9bee0e4e963 954 0 2026-01-28 02:06:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66dfb7f7f9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-rjxd2.gb1.brightbox.com calico-apiserver-66dfb7f7f9-v4czv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6fb43c128f9 [] [] }} ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.190 [INFO][4888] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.411 [INFO][4984] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" HandleID="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.622682 
containerd[1627]: 2026-01-28 02:07:14.412 [INFO][4984] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" HandleID="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003861d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-rjxd2.gb1.brightbox.com", "pod":"calico-apiserver-66dfb7f7f9-v4czv", "timestamp":"2026-01-28 02:07:14.411958591 +0000 UTC"}, Hostname:"srv-rjxd2.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.412 [INFO][4984] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.412 [INFO][4984] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.412 [INFO][4984] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-rjxd2.gb1.brightbox.com' Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.434 [INFO][4984] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.446 [INFO][4984] ipam/ipam.go 394: Looking up existing affinities for host host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.462 [INFO][4984] ipam/ipam.go 511: Trying affinity for 192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.468 [INFO][4984] ipam/ipam.go 158: Attempting to load block cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.487 [INFO][4984] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.115.0/26 host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.487 [INFO][4984] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.115.0/26 handle="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.497 [INFO][4984] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7 Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.507 [INFO][4984] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.115.0/26 handle="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.537 [INFO][4984] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.115.8/26] block=192.168.115.0/26 handle="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.537 [INFO][4984] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.115.8/26] 
handle="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" host="srv-rjxd2.gb1.brightbox.com" Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.539 [INFO][4984] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:14.622682 containerd[1627]: 2026-01-28 02:07:14.539 [INFO][4984] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.115.8/26] IPv6=[] ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" HandleID="k8s-pod-network.3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.550 [INFO][4888] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a4d006c-455e-43f1-8c29-a9bee0e4e963", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-66dfb7f7f9-v4czv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fb43c128f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.550 [INFO][4888] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.115.8/32] ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.550 [INFO][4888] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6fb43c128f9 ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.567 [INFO][4888] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.571 [INFO][4888] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a4d006c-455e-43f1-8c29-a9bee0e4e963", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7", Pod:"calico-apiserver-66dfb7f7f9-v4czv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fb43c128f9", MAC:"ba:67:61:0a:3c:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:14.627074 containerd[1627]: 2026-01-28 02:07:14.610 [INFO][4888] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7" Namespace="calico-apiserver" Pod="calico-apiserver-66dfb7f7f9-v4czv" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:14.634009 systemd-networkd[1258]: cali38057c275ad: Gained IPv6LL Jan 28 02:07:14.680305 containerd[1627]: time="2026-01-28T02:07:14.678048079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 28 02:07:14.680305 containerd[1627]: time="2026-01-28T02:07:14.678136603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 28 02:07:14.680305 containerd[1627]: time="2026-01-28T02:07:14.678188259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:14.680305 containerd[1627]: time="2026-01-28T02:07:14.678362646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 28 02:07:14.684052 containerd[1627]: time="2026-01-28T02:07:14.683696218Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:14.692036 containerd[1627]: time="2026-01-28T02:07:14.691649164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:07:14.692408 containerd[1627]: time="2026-01-28T02:07:14.691980128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:07:14.698651 kubelet[2843]: E0128 02:07:14.696425 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:07:14.705964 kubelet[2843]: E0128 02:07:14.698925 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:07:14.709638 kubelet[2843]: E0128 02:07:14.706648 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:14.710900 containerd[1627]: time="2026-01-28T02:07:14.710851814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:07:14.860064 containerd[1627]: time="2026-01-28T02:07:14.859998325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66dfb7f7f9-v4czv,Uid:9a4d006c-455e-43f1-8c29-a9bee0e4e963,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7\"" Jan 28 02:07:15.079644 kubelet[2843]: I0128 02:07:15.078421 2843 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-drn4k" podStartSLOduration=51.078393308 podStartE2EDuration="51.078393308s" podCreationTimestamp="2026-01-28 02:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 02:07:15.07781277 +0000 UTC m=+56.805713401" watchObservedRunningTime="2026-01-28 02:07:15.078393308 +0000 UTC m=+56.806293925" Jan 28 02:07:15.137371 systemd[1]: run-containerd-runc-k8s.io-3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7-runc.UNUqh2.mount: Deactivated successfully. Jan 28 02:07:15.312083 containerd[1627]: time="2026-01-28T02:07:15.312026846Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:15.314239 containerd[1627]: time="2026-01-28T02:07:15.313211945Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:07:15.314239 containerd[1627]: time="2026-01-28T02:07:15.313259826Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:15.314239 containerd[1627]: time="2026-01-28T02:07:15.314032435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:07:15.314432 kubelet[2843]: E0128 02:07:15.313514 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:07:15.314432 kubelet[2843]: E0128 02:07:15.313612 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:07:15.314432 kubelet[2843]: E0128 02:07:15.314020 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdd2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:15.316971 kubelet[2843]: E0128 02:07:15.315856 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:07:15.402011 systemd-networkd[1258]: 
cali67092e05a7b: Gained IPv6LL Jan 28 02:07:15.637128 containerd[1627]: time="2026-01-28T02:07:15.637036905Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:15.638155 containerd[1627]: time="2026-01-28T02:07:15.638107181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:07:15.638269 containerd[1627]: time="2026-01-28T02:07:15.638218447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:07:15.638671 kubelet[2843]: E0128 02:07:15.638485 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:07:15.638671 kubelet[2843]: E0128 02:07:15.638583 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:07:15.640814 kubelet[2843]: E0128 02:07:15.639608 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:15.640975 containerd[1627]: time="2026-01-28T02:07:15.639012335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:07:15.641651 kubelet[2843]: E0128 02:07:15.641483 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:07:15.954090 containerd[1627]: time="2026-01-28T02:07:15.953818528Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:15.955452 containerd[1627]: time="2026-01-28T02:07:15.955291787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:07:15.955592 containerd[1627]: time="2026-01-28T02:07:15.955421409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:15.955793 kubelet[2843]: E0128 02:07:15.955733 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:15.955927 kubelet[2843]: E0128 02:07:15.955827 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:15.956168 kubelet[2843]: E0128 02:07:15.956098 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:15.957629 kubelet[2843]: E0128 02:07:15.957574 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:07:15.977785 systemd-networkd[1258]: cali6fb43c128f9: Gained IPv6LL Jan 28 02:07:16.063985 kubelet[2843]: E0128 02:07:16.063756 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:07:16.065349 kubelet[2843]: E0128 02:07:16.065102 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:07:16.066222 kubelet[2843]: E0128 02:07:16.066175 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:07:18.453622 containerd[1627]: time="2026-01-28T02:07:18.453007809Z" level=info msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.532 [WARNING][5091] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a4d006c-455e-43f1-8c29-a9bee0e4e963", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7", Pod:"calico-apiserver-66dfb7f7f9-v4czv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fb43c128f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.532 [INFO][5091] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.532 [INFO][5091] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" iface="eth0" netns="" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.532 [INFO][5091] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.532 [INFO][5091] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.586 [INFO][5101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.588 [INFO][5101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.588 [INFO][5101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.609 [WARNING][5101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.609 [INFO][5101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.610 [INFO][5101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:18.620281 containerd[1627]: 2026-01-28 02:07:18.616 [INFO][5091] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.620281 containerd[1627]: time="2026-01-28T02:07:18.620000189Z" level=info msg="TearDown network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" successfully" Jan 28 02:07:18.620281 containerd[1627]: time="2026-01-28T02:07:18.620051413Z" level=info msg="StopPodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" returns successfully" Jan 28 02:07:18.623217 containerd[1627]: time="2026-01-28T02:07:18.621011571Z" level=info msg="RemovePodSandbox for \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" Jan 28 02:07:18.623217 containerd[1627]: time="2026-01-28T02:07:18.621062651Z" level=info msg="Forcibly stopping sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\"" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.684 [WARNING][5116] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a4d006c-455e-43f1-8c29-a9bee0e4e963", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"3e50aea67ef6e77b8c0bea9c9a3fe9d3b9675dde88e5377eee04a8c94f98d2d7", Pod:"calico-apiserver-66dfb7f7f9-v4czv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6fb43c128f9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.685 [INFO][5116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.685 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" iface="eth0" netns="" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.685 [INFO][5116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.685 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.734 [INFO][5125] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.735 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.735 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.743 [WARNING][5125] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.743 [INFO][5125] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" HandleID="k8s-pod-network.dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--v4czv-eth0" Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.746 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:18.752338 containerd[1627]: 2026-01-28 02:07:18.748 [INFO][5116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd" Jan 28 02:07:18.752338 containerd[1627]: time="2026-01-28T02:07:18.752266174Z" level=info msg="TearDown network for sandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" successfully" Jan 28 02:07:18.763319 containerd[1627]: time="2026-01-28T02:07:18.763266910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:18.763461 containerd[1627]: time="2026-01-28T02:07:18.763366168Z" level=info msg="RemovePodSandbox \"dde5553584beed3515021ad81c31fe725bbf49582e17190a96805f43bf1460bd\" returns successfully" Jan 28 02:07:18.764148 containerd[1627]: time="2026-01-28T02:07:18.764089880Z" level=info msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.817 [WARNING][5139] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b2fed1f-0989-4c8f-98b5-dfc06958e7db", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519", Pod:"goldmane-666569f655-qjnq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67092e05a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.817 [INFO][5139] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.817 [INFO][5139] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" iface="eth0" netns="" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.817 [INFO][5139] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.817 [INFO][5139] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.871 [INFO][5146] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.871 [INFO][5146] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.871 [INFO][5146] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.882 [WARNING][5146] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.882 [INFO][5146] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.884 [INFO][5146] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:18.890741 containerd[1627]: 2026-01-28 02:07:18.887 [INFO][5139] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:18.891895 containerd[1627]: time="2026-01-28T02:07:18.890803751Z" level=info msg="TearDown network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" successfully" Jan 28 02:07:18.891895 containerd[1627]: time="2026-01-28T02:07:18.890863932Z" level=info msg="StopPodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" returns successfully" Jan 28 02:07:18.892727 containerd[1627]: time="2026-01-28T02:07:18.892253500Z" level=info msg="RemovePodSandbox for \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" Jan 28 02:07:18.892727 containerd[1627]: time="2026-01-28T02:07:18.892359378Z" level=info msg="Forcibly stopping sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\"" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.959 [WARNING][5161] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"8b2fed1f-0989-4c8f-98b5-dfc06958e7db", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"0651d7931c13cd6ec123aedd9a84ea0d89c783eec412072783771cb5e9692519", Pod:"goldmane-666569f655-qjnq7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.115.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali67092e05a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.959 [INFO][5161] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.959 [INFO][5161] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" iface="eth0" netns="" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.959 [INFO][5161] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.959 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.992 [INFO][5168] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.992 [INFO][5168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:18.993 [INFO][5168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:19.002 [WARNING][5168] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:19.002 [INFO][5168] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" HandleID="k8s-pod-network.653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Workload="srv--rjxd2.gb1.brightbox.com-k8s-goldmane--666569f655--qjnq7-eth0" Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:19.004 [INFO][5168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.008727 containerd[1627]: 2026-01-28 02:07:19.006 [INFO][5161] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4" Jan 28 02:07:19.010708 containerd[1627]: time="2026-01-28T02:07:19.008723206Z" level=info msg="TearDown network for sandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" successfully" Jan 28 02:07:19.014470 containerd[1627]: time="2026-01-28T02:07:19.014332409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:19.014470 containerd[1627]: time="2026-01-28T02:07:19.014407052Z" level=info msg="RemovePodSandbox \"653f27aa9f9242ece6830fc05c454e64b4b337517884befea8485e4f2dec95c4\" returns successfully" Jan 28 02:07:19.015507 containerd[1627]: time="2026-01-28T02:07:19.015474962Z" level=info msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.065 [WARNING][5182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d134562-e8ff-432f-bfe2-7f69c1332017", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1", Pod:"coredns-668d6bf9bc-drn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dc98cc3fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.066 [INFO][5182] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.066 [INFO][5182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" iface="eth0" netns="" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.066 [INFO][5182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.066 [INFO][5182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.115 [INFO][5190] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.115 [INFO][5190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.115 [INFO][5190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.124 [WARNING][5190] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.125 [INFO][5190] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.129 [INFO][5190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.133368 containerd[1627]: 2026-01-28 02:07:19.131 [INFO][5182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.134153 containerd[1627]: time="2026-01-28T02:07:19.133440620Z" level=info msg="TearDown network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" successfully" Jan 28 02:07:19.134153 containerd[1627]: time="2026-01-28T02:07:19.133472861Z" level=info msg="StopPodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" returns successfully" Jan 28 02:07:19.135500 containerd[1627]: time="2026-01-28T02:07:19.135386138Z" level=info msg="RemovePodSandbox for \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" Jan 28 02:07:19.135500 containerd[1627]: time="2026-01-28T02:07:19.135492961Z" level=info msg="Forcibly stopping sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\"" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.183 [WARNING][5204] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9d134562-e8ff-432f-bfe2-7f69c1332017", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"352ac4d3adf5c5ca76b3da78c692e6b9b1537460a0cc88148ef4eca9f1ec9cd1", Pod:"coredns-668d6bf9bc-drn4k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali26dc98cc3fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.184 [INFO][5204] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.184 [INFO][5204] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" iface="eth0" netns="" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.184 [INFO][5204] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.184 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.239 [INFO][5211] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.241 [INFO][5211] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.242 [INFO][5211] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.256 [WARNING][5211] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.256 [INFO][5211] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" HandleID="k8s-pod-network.478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--drn4k-eth0" Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.259 [INFO][5211] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.266984 containerd[1627]: 2026-01-28 02:07:19.263 [INFO][5204] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0" Jan 28 02:07:19.266984 containerd[1627]: time="2026-01-28T02:07:19.266911306Z" level=info msg="TearDown network for sandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" successfully" Jan 28 02:07:19.274904 containerd[1627]: time="2026-01-28T02:07:19.274871445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:19.275549 containerd[1627]: time="2026-01-28T02:07:19.275093874Z" level=info msg="RemovePodSandbox \"478f7f9d89f9659f2a7831ad43c7bb4388b47b025478b1606a6988c896c0cda0\" returns successfully" Jan 28 02:07:19.276317 containerd[1627]: time="2026-01-28T02:07:19.275785418Z" level=info msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.345 [WARNING][5228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6", Pod:"coredns-668d6bf9bc-clcr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2bb499d0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.345 [INFO][5228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.345 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" iface="eth0" netns="" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.345 [INFO][5228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.345 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.375 [INFO][5235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.375 [INFO][5235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.376 [INFO][5235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.389 [WARNING][5235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.389 [INFO][5235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.392 [INFO][5235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.398586 containerd[1627]: 2026-01-28 02:07:19.395 [INFO][5228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.401042 containerd[1627]: time="2026-01-28T02:07:19.399141642Z" level=info msg="TearDown network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" successfully" Jan 28 02:07:19.401042 containerd[1627]: time="2026-01-28T02:07:19.399188509Z" level=info msg="StopPodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" returns successfully" Jan 28 02:07:19.401878 containerd[1627]: time="2026-01-28T02:07:19.401368612Z" level=info msg="RemovePodSandbox for \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" Jan 28 02:07:19.401878 containerd[1627]: time="2026-01-28T02:07:19.401432738Z" level=info msg="Forcibly stopping sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\"" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.450 [WARNING][5249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8e364ecd-a3cc-4f7c-be3d-6fe6eb9941b4", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"27e886badd2be349e3d837a4471da6e53393f1036daa20e0271b26c6c3cb34e6", Pod:"coredns-668d6bf9bc-clcr9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic2bb499d0e6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.450 [INFO][5249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.450 [INFO][5249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" iface="eth0" netns="" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.450 [INFO][5249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.450 [INFO][5249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.477 [INFO][5256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.477 [INFO][5256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.477 [INFO][5256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.488 [WARNING][5256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.488 [INFO][5256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" HandleID="k8s-pod-network.39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Workload="srv--rjxd2.gb1.brightbox.com-k8s-coredns--668d6bf9bc--clcr9-eth0" Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.491 [INFO][5256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.494436 containerd[1627]: 2026-01-28 02:07:19.492 [INFO][5249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467" Jan 28 02:07:19.496337 containerd[1627]: time="2026-01-28T02:07:19.494497071Z" level=info msg="TearDown network for sandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" successfully" Jan 28 02:07:19.497975 containerd[1627]: time="2026-01-28T02:07:19.497929103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:19.498057 containerd[1627]: time="2026-01-28T02:07:19.497989822Z" level=info msg="RemovePodSandbox \"39c6bfed6366bd1e6bcfa4e537ad659bfbba4e5cd6dfe18e34b70bd4f0586467\" returns successfully" Jan 28 02:07:19.499185 containerd[1627]: time="2026-01-28T02:07:19.498787410Z" level=info msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.552 [WARNING][5271] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0", GenerateName:"calico-kube-controllers-5c95698587-", Namespace:"calico-system", SelfLink:"", UID:"1159a5fb-a1ac-4f76-832d-c5be127c9405", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95698587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285", Pod:"calico-kube-controllers-5c95698587-q576f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide44890f632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.552 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.552 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" iface="eth0" netns="" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.553 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.553 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.592 [INFO][5278] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.592 [INFO][5278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.592 [INFO][5278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.604 [WARNING][5278] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.604 [INFO][5278] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.606 [INFO][5278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.610285 containerd[1627]: 2026-01-28 02:07:19.608 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.610285 containerd[1627]: time="2026-01-28T02:07:19.610119247Z" level=info msg="TearDown network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" successfully" Jan 28 02:07:19.610285 containerd[1627]: time="2026-01-28T02:07:19.610163092Z" level=info msg="StopPodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" returns successfully" Jan 28 02:07:19.612301 containerd[1627]: time="2026-01-28T02:07:19.611129605Z" level=info msg="RemovePodSandbox for \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" Jan 28 02:07:19.612301 containerd[1627]: time="2026-01-28T02:07:19.611162547Z" level=info msg="Forcibly stopping sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\"" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.662 [WARNING][5292] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0", GenerateName:"calico-kube-controllers-5c95698587-", Namespace:"calico-system", SelfLink:"", UID:"1159a5fb-a1ac-4f76-832d-c5be127c9405", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c95698587", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"cf389c08bf5cf67090e7a87dd6bb5b02c52691f3796139dc1264f64c63e4c285", Pod:"calico-kube-controllers-5c95698587-q576f", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calide44890f632", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.664 [INFO][5292] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.665 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" iface="eth0" netns="" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.665 [INFO][5292] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.665 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.705 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.705 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.705 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.714 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.714 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" HandleID="k8s-pod-network.26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--kube--controllers--5c95698587--q576f-eth0" Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.716 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.721633 containerd[1627]: 2026-01-28 02:07:19.718 [INFO][5292] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d" Jan 28 02:07:19.721633 containerd[1627]: time="2026-01-28T02:07:19.720940816Z" level=info msg="TearDown network for sandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" successfully" Jan 28 02:07:19.725482 containerd[1627]: time="2026-01-28T02:07:19.725125653Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:19.725482 containerd[1627]: time="2026-01-28T02:07:19.725221320Z" level=info msg="RemovePodSandbox \"26f5ba689d9ff927ebad6716cbdbfeca26f7e0fac50c0b7119b566096043c06d\" returns successfully" Jan 28 02:07:19.726609 containerd[1627]: time="2026-01-28T02:07:19.726313362Z" level=info msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.805 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d0c55e-7a98-404e-a4c2-3c6f8edba99c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62", Pod:"calico-apiserver-66dfb7f7f9-w9b4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639c675f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.805 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.805 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" iface="eth0" netns="" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.805 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.806 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.844 [INFO][5320] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.845 [INFO][5320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.845 [INFO][5320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.854 [WARNING][5320] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.854 [INFO][5320] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.858 [INFO][5320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.865761 containerd[1627]: 2026-01-28 02:07:19.861 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.868141 containerd[1627]: time="2026-01-28T02:07:19.865757032Z" level=info msg="TearDown network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" successfully" Jan 28 02:07:19.868141 containerd[1627]: time="2026-01-28T02:07:19.865827707Z" level=info msg="StopPodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" returns successfully" Jan 28 02:07:19.873312 containerd[1627]: time="2026-01-28T02:07:19.873248891Z" level=info msg="RemovePodSandbox for \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" Jan 28 02:07:19.873579 containerd[1627]: time="2026-01-28T02:07:19.873416185Z" level=info msg="Forcibly stopping sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\"" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.927 [WARNING][5336] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0", GenerateName:"calico-apiserver-66dfb7f7f9-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d0c55e-7a98-404e-a4c2-3c6f8edba99c", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66dfb7f7f9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"05cd0a937e22e0854d6d4a2087aafb225c8b10a73f8a055891da91e22b07cf62", Pod:"calico-apiserver-66dfb7f7f9-w9b4h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid639c675f2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.928 [INFO][5336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.928 [INFO][5336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" iface="eth0" netns="" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.928 [INFO][5336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.928 [INFO][5336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.961 [INFO][5344] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.961 [INFO][5344] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.961 [INFO][5344] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.971 [WARNING][5344] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.971 [INFO][5344] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" HandleID="k8s-pod-network.4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Workload="srv--rjxd2.gb1.brightbox.com-k8s-calico--apiserver--66dfb7f7f9--w9b4h-eth0" Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.974 [INFO][5344] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:19.983366 containerd[1627]: 2026-01-28 02:07:19.979 [INFO][5336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6" Jan 28 02:07:19.986135 containerd[1627]: time="2026-01-28T02:07:19.983423537Z" level=info msg="TearDown network for sandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" successfully" Jan 28 02:07:19.995474 containerd[1627]: time="2026-01-28T02:07:19.995437518Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:19.995598 containerd[1627]: time="2026-01-28T02:07:19.995497539Z" level=info msg="RemovePodSandbox \"4e828e4b806926d4db7e913850aadff2fd5fc27b80b123216d4d07a62754e1e6\" returns successfully" Jan 28 02:07:20.000023 containerd[1627]: time="2026-01-28T02:07:19.999763139Z" level=info msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.050 [WARNING][5358] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.050 [INFO][5358] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.050 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" iface="eth0" netns="" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.050 [INFO][5358] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.051 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.091 [INFO][5365] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.092 [INFO][5365] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.092 [INFO][5365] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.105 [WARNING][5365] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.105 [INFO][5365] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.109 [INFO][5365] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:20.113858 containerd[1627]: 2026-01-28 02:07:20.111 [INFO][5358] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.114800 containerd[1627]: time="2026-01-28T02:07:20.113915399Z" level=info msg="TearDown network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" successfully" Jan 28 02:07:20.114800 containerd[1627]: time="2026-01-28T02:07:20.113966826Z" level=info msg="StopPodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" returns successfully" Jan 28 02:07:20.114800 containerd[1627]: time="2026-01-28T02:07:20.114396085Z" level=info msg="RemovePodSandbox for \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" Jan 28 02:07:20.114800 containerd[1627]: time="2026-01-28T02:07:20.114438923Z" level=info msg="Forcibly stopping sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\"" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.175 [WARNING][5379] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" WorkloadEndpoint="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.175 [INFO][5379] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.175 [INFO][5379] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" iface="eth0" netns="" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.175 [INFO][5379] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.175 [INFO][5379] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.204 [INFO][5386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.204 [INFO][5386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.204 [INFO][5386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.215 [WARNING][5386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.215 [INFO][5386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" HandleID="k8s-pod-network.ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Workload="srv--rjxd2.gb1.brightbox.com-k8s-whisker--8f88cb457--rs5wl-eth0" Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.217 [INFO][5386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:20.221531 containerd[1627]: 2026-01-28 02:07:20.219 [INFO][5379] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5" Jan 28 02:07:20.222310 containerd[1627]: time="2026-01-28T02:07:20.221525197Z" level=info msg="TearDown network for sandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" successfully" Jan 28 02:07:20.226057 containerd[1627]: time="2026-01-28T02:07:20.226014152Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:20.226145 containerd[1627]: time="2026-01-28T02:07:20.226114778Z" level=info msg="RemovePodSandbox \"ab273fc0c396014c63e424dbf5cf5d61983cde11d827837ceb9aab60a64494b5\" returns successfully" Jan 28 02:07:20.226831 containerd[1627]: time="2026-01-28T02:07:20.226763617Z" level=info msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.274 [WARNING][5400] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11c042ea-f3ed-451b-a4e5-0f06212804a3", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db", Pod:"csi-node-driver-dgqzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38057c275ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.274 [INFO][5400] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.275 [INFO][5400] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" iface="eth0" netns="" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.275 [INFO][5400] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.275 [INFO][5400] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.305 [INFO][5407] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.305 [INFO][5407] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.305 [INFO][5407] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.314 [WARNING][5407] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.314 [INFO][5407] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.316 [INFO][5407] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:20.321541 containerd[1627]: 2026-01-28 02:07:20.318 [INFO][5400] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.322984 containerd[1627]: time="2026-01-28T02:07:20.321529870Z" level=info msg="TearDown network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" successfully" Jan 28 02:07:20.322984 containerd[1627]: time="2026-01-28T02:07:20.321616155Z" level=info msg="StopPodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" returns successfully" Jan 28 02:07:20.322984 containerd[1627]: time="2026-01-28T02:07:20.322787040Z" level=info msg="RemovePodSandbox for \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" Jan 28 02:07:20.322984 containerd[1627]: time="2026-01-28T02:07:20.322853976Z" level=info msg="Forcibly stopping sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\"" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.368 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11c042ea-f3ed-451b-a4e5-0f06212804a3", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 28, 2, 6, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-rjxd2.gb1.brightbox.com", ContainerID:"9444fd4d123e7d026d55692fff7b7d9c7a8cfad61949056ca8a343602cbad2db", Pod:"csi-node-driver-dgqzm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38057c275ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.369 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.369 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" iface="eth0" netns="" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.369 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.369 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.409 [INFO][5428] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.409 [INFO][5428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.409 [INFO][5428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.418 [WARNING][5428] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.419 [INFO][5428] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" HandleID="k8s-pod-network.b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Workload="srv--rjxd2.gb1.brightbox.com-k8s-csi--node--driver--dgqzm-eth0" Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.421 [INFO][5428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 28 02:07:20.425050 containerd[1627]: 2026-01-28 02:07:20.423 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791" Jan 28 02:07:20.426226 containerd[1627]: time="2026-01-28T02:07:20.425143634Z" level=info msg="TearDown network for sandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" successfully" Jan 28 02:07:20.429752 containerd[1627]: time="2026-01-28T02:07:20.429685040Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 28 02:07:20.429869 containerd[1627]: time="2026-01-28T02:07:20.429780059Z" level=info msg="RemovePodSandbox \"b73d412be14fa1f8d6f0f4abf71a620ea73b63e2898078ef4b826e4da4185791\" returns successfully" Jan 28 02:07:25.453664 containerd[1627]: time="2026-01-28T02:07:25.453142174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 28 02:07:25.782705 containerd[1627]: time="2026-01-28T02:07:25.782396434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:25.784018 containerd[1627]: time="2026-01-28T02:07:25.783828524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 28 02:07:25.784018 containerd[1627]: time="2026-01-28T02:07:25.783879646Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 28 02:07:25.784396 kubelet[2843]: E0128 02:07:25.784115 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:07:25.784396 kubelet[2843]: E0128 02:07:25.784226 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 28 02:07:25.786514 kubelet[2843]: E0128 02:07:25.784522 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb1d52d8e569481480e80c2c7a6f1cce,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:25.786687 containerd[1627]: time="2026-01-28T02:07:25.785353031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:07:26.105658 containerd[1627]: time="2026-01-28T02:07:26.105599856Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:26.106930 containerd[1627]: time="2026-01-28T02:07:26.106792818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:07:26.107029 containerd[1627]: time="2026-01-28T02:07:26.106942296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:07:26.107519 kubelet[2843]: E0128 02:07:26.107210 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:07:26.107519 kubelet[2843]: E0128 02:07:26.107273 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:07:26.108015 containerd[1627]: time="2026-01-28T02:07:26.107815586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 28 02:07:26.108207 kubelet[2843]: E0128 02:07:26.107906 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:26.109331 kubelet[2843]: E0128 02:07:26.109299 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:07:26.421297 containerd[1627]: time="2026-01-28T02:07:26.421110240Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:26.422603 containerd[1627]: time="2026-01-28T02:07:26.422515962Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 28 02:07:26.422721 containerd[1627]: time="2026-01-28T02:07:26.422670533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 28 02:07:26.422968 kubelet[2843]: E0128 02:07:26.422913 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:07:26.423089 kubelet[2843]: E0128 02:07:26.422980 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 28 02:07:26.423245 kubelet[2843]: E0128 02:07:26.423172 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:26.424626 kubelet[2843]: E0128 02:07:26.424570 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:07:26.456705 containerd[1627]: time="2026-01-28T02:07:26.455063087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:07:26.775930 containerd[1627]: time="2026-01-28T02:07:26.775474311Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:26.776862 containerd[1627]: time="2026-01-28T02:07:26.776744409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:07:26.776862 containerd[1627]: time="2026-01-28T02:07:26.776834534Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:26.778226 kubelet[2843]: E0128 02:07:26.777220 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:26.778226 kubelet[2843]: E0128 02:07:26.777322 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:26.778226 kubelet[2843]: E0128 02:07:26.777623 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhs92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:26.779434 kubelet[2843]: E0128 02:07:26.779379 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:07:27.451865 containerd[1627]: time="2026-01-28T02:07:27.451352514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:07:27.760641 containerd[1627]: time="2026-01-28T02:07:27.760275771Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:27.762122 containerd[1627]: time="2026-01-28T02:07:27.761969750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:07:27.762122 containerd[1627]: time="2026-01-28T02:07:27.762035148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:27.762313 kubelet[2843]: E0128 02:07:27.762236 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:07:27.764675 kubelet[2843]: E0128 02:07:27.762303 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:07:27.764675 kubelet[2843]: E0128 02:07:27.762525 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdd2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:27.764675 kubelet[2843]: E0128 02:07:27.764055 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:07:28.455412 containerd[1627]: 
time="2026-01-28T02:07:28.454437186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:07:28.774347 containerd[1627]: time="2026-01-28T02:07:28.774149493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:28.779964 containerd[1627]: time="2026-01-28T02:07:28.779896345Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:07:28.780102 containerd[1627]: time="2026-01-28T02:07:28.780033946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:07:28.780425 kubelet[2843]: E0128 02:07:28.780366 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:28.780878 kubelet[2843]: E0128 02:07:28.780449 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:07:28.780878 kubelet[2843]: E0128 02:07:28.780723 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:28.782494 kubelet[2843]: E0128 02:07:28.782440 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:07:30.452203 containerd[1627]: time="2026-01-28T02:07:30.452125138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:07:30.773924 containerd[1627]: time="2026-01-28T02:07:30.773505348Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:30.774717 containerd[1627]: time="2026-01-28T02:07:30.774669025Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:07:30.774819 containerd[1627]: time="2026-01-28T02:07:30.774775203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:07:30.775017 kubelet[2843]: E0128 02:07:30.774966 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:07:30.775715 kubelet[2843]: E0128 02:07:30.775035 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:07:30.775715 kubelet[2843]: E0128 02:07:30.775200 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:30.779167 containerd[1627]: time="2026-01-28T02:07:30.778911267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:07:31.098022 containerd[1627]: time="2026-01-28T02:07:31.097676383Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:07:31.099237 containerd[1627]: time="2026-01-28T02:07:31.099181457Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:07:31.099357 containerd[1627]: time="2026-01-28T02:07:31.099297616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:07:31.099795 kubelet[2843]: E0128 02:07:31.099521 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:07:31.099795 kubelet[2843]: E0128 02:07:31.099594 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:07:31.100021 kubelet[2843]: E0128 02:07:31.099951 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:07:31.101502 kubelet[2843]: E0128 02:07:31.101441 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:07:38.457356 kubelet[2843]: E0128 02:07:38.456509 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:07:39.862938 systemd[1]: Started sshd@7-10.230.50.62:22-68.220.241.50:55082.service - OpenSSH per-connection server daemon (68.220.241.50:55082). Jan 28 02:07:40.456088 kubelet[2843]: E0128 02:07:40.456026 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:07:40.457115 kubelet[2843]: E0128 02:07:40.456153 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:07:40.512073 sshd[5458]: Accepted publickey for core from 68.220.241.50 port 55082 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:07:40.513814 sshd[5458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:07:40.542069 systemd-logind[1601]: New session 10 of user core. Jan 28 02:07:40.553037 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 28 02:07:41.456681 kubelet[2843]: E0128 02:07:41.456601 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c"
Jan 28 02:07:41.628102 sshd[5458]: pam_unix(sshd:session): session closed for user core
Jan 28 02:07:41.638043 systemd[1]: sshd@7-10.230.50.62:22-68.220.241.50:55082.service: Deactivated successfully.
Jan 28 02:07:41.646867 systemd-logind[1601]: Session 10 logged out. Waiting for processes to exit.
Jan 28 02:07:41.647898 systemd[1]: session-10.scope: Deactivated successfully.
Jan 28 02:07:41.653727 systemd-logind[1601]: Removed session 10.
Jan 28 02:07:42.460770 kubelet[2843]: E0128 02:07:42.460662 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963"
Jan 28 02:07:42.463107 kubelet[2843]: E0128 02:07:42.462032 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:07:46.729935 systemd[1]: Started sshd@8-10.230.50.62:22-68.220.241.50:38890.service - OpenSSH per-connection server daemon (68.220.241.50:38890).
Jan 28 02:07:47.381958 sshd[5496]: Accepted publickey for core from 68.220.241.50 port 38890 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:07:47.384899 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:07:47.394189 systemd-logind[1601]: New session 11 of user core.
Jan 28 02:07:47.401052 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 28 02:07:48.065887 sshd[5496]: pam_unix(sshd:session): session closed for user core
Jan 28 02:07:48.070194 systemd[1]: sshd@8-10.230.50.62:22-68.220.241.50:38890.service: Deactivated successfully.
Jan 28 02:07:48.075889 systemd-logind[1601]: Session 11 logged out. Waiting for processes to exit.
Jan 28 02:07:48.076965 systemd[1]: session-11.scope: Deactivated successfully.
Jan 28 02:07:48.078728 systemd-logind[1601]: Removed session 11.
Jan 28 02:07:51.455030 containerd[1627]: time="2026-01-28T02:07:51.452643272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 28 02:07:51.804614 containerd[1627]: time="2026-01-28T02:07:51.804291902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:51.806170 containerd[1627]: time="2026-01-28T02:07:51.806086570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 28 02:07:51.806336 containerd[1627]: time="2026-01-28T02:07:51.806117036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 28 02:07:51.807581 kubelet[2843]: E0128 02:07:51.806618 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 02:07:51.807581 kubelet[2843]: E0128 02:07:51.806737 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 28 02:07:51.807581 kubelet[2843]: E0128 02:07:51.807134 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:51.809570 kubelet[2843]: E0128 02:07:51.808960 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405"
Jan 28 02:07:52.451686 containerd[1627]: time="2026-01-28T02:07:52.451629059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 02:07:52.793537 containerd[1627]: time="2026-01-28T02:07:52.793307197Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:52.794515 containerd[1627]: time="2026-01-28T02:07:52.794459414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 02:07:52.794643 containerd[1627]: time="2026-01-28T02:07:52.794573688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 02:07:52.795607 kubelet[2843]: E0128 02:07:52.794870 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 02:07:52.795607 kubelet[2843]: E0128 02:07:52.794953 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 02:07:52.796725 kubelet[2843]: E0128 02:07:52.796631 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhs92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:52.798186 kubelet[2843]: E0128 02:07:52.798058 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c"
Jan 28 02:07:53.159943 systemd[1]: Started sshd@9-10.230.50.62:22-68.220.241.50:33058.service - OpenSSH per-connection server daemon (68.220.241.50:33058).
Jan 28 02:07:53.463460 containerd[1627]: time="2026-01-28T02:07:53.462005879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 28 02:07:53.734295 sshd[5510]: Accepted publickey for core from 68.220.241.50 port 33058 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:07:53.738356 sshd[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:07:53.751048 systemd-logind[1601]: New session 12 of user core.
Jan 28 02:07:53.757341 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 28 02:07:53.796725 containerd[1627]: time="2026-01-28T02:07:53.796647311Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:53.798459 containerd[1627]: time="2026-01-28T02:07:53.798403005Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 28 02:07:53.798615 containerd[1627]: time="2026-01-28T02:07:53.798511077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 28 02:07:53.801018 kubelet[2843]: E0128 02:07:53.798984 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 02:07:53.801018 kubelet[2843]: E0128 02:07:53.799096 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 28 02:07:53.801018 kubelet[2843]: E0128 02:07:53.799410 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdd2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:53.802599 kubelet[2843]: E0128 02:07:53.802320 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db"
Jan 28 02:07:54.232674 sshd[5510]: pam_unix(sshd:session): session closed for user core
Jan 28 02:07:54.238397 systemd[1]: sshd@9-10.230.50.62:22-68.220.241.50:33058.service: Deactivated successfully.
Jan 28 02:07:54.243776 systemd[1]: session-12.scope: Deactivated successfully.
Jan 28 02:07:54.245734 systemd-logind[1601]: Session 12 logged out. Waiting for processes to exit.
Jan 28 02:07:54.248038 systemd-logind[1601]: Removed session 12.
Jan 28 02:07:54.331938 systemd[1]: Started sshd@10-10.230.50.62:22-68.220.241.50:33064.service - OpenSSH per-connection server daemon (68.220.241.50:33064).
Jan 28 02:07:54.457505 containerd[1627]: time="2026-01-28T02:07:54.457131582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 28 02:07:54.794485 containerd[1627]: time="2026-01-28T02:07:54.794418179Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:54.796098 containerd[1627]: time="2026-01-28T02:07:54.795970027Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 28 02:07:54.796098 containerd[1627]: time="2026-01-28T02:07:54.796046417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 28 02:07:54.798304 kubelet[2843]: E0128 02:07:54.796282 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 02:07:54.798304 kubelet[2843]: E0128 02:07:54.796389 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 28 02:07:54.798304 kubelet[2843]: E0128 02:07:54.796685 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:54.806110 containerd[1627]: time="2026-01-28T02:07:54.806078468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 28 02:07:54.904394 sshd[5530]: Accepted publickey for core from 68.220.241.50 port 33064 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:07:54.907066 sshd[5530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:07:54.919279 systemd-logind[1601]: New session 13 of user core.
Jan 28 02:07:54.926017 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 28 02:07:55.207045 containerd[1627]: time="2026-01-28T02:07:55.206908648Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:55.208507 containerd[1627]: time="2026-01-28T02:07:55.208455464Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 28 02:07:55.208703 containerd[1627]: time="2026-01-28T02:07:55.208646613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 28 02:07:55.209609 kubelet[2843]: E0128 02:07:55.209146 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 02:07:55.209609 kubelet[2843]: E0128 02:07:55.209267 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 28 02:07:55.212949 kubelet[2843]: E0128 02:07:55.209698 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:55.212949 kubelet[2843]: E0128 02:07:55.212815 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:07:55.456118 containerd[1627]: time="2026-01-28T02:07:55.455998605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 02:07:55.510357 sshd[5530]: pam_unix(sshd:session): session closed for user core
Jan 28 02:07:55.516621 systemd[1]: sshd@10-10.230.50.62:22-68.220.241.50:33064.service: Deactivated successfully.
Jan 28 02:07:55.523766 systemd[1]: session-13.scope: Deactivated successfully.
Jan 28 02:07:55.525547 systemd-logind[1601]: Session 13 logged out. Waiting for processes to exit.
Jan 28 02:07:55.527117 systemd-logind[1601]: Removed session 13.
Jan 28 02:07:55.614893 systemd[1]: Started sshd@11-10.230.50.62:22-68.220.241.50:33072.service - OpenSSH per-connection server daemon (68.220.241.50:33072).
Jan 28 02:07:55.767986 containerd[1627]: time="2026-01-28T02:07:55.767720931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:55.768913 containerd[1627]: time="2026-01-28T02:07:55.768851610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 02:07:55.769083 containerd[1627]: time="2026-01-28T02:07:55.768888769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 28 02:07:55.769250 kubelet[2843]: E0128 02:07:55.769129 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 02:07:55.769250 kubelet[2843]: E0128 02:07:55.769222 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 02:07:55.769457 kubelet[2843]: E0128 02:07:55.769397 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb1d52d8e569481480e80c2c7a6f1cce,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:55.773005 containerd[1627]: time="2026-01-28T02:07:55.772792543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 02:07:56.119413 containerd[1627]: time="2026-01-28T02:07:56.119113062Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:56.121114 containerd[1627]: time="2026-01-28T02:07:56.120922361Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 02:07:56.121114 containerd[1627]: time="2026-01-28T02:07:56.121056718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 28 02:07:56.121422 kubelet[2843]: E0128 02:07:56.121366 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 02:07:56.122121 kubelet[2843]: E0128 02:07:56.121430 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 02:07:56.122121 kubelet[2843]: E0128 02:07:56.121654 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:56.123654 kubelet[2843]: E0128 02:07:56.123537 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d"
Jan 28 02:07:56.257254 sshd[5544]: Accepted publickey for core from 68.220.241.50 port 33072 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:07:56.260343 sshd[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:07:56.268232 systemd-logind[1601]: New session 14 of user core.
Jan 28 02:07:56.275244 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 28 02:07:56.452732 containerd[1627]: time="2026-01-28T02:07:56.451444791Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 28 02:07:56.815318 sshd[5544]: pam_unix(sshd:session): session closed for user core
Jan 28 02:07:56.820928 systemd[1]: sshd@11-10.230.50.62:22-68.220.241.50:33072.service: Deactivated successfully.
Jan 28 02:07:56.826689 systemd-logind[1601]: Session 14 logged out. Waiting for processes to exit.
Jan 28 02:07:56.827228 systemd[1]: session-14.scope: Deactivated successfully.
Jan 28 02:07:56.830942 systemd-logind[1601]: Removed session 14.
Jan 28 02:07:56.838833 containerd[1627]: time="2026-01-28T02:07:56.838706307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:07:56.840320 containerd[1627]: time="2026-01-28T02:07:56.840246518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 28 02:07:56.840404 containerd[1627]: time="2026-01-28T02:07:56.840355645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 28 02:07:56.841577 kubelet[2843]: E0128 02:07:56.840681 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 02:07:56.841577 kubelet[2843]: E0128 02:07:56.840775 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 28 02:07:56.841577 kubelet[2843]: E0128 02:07:56.840986 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:07:56.842743 kubelet[2843]: E0128 02:07:56.842652 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963"
Jan 28 02:08:01.924902 systemd[1]: Started sshd@12-10.230.50.62:22-68.220.241.50:33074.service - OpenSSH per-connection server daemon (68.220.241.50:33074).
Jan 28 02:08:02.558146 sshd[5560]: Accepted publickey for core from 68.220.241.50 port 33074 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:08:02.560240 sshd[5560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:08:02.567196 systemd-logind[1601]: New session 15 of user core.
Jan 28 02:08:02.571968 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 28 02:08:03.072642 sshd[5560]: pam_unix(sshd:session): session closed for user core
Jan 28 02:08:03.077984 systemd[1]: sshd@12-10.230.50.62:22-68.220.241.50:33074.service: Deactivated successfully.
Jan 28 02:08:03.083647 systemd[1]: session-15.scope: Deactivated successfully.
Jan 28 02:08:03.084954 systemd-logind[1601]: Session 15 logged out. Waiting for processes to exit.
Jan 28 02:08:03.086704 systemd-logind[1601]: Removed session 15.
Jan 28 02:08:04.454990 kubelet[2843]: E0128 02:08:04.454914 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405"
Jan 28 02:08:05.452913 kubelet[2843]: E0128 02:08:05.452781 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c"
Jan 28 02:08:07.452923 kubelet[2843]: E0128 02:08:07.452739 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db"
Jan 28 02:08:08.165865 systemd[1]: Started sshd@13-10.230.50.62:22-68.220.241.50:55410.service - OpenSSH per-connection server daemon (68.220.241.50:55410).
Jan 28 02:08:08.456145 kubelet[2843]: E0128 02:08:08.455731 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:08:08.456145 kubelet[2843]: E0128 02:08:08.455934 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:08:08.749761 sshd[5574]: Accepted publickey for core from 68.220.241.50 port 55410 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:08.753264 sshd[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:08.761286 systemd-logind[1601]: New session 16 of user core. Jan 28 02:08:08.770102 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 28 02:08:09.246125 sshd[5574]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:09.251364 systemd-logind[1601]: Session 16 logged out. Waiting for processes to exit. Jan 28 02:08:09.252499 systemd[1]: sshd@13-10.230.50.62:22-68.220.241.50:55410.service: Deactivated successfully. Jan 28 02:08:09.259542 systemd[1]: session-16.scope: Deactivated successfully. Jan 28 02:08:09.261406 systemd-logind[1601]: Removed session 16. 
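[Annotation] Once a pull has failed, kubelet stops retrying on every sync and reports ImagePullBackOff instead, retrying on an exponential schedule; for multi-container pods (calico-csi with its registrar, whisker with whisker-backend) it aggregates one entry per container into a single "Error syncing pod" line. A sketch of the backoff shape — the 10s initial delay doubling to a 300s cap is kubelet's commonly documented default, assumed here rather than read from this log, and it matches the roughly five-minute cadence of the repeats below:

// backoff_shape.go — illustrative only: the assumed kubelet image-pull
// backoff (10s initial delay, doubling per failure, capped at 300s).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 300*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying the pull\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // steady state: one real pull attempt ~every 5 minutes
		}
	}
}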
Jan 28 02:08:11.453153 kubelet[2843]: E0128 02:08:11.452859 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:08:14.347872 systemd[1]: Started sshd@14-10.230.50.62:22-68.220.241.50:39660.service - OpenSSH per-connection server daemon (68.220.241.50:39660). Jan 28 02:08:14.950621 sshd[5614]: Accepted publickey for core from 68.220.241.50 port 39660 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:14.952885 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:14.962806 systemd-logind[1601]: New session 17 of user core. Jan 28 02:08:14.967116 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 28 02:08:15.509208 sshd[5614]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:15.518443 systemd[1]: sshd@14-10.230.50.62:22-68.220.241.50:39660.service: Deactivated successfully. Jan 28 02:08:15.522898 systemd[1]: session-17.scope: Deactivated successfully. Jan 28 02:08:15.524639 systemd-logind[1601]: Session 17 logged out. Waiting for processes to exit. Jan 28 02:08:15.526755 systemd-logind[1601]: Removed session 17. Jan 28 02:08:16.452375 kubelet[2843]: E0128 02:08:16.452288 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:08:18.451876 kubelet[2843]: E0128 02:08:18.451345 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:08:20.605908 systemd[1]: Started sshd@15-10.230.50.62:22-68.220.241.50:39670.service - OpenSSH per-connection server daemon (68.220.241.50:39670). Jan 28 02:08:21.200511 sshd[5630]: Accepted publickey for core from 68.220.241.50 port 39670 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:21.202860 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:21.211353 systemd-logind[1601]: New session 18 of user core. 
Jan 28 02:08:21.221021 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 28 02:08:21.453159 kubelet[2843]: E0128 02:08:21.451854 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:08:21.702033 sshd[5630]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:21.706662 systemd[1]: sshd@15-10.230.50.62:22-68.220.241.50:39670.service: Deactivated successfully. Jan 28 02:08:21.711013 systemd-logind[1601]: Session 18 logged out. Waiting for processes to exit. Jan 28 02:08:21.712112 systemd[1]: session-18.scope: Deactivated successfully. Jan 28 02:08:21.714990 systemd-logind[1601]: Removed session 18. Jan 28 02:08:23.455243 kubelet[2843]: E0128 02:08:23.455191 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:08:23.456869 kubelet[2843]: E0128 02:08:23.455314 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:08:23.460681 kubelet[2843]: E0128 02:08:23.458928 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:08:26.801972 systemd[1]: Started sshd@16-10.230.50.62:22-68.220.241.50:43408.service - OpenSSH per-connection server daemon (68.220.241.50:43408). Jan 28 02:08:27.377653 sshd[5645]: Accepted publickey for core from 68.220.241.50 port 43408 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:27.380727 sshd[5645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:27.387669 systemd-logind[1601]: New session 19 of user core. Jan 28 02:08:27.396673 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 28 02:08:27.876102 sshd[5645]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:27.881881 systemd[1]: sshd@16-10.230.50.62:22-68.220.241.50:43408.service: Deactivated successfully. Jan 28 02:08:27.886570 systemd[1]: session-19.scope: Deactivated successfully. Jan 28 02:08:27.887919 systemd-logind[1601]: Session 19 logged out. Waiting for processes to exit. Jan 28 02:08:27.889410 systemd-logind[1601]: Removed session 19. Jan 28 02:08:29.449692 kubelet[2843]: E0128 02:08:29.449616 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:08:31.450684 kubelet[2843]: E0128 02:08:31.450597 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:08:32.979937 systemd[1]: Started sshd@17-10.230.50.62:22-68.220.241.50:34924.service - OpenSSH per-connection server daemon (68.220.241.50:34924). Jan 28 02:08:33.563974 sshd[5659]: Accepted publickey for core from 68.220.241.50 port 34924 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:33.564917 sshd[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:33.571982 systemd-logind[1601]: New session 20 of user core. Jan 28 02:08:33.578035 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 28 02:08:34.076032 sshd[5659]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:34.082775 systemd[1]: sshd@17-10.230.50.62:22-68.220.241.50:34924.service: Deactivated successfully. 
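[Annotation] containerd's "trying next host" message comes from its docker resolver, which walks every host configured for a registry (hosts.toml mirrors first, the registry itself last) and logs that line each time a host answers 404; with ghcr.io as the only host, resolution fails outright. A sketch reproducing the same lookup out-of-band with containerd's resolver package (the containerd 1.x module path is an assumption; 2.x reorganizes it):

// resolve_ref.go — hypothetical reproduction of the failing resolution
// using containerd's docker resolver.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd/remotes/docker"
)

func main() {
	resolver := docker.NewResolver(docker.ResolverOptions{})
	name, desc, err := resolver.Resolve(context.Background(),
		"ghcr.io/flatcar/calico/csi:v3.30.4")
	if err != nil {
		// For a missing tag this prints a "not found" error, the same
		// condition the kubelet log renders as gRPC code = NotFound.
		fmt.Println("resolve failed:", err)
		return
	}
	fmt.Println("resolved", name, "digest", desc.Digest)
}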
Jan 28 02:08:34.089466 systemd[1]: session-20.scope: Deactivated successfully. Jan 28 02:08:34.091663 systemd-logind[1601]: Session 20 logged out. Waiting for processes to exit. Jan 28 02:08:34.095673 systemd-logind[1601]: Removed session 20. Jan 28 02:08:34.454269 kubelet[2843]: E0128 02:08:34.454140 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:08:34.456260 kubelet[2843]: E0128 02:08:34.456119 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d" Jan 28 02:08:35.454759 containerd[1627]: time="2026-01-28T02:08:35.453145445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 28 02:08:35.799043 containerd[1627]: time="2026-01-28T02:08:35.798662880Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:35.800582 containerd[1627]: time="2026-01-28T02:08:35.800482317Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 28 02:08:35.800756 containerd[1627]: time="2026-01-28T02:08:35.800661173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 28 02:08:35.801699 kubelet[2843]: E0128 02:08:35.801053 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:08:35.801699 kubelet[2843]: E0128 02:08:35.801204 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 28 02:08:35.801699 kubelet[2843]: E0128 02:08:35.801615 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:35.804094 containerd[1627]: time="2026-01-28T02:08:35.804020895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 28 02:08:36.136117 containerd[1627]: time="2026-01-28T02:08:36.135964346Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:36.137713 containerd[1627]: time="2026-01-28T02:08:36.137510721Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 28 02:08:36.137713 containerd[1627]: time="2026-01-28T02:08:36.137611999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 28 02:08:36.138083 kubelet[2843]: E0128 02:08:36.137991 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:08:36.138244 kubelet[2843]: E0128 02:08:36.138152 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 28 02:08:36.138485 kubelet[2843]: E0128 02:08:36.138422 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9l69f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dgqzm_calico-system(11c042ea-f3ed-451b-a4e5-0f06212804a3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:36.140109 kubelet[2843]: E0128 02:08:36.140025 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3" Jan 28 02:08:36.455691 containerd[1627]: time="2026-01-28T02:08:36.455233207Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 28 02:08:36.774139 containerd[1627]: time="2026-01-28T02:08:36.773878553Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:36.775742 containerd[1627]: time="2026-01-28T02:08:36.775680335Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 28 02:08:36.775885 containerd[1627]: time="2026-01-28T02:08:36.775836287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 28 02:08:36.776227 kubelet[2843]: E0128 02:08:36.776137 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:08:36.776388 kubelet[2843]: E0128 02:08:36.776243 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 28 02:08:36.777771 kubelet[2843]: E0128 02:08:36.776525 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wdd2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qjnq7_calico-system(8b2fed1f-0989-4c8f-98b5-dfc06958e7db): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:36.778488 kubelet[2843]: E0128 02:08:36.778453 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db" Jan 28 02:08:39.180997 systemd[1]: Started 
sshd@18-10.230.50.62:22-68.220.241.50:34934.service - OpenSSH per-connection server daemon (68.220.241.50:34934). Jan 28 02:08:39.773714 sshd[5681]: Accepted publickey for core from 68.220.241.50 port 34934 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:39.777045 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:39.792716 systemd-logind[1601]: New session 21 of user core. Jan 28 02:08:39.798461 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 28 02:08:40.294982 sshd[5681]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:40.301277 systemd[1]: sshd@18-10.230.50.62:22-68.220.241.50:34934.service: Deactivated successfully. Jan 28 02:08:40.306032 systemd-logind[1601]: Session 21 logged out. Waiting for processes to exit. Jan 28 02:08:40.306882 systemd[1]: session-21.scope: Deactivated successfully. Jan 28 02:08:40.310069 systemd-logind[1601]: Removed session 21. Jan 28 02:08:40.392046 systemd[1]: Started sshd@19-10.230.50.62:22-68.220.241.50:34950.service - OpenSSH per-connection server daemon (68.220.241.50:34950). Jan 28 02:08:40.977360 sshd[5695]: Accepted publickey for core from 68.220.241.50 port 34950 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:40.979862 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:40.987480 systemd-logind[1601]: New session 22 of user core. Jan 28 02:08:40.993005 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 28 02:08:41.452072 containerd[1627]: time="2026-01-28T02:08:41.451543763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 28 02:08:41.792227 containerd[1627]: time="2026-01-28T02:08:41.792058678Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:41.793333 containerd[1627]: time="2026-01-28T02:08:41.793244268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 28 02:08:41.793444 containerd[1627]: time="2026-01-28T02:08:41.793363719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 28 02:08:41.793827 kubelet[2843]: E0128 02:08:41.793690 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:08:41.794376 kubelet[2843]: E0128 02:08:41.793886 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 28 02:08:41.795620 kubelet[2843]: E0128 02:08:41.794849 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zb9wz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5c95698587-q576f_calico-system(1159a5fb-a1ac-4f76-832d-c5be127c9405): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:41.799670 kubelet[2843]: E0128 02:08:41.799122 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:08:41.955017 sshd[5695]: pam_unix(sshd:session): session closed for user core Jan 28 
02:08:41.964774 systemd[1]: sshd@19-10.230.50.62:22-68.220.241.50:34950.service: Deactivated successfully. Jan 28 02:08:41.970151 systemd[1]: session-22.scope: Deactivated successfully. Jan 28 02:08:41.970384 systemd-logind[1601]: Session 22 logged out. Waiting for processes to exit. Jan 28 02:08:41.973275 systemd-logind[1601]: Removed session 22. Jan 28 02:08:42.059921 systemd[1]: Started sshd@20-10.230.50.62:22-68.220.241.50:34964.service - OpenSSH per-connection server daemon (68.220.241.50:34964). Jan 28 02:08:42.661108 sshd[5729]: Accepted publickey for core from 68.220.241.50 port 34964 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:42.664188 sshd[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:42.672747 systemd-logind[1601]: New session 23 of user core. Jan 28 02:08:42.678167 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 28 02:08:44.046422 sshd[5729]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:44.053220 systemd[1]: sshd@20-10.230.50.62:22-68.220.241.50:34964.service: Deactivated successfully. Jan 28 02:08:44.059919 systemd-logind[1601]: Session 23 logged out. Waiting for processes to exit. Jan 28 02:08:44.061093 systemd[1]: session-23.scope: Deactivated successfully. Jan 28 02:08:44.064835 systemd-logind[1601]: Removed session 23. Jan 28 02:08:44.148879 systemd[1]: Started sshd@21-10.230.50.62:22-68.220.241.50:42742.service - OpenSSH per-connection server daemon (68.220.241.50:42742). Jan 28 02:08:44.453136 containerd[1627]: time="2026-01-28T02:08:44.453060678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:08:44.780056 sshd[5748]: Accepted publickey for core from 68.220.241.50 port 42742 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:44.785575 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:44.799917 systemd-logind[1601]: New session 24 of user core. Jan 28 02:08:44.809697 systemd[1]: Started session-24.scope - Session 24 of User core. 
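[Annotation] With six different images failing the same way, a tally is quicker than reading the journal linearly. A hypothetical triage helper (file name and input illustrative) that counts the distinct references behind the "failed to resolve reference" messages when journal text is piped in:

// tally_refs.go — hypothetical: pipe `journalctl -u kubelet` (or this
// excerpt) on stdin and count the image references that fail to resolve.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Journal lines escape quotes to \" or \\\", so allow any run of
	// backslashes before each quote.
	re := regexp.MustCompile(`failed to resolve reference \\*"([^"\\]+)\\*"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // these lines are long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for ref, n := range counts {
		fmt.Printf("%4d  %s\n", n, ref)
	}
}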
Jan 28 02:08:44.823431 containerd[1627]: time="2026-01-28T02:08:44.822420153Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:44.825094 containerd[1627]: time="2026-01-28T02:08:44.825025527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:08:44.825391 containerd[1627]: time="2026-01-28T02:08:44.825055744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:08:44.826651 kubelet[2843]: E0128 02:08:44.825593 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:08:44.826651 kubelet[2843]: E0128 02:08:44.825705 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:08:44.826651 kubelet[2843]: E0128 02:08:44.825987 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhs92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-w9b4h_calico-apiserver(91d0c55e-7a98-404e-a4c2-3c6f8edba99c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:44.827818 kubelet[2843]: E0128 02:08:44.827365 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:08:45.627018 sshd[5748]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:45.638881 systemd[1]: sshd@21-10.230.50.62:22-68.220.241.50:42742.service: Deactivated successfully. Jan 28 02:08:45.645127 systemd-logind[1601]: Session 24 logged out. Waiting for processes to exit. Jan 28 02:08:45.645837 systemd[1]: session-24.scope: Deactivated successfully. Jan 28 02:08:45.649004 systemd-logind[1601]: Removed session 24. Jan 28 02:08:45.742258 systemd[1]: Started sshd@22-10.230.50.62:22-68.220.241.50:42750.service - OpenSSH per-connection server daemon (68.220.241.50:42750). Jan 28 02:08:46.416644 sshd[5760]: Accepted publickey for core from 68.220.241.50 port 42750 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:46.418344 sshd[5760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:46.431260 systemd-logind[1601]: New session 25 of user core. Jan 28 02:08:46.437724 systemd[1]: Started session-25.scope - Session 25 of User core. 
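[Annotation] The "Port:{0 5443 }" fragment in the dumped readiness probe above (and repeated in the next dump) is not corruption; it is Go's default rendering of Kubernetes' IntOrString union, where Type 0 selects the integer form and 5443 is the port. A two-line demonstration with the apimachinery type:

// intorstring.go — shows why the probe port prints as "{0 5443 }".
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	port := intstr.FromInt(5443)
	fmt.Printf("%v\n", port) // {0 5443 } — Type 0 (int), IntVal 5443, empty StrVal
}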
Jan 28 02:08:46.454640 containerd[1627]: time="2026-01-28T02:08:46.454226252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 28 02:08:46.832691 containerd[1627]: time="2026-01-28T02:08:46.831861315Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 28 02:08:46.834190 containerd[1627]: time="2026-01-28T02:08:46.833742424Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 28 02:08:46.834190 containerd[1627]: time="2026-01-28T02:08:46.833815522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 28 02:08:46.834725 kubelet[2843]: E0128 02:08:46.834281 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:08:46.834725 kubelet[2843]: E0128 02:08:46.834635 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 28 02:08:46.835422 kubelet[2843]: E0128 02:08:46.834961 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znfnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66dfb7f7f9-v4czv_calico-apiserver(9a4d006c-455e-43f1-8c29-a9bee0e4e963): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 28 02:08:46.839603 kubelet[2843]: E0128 02:08:46.838864 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963" Jan 28 02:08:47.084016 sshd[5760]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:47.097592 systemd[1]: sshd@22-10.230.50.62:22-68.220.241.50:42750.service: Deactivated successfully. Jan 28 02:08:47.102902 systemd-logind[1601]: Session 25 logged out. Waiting for processes to exit. Jan 28 02:08:47.110234 systemd[1]: session-25.scope: Deactivated successfully. Jan 28 02:08:47.112257 systemd-logind[1601]: Removed session 25. 
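[Annotation] Each "Unhandled Error" dump is kubelet printing the v1.Container it tried to start; the specs themselves are fine, only the image tags are unresolvable, so the fix is to point the workloads at a tag that exists (or publish v3.30.4 to ghcr.io/flatcar). A hypothetical client-go sketch of such a patch — the Deployment name, namespace, and replacement tag v3.30.3 are assumptions, and in a Tigera-operator-managed cluster the operator would revert manual edits, so treat this as illustrative only:

// patch_image.go — hypothetical remediation: repoint the calico-apiserver
// Deployment at an image tag assumed to exist.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Strategic merge: containers are merged by name, so only the image
	// of "calico-apiserver" changes.
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[` +
		`{"name":"calico-apiserver","image":"ghcr.io/flatcar/calico/apiserver:v3.30.3"}]}}}}`)
	_, err = cs.AppsV1().Deployments("calico-apiserver").Patch(
		context.Background(), "calico-apiserver",
		types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched calico-apiserver image tag")
}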
Jan 28 02:08:49.452954 kubelet[2843]: E0128 02:08:49.452775 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db"
Jan 28 02:08:49.459087 containerd[1627]: time="2026-01-28T02:08:49.455113211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 28 02:08:49.480611 kubelet[2843]: E0128 02:08:49.478228 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:08:49.836001 containerd[1627]: time="2026-01-28T02:08:49.835304913Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:08:49.838048 containerd[1627]: time="2026-01-28T02:08:49.837486462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 28 02:08:49.838048 containerd[1627]: time="2026-01-28T02:08:49.837527082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 28 02:08:49.838216 kubelet[2843]: E0128 02:08:49.838024 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 02:08:49.838319 kubelet[2843]: E0128 02:08:49.838204 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 28 02:08:49.839626 kubelet[2843]: E0128 02:08:49.838545 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:cb1d52d8e569481480e80c2c7a6f1cce,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:08:49.841007 containerd[1627]: time="2026-01-28T02:08:49.840953257Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 28 02:08:50.190874 containerd[1627]: time="2026-01-28T02:08:50.190793629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 28 02:08:50.193589 containerd[1627]: time="2026-01-28T02:08:50.193164536Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 28 02:08:50.193589 containerd[1627]: time="2026-01-28T02:08:50.193347427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 28 02:08:50.195840 kubelet[2843]: E0128 02:08:50.194044 2843 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 02:08:50.195840 kubelet[2843]: E0128 02:08:50.194116 2843 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 28 02:08:50.195840 kubelet[2843]: E0128 02:08:50.194277 2843 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7n6cj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-77894fccbf-hf9dn_calico-system(39b2c588-693e-480c-a4f1-3808ca50200d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 28 02:08:50.196469 kubelet[2843]: E0128 02:08:50.195720 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d"
Jan 28 02:08:52.182870 systemd[1]: Started sshd@23-10.230.50.62:22-68.220.241.50:42760.service - OpenSSH per-connection server daemon (68.220.241.50:42760).
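The containerd entries above show exactly where each pull dies: resolving the tag against ghcr.io comes back HTTP 404 ("trying next host - response was http.StatusNotFound"), so a reference like ghcr.io/flatcar/calico/whisker:v3.30.4 never resolves to a manifest digest and containerd reports NotFound up through the CRI. That resolve step can be reproduced off-node with nothing but the OCI distribution API. The sketch below is a hypothetical check, not part of this log; it assumes the repository allows anonymous pulls and uses ghcr.io's public token endpoint with only the Python standard library.

import json
import urllib.error
import urllib.request

# Hypothetical stand-in for containerd's resolve step: fetch an anonymous
# pull token from ghcr.io, then probe the manifest for the failing tag.
REPO = "flatcar/calico/whisker"  # taken from the failing reference above
TAG = "v3.30.4"

token_url = f"https://ghcr.io/token?service=ghcr.io&scope=repository:{REPO}:pull"
with urllib.request.urlopen(token_url) as resp:
    token = json.load(resp)["token"]

req = urllib.request.Request(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}", method="HEAD")
req.add_header("Authorization", f"Bearer {token}")
req.add_header("Accept", "application/vnd.oci.image.index.v1+json")
try:
    with urllib.request.urlopen(req) as resp:
        print("tag resolves, digest:", resp.headers.get("Docker-Content-Digest"))
except urllib.error.HTTPError as err:
    # A 404 here is what containerd logs above as http.StatusNotFound.
    print("registry answered", err.code)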
Jan 28 02:08:52.788684 sshd[5794]: Accepted publickey for core from 68.220.241.50 port 42760 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:52.791826 sshd[5794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:52.804484 systemd-logind[1601]: New session 26 of user core. Jan 28 02:08:52.808134 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 28 02:08:53.489947 sshd[5794]: pam_unix(sshd:session): session closed for user core Jan 28 02:08:53.502196 systemd[1]: sshd@23-10.230.50.62:22-68.220.241.50:42760.service: Deactivated successfully. Jan 28 02:08:53.509179 systemd-logind[1601]: Session 26 logged out. Waiting for processes to exit. Jan 28 02:08:53.510227 systemd[1]: session-26.scope: Deactivated successfully. Jan 28 02:08:53.514482 systemd-logind[1601]: Removed session 26. Jan 28 02:08:54.461512 kubelet[2843]: E0128 02:08:54.453652 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405" Jan 28 02:08:57.455377 kubelet[2843]: E0128 02:08:57.453563 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c" Jan 28 02:08:58.592082 systemd[1]: Started sshd@24-10.230.50.62:22-68.220.241.50:44328.service - OpenSSH per-connection server daemon (68.220.241.50:44328). Jan 28 02:08:59.216134 sshd[5813]: Accepted publickey for core from 68.220.241.50 port 44328 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY Jan 28 02:08:59.218707 sshd[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 28 02:08:59.231350 systemd-logind[1601]: New session 27 of user core. Jan 28 02:08:59.239233 systemd[1]: Started session-27.scope - Session 27 of User core. 
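Between the SSH sessions, the same pod_workers.go errors keep returning as ImagePullBackOff rather than fresh ErrImagePull entries: the kubelet only reattempts the pull once its back-off window has elapsed. Kubernetes documents that window as exponential, roughly 10s, 20s, 40s and so on, capped at five minutes. A toy model of that cadence follows; the constants are the documented defaults, assumed here rather than read from this node's configuration.

# Toy model of the kubelet's documented image-pull back-off:
# exponential from 10s, doubling each attempt, capped at five minutes.
BASE_S, FACTOR, CAP_S = 10, 2, 300

def backoff_delays(attempts: int) -> list[int]:
    """Delay in seconds before each successive pull retry."""
    delays, d = [], BASE_S
    for _ in range(attempts):
        delays.append(min(d, CAP_S))
        d *= FACTOR
    return delays

print(backoff_delays(8))  # [10, 20, 40, 80, 160, 300, 300, 300]

That schedule is consistent with the spacing of the repeated "Back-off pulling image" entries for the same pods in the minutes below.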
Jan 28 02:08:59.451696 kubelet[2843]: E0128 02:08:59.451197 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963"
Jan 28 02:09:00.021028 sshd[5813]: pam_unix(sshd:session): session closed for user core
Jan 28 02:09:00.034410 systemd[1]: sshd@24-10.230.50.62:22-68.220.241.50:44328.service: Deactivated successfully.
Jan 28 02:09:00.046690 systemd[1]: session-27.scope: Deactivated successfully.
Jan 28 02:09:00.053764 systemd-logind[1601]: Session 27 logged out. Waiting for processes to exit.
Jan 28 02:09:00.061383 systemd-logind[1601]: Removed session 27.
Jan 28 02:09:03.456820 kubelet[2843]: E0128 02:09:03.456611 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db"
Jan 28 02:09:03.461065 kubelet[2843]: E0128 02:09:03.460738 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
Jan 28 02:09:04.453049 kubelet[2843]: E0128 02:09:04.452683 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77894fccbf-hf9dn" podUID="39b2c588-693e-480c-a4f1-3808ca50200d"
Jan 28 02:09:05.117061 systemd[1]: Started sshd@25-10.230.50.62:22-68.220.241.50:36010.service - OpenSSH per-connection server daemon (68.220.241.50:36010).
Jan 28 02:09:05.700804 sshd[5829]: Accepted publickey for core from 68.220.241.50 port 36010 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:09:05.705231 sshd[5829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:09:05.722152 systemd-logind[1601]: New session 28 of user core.
Jan 28 02:09:05.732063 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 28 02:09:06.312075 sshd[5829]: pam_unix(sshd:session): session closed for user core
Jan 28 02:09:06.323953 systemd[1]: sshd@25-10.230.50.62:22-68.220.241.50:36010.service: Deactivated successfully.
Jan 28 02:09:06.333002 systemd-logind[1601]: Session 28 logged out. Waiting for processes to exit.
Jan 28 02:09:06.334804 systemd[1]: session-28.scope: Deactivated successfully.
Jan 28 02:09:06.341965 systemd-logind[1601]: Removed session 28.
Jan 28 02:09:09.452932 kubelet[2843]: E0128 02:09:09.452809 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5c95698587-q576f" podUID="1159a5fb-a1ac-4f76-832d-c5be127c9405"
Jan 28 02:09:10.457151 kubelet[2843]: E0128 02:09:10.456244 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-v4czv" podUID="9a4d006c-455e-43f1-8c29-a9bee0e4e963"
Jan 28 02:09:10.473607 kubelet[2843]: E0128 02:09:10.457004 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66dfb7f7f9-w9b4h" podUID="91d0c55e-7a98-404e-a4c2-3c6f8edba99c"
Jan 28 02:09:11.422917 systemd[1]: Started sshd@26-10.230.50.62:22-68.220.241.50:36012.service - OpenSSH per-connection server daemon (68.220.241.50:36012).
Jan 28 02:09:12.061060 sshd[5849]: Accepted publickey for core from 68.220.241.50 port 36012 ssh2: RSA SHA256:frmsa0hE1R2N3hYBQNo8mi4qiotKrmive+Tbm4AOdPY
Jan 28 02:09:12.065632 sshd[5849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 28 02:09:12.095005 systemd-logind[1601]: New session 29 of user core.
Jan 28 02:09:12.099688 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 28 02:09:12.655214 sshd[5849]: pam_unix(sshd:session): session closed for user core
Jan 28 02:09:12.665580 systemd[1]: sshd@26-10.230.50.62:22-68.220.241.50:36012.service: Deactivated successfully.
Jan 28 02:09:12.674744 systemd-logind[1601]: Session 29 logged out. Waiting for processes to exit.
Jan 28 02:09:12.675819 systemd[1]: session-29.scope: Deactivated successfully.
Jan 28 02:09:12.678895 systemd-logind[1601]: Removed session 29.
Jan 28 02:09:16.455050 kubelet[2843]: E0128 02:09:16.454907 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qjnq7" podUID="8b2fed1f-0989-4c8f-98b5-dfc06958e7db"
Jan 28 02:09:16.461105 kubelet[2843]: E0128 02:09:16.459244 2843 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dgqzm" podUID="11c042ea-f3ed-451b-a4e5-0f06212804a3"
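By this point every Calico image the node needs (goldmane, csi, node-driver-registrar, whisker, whisker-backend, kube-controllers, apiserver) is cycling through the same back-off on the same missing v3.30.4 tag. One way to enumerate the blast radius from outside the node is to walk container statuses through the Kubernetes API. The sketch below assumes the third-party kubernetes Python client package and a reachable kubeconfig; it is illustrative, not taken from this system.

# Sketch: list every container waiting the way the pods above are,
# i.e. with reason ImagePullBackOff or ErrImagePull.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    for cs in (pod.status.container_statuses or []):
        waiting = cs.state.waiting if cs.state else None
        if waiting and waiting.reason in ("ImagePullBackOff", "ErrImagePull"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}"
                  f" {cs.name}: {waiting.reason} image={cs.image}")

The same view is available interactively with kubectl get pods -A, where these pods report ImagePullBackOff in the STATUS column.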