Nov 8 01:14:02.045140 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 01:14:02.045250 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:14:02.045265 kernel: BIOS-provided physical RAM map:
Nov 8 01:14:02.045283 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 8 01:14:02.045293 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 8 01:14:02.045303 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 8 01:14:02.045315 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 8 01:14:02.045325 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 8 01:14:02.045336 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 8 01:14:02.045346 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 8 01:14:02.045356 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 8 01:14:02.045367 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 8 01:14:02.045382 kernel: NX (Execute Disable) protection: active
Nov 8 01:14:02.045393 kernel: APIC: Static calls initialized
Nov 8 01:14:02.045406 kernel: SMBIOS 2.8 present.
Nov 8 01:14:02.045417 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 8 01:14:02.045429 kernel: Hypervisor detected: KVM
Nov 8 01:14:02.045444 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 8 01:14:02.045456 kernel: kvm-clock: using sched offset of 4439856092 cycles
Nov 8 01:14:02.045468 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 8 01:14:02.045480 kernel: tsc: Detected 2499.998 MHz processor
Nov 8 01:14:02.045491 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 01:14:02.045503 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 01:14:02.045514 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 8 01:14:02.045526 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 8 01:14:02.045537 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 01:14:02.045553 kernel: Using GB pages for direct mapping
Nov 8 01:14:02.045565 kernel: ACPI: Early table checksum verification disabled
Nov 8 01:14:02.045576 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 8 01:14:02.045588 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045599 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045611 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045622 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 8 01:14:02.045634 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045645 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045661 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045673 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 8 01:14:02.045685 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 8 01:14:02.045709 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 8 01:14:02.045722 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 8 01:14:02.045740 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 8 01:14:02.045752 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 8 01:14:02.045769 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 8 01:14:02.045781 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 8 01:14:02.045793 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 01:14:02.045805 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 01:14:02.045816 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 8 01:14:02.045828 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 8 01:14:02.045840 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 8 01:14:02.045856 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 8 01:14:02.045868 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 8 01:14:02.045880 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 8 01:14:02.045892 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 8 01:14:02.045904 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 8 01:14:02.045915 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 8 01:14:02.045927 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 8 01:14:02.045939 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 8 01:14:02.045951 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 8 01:14:02.045962 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 8 01:14:02.045979 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 8 01:14:02.045991 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 8 01:14:02.046003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 8 01:14:02.046015 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 8 01:14:02.046027 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 8 01:14:02.046039 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 8 01:14:02.046051 kernel: Zone ranges:
Nov 8 01:14:02.046063 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 01:14:02.046075 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 8 01:14:02.046092 kernel: Normal empty
Nov 8 01:14:02.046104 kernel: Movable zone start for each node
Nov 8 01:14:02.046116 kernel: Early memory node ranges
Nov 8 01:14:02.046128 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 8 01:14:02.046140 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 8 01:14:02.046152 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 8 01:14:02.046164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 01:14:02.046190 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 8 01:14:02.046202 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 8 01:14:02.046214 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 8 01:14:02.046232 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 8 01:14:02.046244 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 8 01:14:02.046256 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 8 01:14:02.046268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 8 01:14:02.046280 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 01:14:02.046292 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 8 01:14:02.046304 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 8 01:14:02.046316 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 01:14:02.046328 kernel: TSC deadline timer available
Nov 8 01:14:02.046345 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 8 01:14:02.046357 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 8 01:14:02.046369 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 8 01:14:02.046381 kernel: Booting paravirtualized kernel on KVM
Nov 8 01:14:02.046393 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 01:14:02.046406 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 8 01:14:02.046418 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 8 01:14:02.046430 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 8 01:14:02.046442 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 8 01:14:02.046459 kernel: kvm-guest: PV spinlocks enabled
Nov 8 01:14:02.046471 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 01:14:02.046485 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 01:14:02.046497 kernel: random: crng init done
Nov 8 01:14:02.046509 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 8 01:14:02.046521 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 01:14:02.046533 kernel: Fallback order for Node 0: 0
Nov 8 01:14:02.046545 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 8 01:14:02.046562 kernel: Policy zone: DMA32
Nov 8 01:14:02.046574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 01:14:02.046586 kernel: software IO TLB: area num 16.
Nov 8 01:14:02.046598 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 194820K reserved, 0K cma-reserved)
Nov 8 01:14:02.046610 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 8 01:14:02.046623 kernel: Kernel/User page tables isolation: enabled
Nov 8 01:14:02.046635 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 01:14:02.046646 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 01:14:02.046658 kernel: Dynamic Preempt: voluntary
Nov 8 01:14:02.046676 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 01:14:02.046697 kernel: rcu: RCU event tracing is enabled.
Nov 8 01:14:02.046711 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 8 01:14:02.046724 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 01:14:02.046736 kernel: Rude variant of Tasks RCU enabled.
Nov 8 01:14:02.046762 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 01:14:02.046780 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 01:14:02.046793 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 8 01:14:02.046805 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 8 01:14:02.046818 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 01:14:02.046830 kernel: Console: colour VGA+ 80x25
Nov 8 01:14:02.046843 kernel: printk: console [tty0] enabled
Nov 8 01:14:02.046860 kernel: printk: console [ttyS0] enabled
Nov 8 01:14:02.046873 kernel: ACPI: Core revision 20230628
Nov 8 01:14:02.046886 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 01:14:02.046898 kernel: x2apic enabled
Nov 8 01:14:02.046911 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 8 01:14:02.046929 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 8 01:14:02.046942 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 8 01:14:02.046955 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 8 01:14:02.046967 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 8 01:14:02.046980 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 8 01:14:02.046992 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 01:14:02.047004 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 01:14:02.047017 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 01:14:02.047030 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 8 01:14:02.047042 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 8 01:14:02.047060 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 8 01:14:02.047072 kernel: MDS: Mitigation: Clear CPU buffers
Nov 8 01:14:02.047085 kernel: MMIO Stale Data: Unknown: No mitigations
Nov 8 01:14:02.047097 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 8 01:14:02.047109 kernel: active return thunk: its_return_thunk
Nov 8 01:14:02.047122 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 01:14:02.047134 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 01:14:02.047147 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 01:14:02.047159 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 01:14:02.050080 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 01:14:02.050099 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 8 01:14:02.050121 kernel: Freeing SMP alternatives memory: 32K
Nov 8 01:14:02.050133 kernel: pid_max: default: 32768 minimum: 301
Nov 8 01:14:02.050146 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 01:14:02.050159 kernel: landlock: Up and running.
Nov 8 01:14:02.050190 kernel: SELinux: Initializing.
Nov 8 01:14:02.050204 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 01:14:02.050217 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 8 01:14:02.050230 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Nov 8 01:14:02.050243 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:14:02.050256 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:14:02.050275 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 8 01:14:02.050289 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Nov 8 01:14:02.050301 kernel: signal: max sigframe size: 1776
Nov 8 01:14:02.050314 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 01:14:02.050328 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 01:14:02.050341 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 01:14:02.050354 kernel: smp: Bringing up secondary CPUs ...
Nov 8 01:14:02.050367 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 01:14:02.050379 kernel: .... node #0, CPUs: #1
Nov 8 01:14:02.050397 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 8 01:14:02.050410 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 01:14:02.050423 kernel: smpboot: Max logical packages: 16
Nov 8 01:14:02.050436 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 8 01:14:02.050449 kernel: devtmpfs: initialized
Nov 8 01:14:02.050462 kernel: x86/mm: Memory block size: 128MB
Nov 8 01:14:02.050474 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 01:14:02.050487 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 8 01:14:02.050500 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 01:14:02.050513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 01:14:02.050531 kernel: audit: initializing netlink subsys (disabled)
Nov 8 01:14:02.050544 kernel: audit: type=2000 audit(1762564440.258:1): state=initialized audit_enabled=0 res=1
Nov 8 01:14:02.050557 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 01:14:02.050570 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 01:14:02.050583 kernel: cpuidle: using governor menu
Nov 8 01:14:02.050595 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 01:14:02.050647 kernel: dca service started, version 1.12.1
Nov 8 01:14:02.050661 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 8 01:14:02.050680 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 8 01:14:02.050731 kernel: PCI: Using configuration type 1 for base access
Nov 8 01:14:02.050746 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 01:14:02.050759 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 01:14:02.050772 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 01:14:02.050784 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 01:14:02.050797 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 01:14:02.050810 kernel: ACPI: Added _OSI(Module Device)
Nov 8 01:14:02.050823 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 01:14:02.050842 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 01:14:02.050877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 01:14:02.050891 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 01:14:02.050904 kernel: ACPI: Interpreter enabled
Nov 8 01:14:02.050916 kernel: ACPI: PM: (supports S0 S5)
Nov 8 01:14:02.050929 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 01:14:02.050942 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 01:14:02.050955 kernel: PCI: Using E820 reservations for host bridge windows
Nov 8 01:14:02.050967 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 8 01:14:02.050986 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 8 01:14:02.051833 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 8 01:14:02.052029 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 8 01:14:02.052227 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 8 01:14:02.052247 kernel: PCI host bridge to bus 0000:00
Nov 8 01:14:02.052439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 8 01:14:02.052601 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 8 01:14:02.052782 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 8 01:14:02.052945 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 8 01:14:02.053102 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 8 01:14:02.055553 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 8 01:14:02.055738 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 8 01:14:02.055940 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 8 01:14:02.056139 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 8 01:14:02.056368 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 8 01:14:02.056544 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 8 01:14:02.056732 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 8 01:14:02.056911 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 8 01:14:02.057110 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.057311 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 8 01:14:02.057506 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.057685 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 8 01:14:02.057885 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.058063 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 8 01:14:02.061389 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.061596 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 8 01:14:02.061836 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.062032 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 8 01:14:02.062306 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.062486 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 8 01:14:02.062682 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.062907 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 8 01:14:02.063099 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 8 01:14:02.065428 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 8 01:14:02.065634 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 8 01:14:02.065827 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 8 01:14:02.066003 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 8 01:14:02.066194 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 8 01:14:02.066373 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 8 01:14:02.066569 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 8 01:14:02.066761 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 8 01:14:02.066936 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 8 01:14:02.067110 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 8 01:14:02.070731 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 8 01:14:02.070933 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 8 01:14:02.071120 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 8 01:14:02.071326 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 8 01:14:02.071500 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 8 01:14:02.071682 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 8 01:14:02.071867 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 8 01:14:02.072060 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 8 01:14:02.072271 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 8 01:14:02.072460 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 8 01:14:02.072639 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 8 01:14:02.072846 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 01:14:02.073095 kernel: pci_bus 0000:02: extended config space not accessible
Nov 8 01:14:02.073337 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 8 01:14:02.073545 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 8 01:14:02.073766 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 8 01:14:02.073966 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 01:14:02.076187 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 8 01:14:02.076445 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 8 01:14:02.076636 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 8 01:14:02.076831 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 01:14:02.077055 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 01:14:02.077445 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 8 01:14:02.077652 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 8 01:14:02.077844 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 8 01:14:02.078015 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 01:14:02.078201 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 01:14:02.078375 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 8 01:14:02.078547 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 01:14:02.078731 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 01:14:02.078916 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 8 01:14:02.079088 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 01:14:02.079598 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 01:14:02.079798 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 8 01:14:02.079971 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 01:14:02.080142 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 01:14:02.080333 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 8 01:14:02.080507 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 01:14:02.080699 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 01:14:02.080880 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 8 01:14:02.081053 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 01:14:02.081256 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 01:14:02.081277 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 8 01:14:02.081291 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 8 01:14:02.081304 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 8 01:14:02.081317 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 8 01:14:02.081330 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 8 01:14:02.081351 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 8 01:14:02.081364 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 8 01:14:02.081378 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 8 01:14:02.081390 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 8 01:14:02.081403 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 8 01:14:02.081416 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 8 01:14:02.081429 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 8 01:14:02.081442 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 8 01:14:02.081454 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 8 01:14:02.081472 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 8 01:14:02.081485 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 8 01:14:02.081498 kernel: iommu: Default domain type: Translated
Nov 8 01:14:02.081511 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 01:14:02.081524 kernel: PCI: Using ACPI for IRQ routing
Nov 8 01:14:02.081537 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 8 01:14:02.081549 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 8 01:14:02.081562 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 8 01:14:02.081747 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 8 01:14:02.081929 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 8 01:14:02.082105 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 8 01:14:02.082125 kernel: vgaarb: loaded
Nov 8 01:14:02.082138 kernel: clocksource: Switched to clocksource kvm-clock
Nov 8 01:14:02.082151 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 01:14:02.082164 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 01:14:02.082226 kernel: pnp: PnP ACPI init
Nov 8 01:14:02.082421 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 8 01:14:02.082450 kernel: pnp: PnP ACPI: found 5 devices
Nov 8 01:14:02.082463 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 01:14:02.082476 kernel: NET: Registered PF_INET protocol family
Nov 8 01:14:02.082489 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 8 01:14:02.082502 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 8 01:14:02.082515 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 01:14:02.082528 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 01:14:02.082541 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 8 01:14:02.082559 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 8 01:14:02.082572 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 01:14:02.082586 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 8 01:14:02.082599 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 01:14:02.082611 kernel: NET: Registered PF_XDP protocol family
Nov 8 01:14:02.082795 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 8 01:14:02.082968 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 8 01:14:02.083139 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 8 01:14:02.083352 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 8 01:14:02.083524 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 8 01:14:02.083707 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 8 01:14:02.083884 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 8 01:14:02.084056 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 8 01:14:02.084285 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 8 01:14:02.084465 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 8 01:14:02.084634 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 8 01:14:02.084819 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 8 01:14:02.084989 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 8 01:14:02.085158 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 8 01:14:02.085356 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 8 01:14:02.085527 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 8 01:14:02.085723 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 8 01:14:02.085932 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 8 01:14:02.086108 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 8 01:14:02.086321 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 8 01:14:02.086492 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 8 01:14:02.086660 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 01:14:02.086860 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 8 01:14:02.087032 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 8 01:14:02.087231 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 8 01:14:02.087406 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 01:14:02.087586 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 8 01:14:02.087777 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 8 01:14:02.087958 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 8 01:14:02.088142 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 01:14:02.088358 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 8 01:14:02.088538 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 8 01:14:02.088721 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 8 01:14:02.088893 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 01:14:02.089064 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 8 01:14:02.089264 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 8 01:14:02.089435 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 8 01:14:02.089615 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 01:14:02.089804 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 8 01:14:02.089977 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 8 01:14:02.090158 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 8 01:14:02.090377 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 01:14:02.090548 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 8 01:14:02.090730 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 8 01:14:02.090901 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 8 01:14:02.091079 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 01:14:02.091283 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 8 01:14:02.091455 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 8 01:14:02.091629 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 8 01:14:02.091813 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 01:14:02.091985 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 8 01:14:02.092143 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 8 01:14:02.092343 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 8 01:14:02.092499 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 8 01:14:02.092661 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 8 01:14:02.092829 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 8 01:14:02.093004 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 8 01:14:02.093189 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 8 01:14:02.093408 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 8 01:14:02.093584 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 8 01:14:02.093786 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 8 01:14:02.093961 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 8 01:14:02.094126 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 8 01:14:02.094345 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 8 01:14:02.094508 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 8 01:14:02.094668 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 8 01:14:02.094850 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 8 01:14:02.095022 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 8 01:14:02.095210 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 8 01:14:02.095405 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 8 01:14:02.095571 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 8 01:14:02.095766 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 8 01:14:02.095938 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 8 01:14:02.096103 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 8 01:14:02.096319 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 8 01:14:02.096490 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 8 01:14:02.096651 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 8 01:14:02.096890 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 8 01:14:02.097063 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 8 01:14:02.097266 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 8 01:14:02.097446 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 8 01:14:02.097476 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 8 01:14:02.097490 kernel: PCI: CLS 0 bytes, default 64
Nov 8 01:14:02.097504 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 01:14:02.097518 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 8 01:14:02.097532 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 01:14:02.097545 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 8 01:14:02.097559 kernel: Initialise system trusted keyrings Nov 8 01:14:02.097573 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 8 01:14:02.097592 kernel: Key type asymmetric registered Nov 8 01:14:02.097605 kernel: Asymmetric key parser 'x509' registered Nov 8 01:14:02.097618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 01:14:02.097632 kernel: io scheduler mq-deadline registered Nov 8 01:14:02.097646 kernel: io scheduler kyber registered Nov 8 01:14:02.097659 kernel: io scheduler bfq registered Nov 8 01:14:02.097852 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 8 01:14:02.098028 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 8 01:14:02.098266 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.098451 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 8 01:14:02.098623 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 8 01:14:02.098811 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.098985 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 8 01:14:02.099156 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 8 01:14:02.099356 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.099594 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 8 01:14:02.099791 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 8 01:14:02.099967 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.100140 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Nov 8 01:14:02.100367 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 8 01:14:02.100540 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.100746 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 8 01:14:02.100931 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 8 01:14:02.101102 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.101306 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 8 01:14:02.101492 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 8 01:14:02.101664 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.101859 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 8 01:14:02.102053 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 8 01:14:02.102257 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 8 01:14:02.102279 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 01:14:02.102294 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 8 01:14:02.102308 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 8 01:14:02.102322 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 01:14:02.102343 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 01:14:02.102357 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 8 01:14:02.102371 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 8 01:14:02.102384 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 8 01:14:02.102398 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 
Nov 8 01:14:02.102581 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 8 01:14:02.102768 kernel: rtc_cmos 00:03: registered as rtc0 Nov 8 01:14:02.102936 kernel: rtc_cmos 00:03: setting system clock to 2025-11-08T01:14:01 UTC (1762564441) Nov 8 01:14:02.103111 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 8 01:14:02.103131 kernel: intel_pstate: CPU model not supported Nov 8 01:14:02.103145 kernel: NET: Registered PF_INET6 protocol family Nov 8 01:14:02.103158 kernel: Segment Routing with IPv6 Nov 8 01:14:02.103220 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 01:14:02.103235 kernel: NET: Registered PF_PACKET protocol family Nov 8 01:14:02.103249 kernel: Key type dns_resolver registered Nov 8 01:14:02.103262 kernel: IPI shorthand broadcast: enabled Nov 8 01:14:02.103276 kernel: sched_clock: Marking stable (1218004245, 261799613)->(1751572886, -271769028) Nov 8 01:14:02.103297 kernel: registered taskstats version 1 Nov 8 01:14:02.103311 kernel: Loading compiled-in X.509 certificates Nov 8 01:14:02.103325 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 01:14:02.103338 kernel: Key type .fscrypt registered Nov 8 01:14:02.103351 kernel: Key type fscrypt-provisioning registered Nov 8 01:14:02.103364 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 8 01:14:02.103378 kernel: ima: Allocated hash algorithm: sha1 Nov 8 01:14:02.103391 kernel: ima: No architecture policies found Nov 8 01:14:02.103405 kernel: clk: Disabling unused clocks Nov 8 01:14:02.103424 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 01:14:02.103482 kernel: Write protecting the kernel read-only data: 36864k Nov 8 01:14:02.103501 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 01:14:02.103515 kernel: Run /init as init process Nov 8 01:14:02.103528 kernel: with arguments: Nov 8 01:14:02.103541 kernel: /init Nov 8 01:14:02.103554 kernel: with environment: Nov 8 01:14:02.103567 kernel: HOME=/ Nov 8 01:14:02.103580 kernel: TERM=linux Nov 8 01:14:02.103604 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 01:14:02.103621 systemd[1]: Detected virtualization kvm. Nov 8 01:14:02.103635 systemd[1]: Detected architecture x86-64. Nov 8 01:14:02.103650 systemd[1]: Running in initrd. Nov 8 01:14:02.103664 systemd[1]: No hostname configured, using default hostname. Nov 8 01:14:02.103678 systemd[1]: Hostname set to . Nov 8 01:14:02.103705 systemd[1]: Initializing machine ID from VM UUID. Nov 8 01:14:02.103727 systemd[1]: Queued start job for default target initrd.target. Nov 8 01:14:02.103742 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 01:14:02.103756 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 01:14:02.103771 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 8 01:14:02.103786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 01:14:02.103800 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 01:14:02.103815 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 01:14:02.103836 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 01:14:02.103851 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 01:14:02.103865 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 01:14:02.103880 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 01:14:02.103899 systemd[1]: Reached target paths.target - Path Units. Nov 8 01:14:02.103914 systemd[1]: Reached target slices.target - Slice Units. Nov 8 01:14:02.103928 systemd[1]: Reached target swap.target - Swaps. Nov 8 01:14:02.103943 systemd[1]: Reached target timers.target - Timer Units. Nov 8 01:14:02.103961 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 01:14:02.103976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 01:14:02.103991 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 01:14:02.104005 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 01:14:02.104020 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 01:14:02.104034 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 01:14:02.104049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 01:14:02.104063 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 01:14:02.104077 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 8 01:14:02.104097 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 01:14:02.104112 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 01:14:02.104126 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 01:14:02.104140 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 01:14:02.104155 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 01:14:02.104194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 01:14:02.104257 systemd-journald[202]: Collecting audit messages is disabled. Nov 8 01:14:02.104297 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 01:14:02.104312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 01:14:02.104327 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 01:14:02.104347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 01:14:02.104362 systemd-journald[202]: Journal started Nov 8 01:14:02.104389 systemd-journald[202]: Runtime Journal (/run/log/journal/bdc8e0c88b1a4649a201fcff8563423c) is 4.7M, max 38.0M, 33.2M free. Nov 8 01:14:02.071355 systemd-modules-load[203]: Inserted module 'overlay' Nov 8 01:14:02.162890 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 01:14:02.162924 kernel: Bridge firewalling registered Nov 8 01:14:02.162944 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 01:14:02.120832 systemd-modules-load[203]: Inserted module 'br_netfilter' Nov 8 01:14:02.170077 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 01:14:02.172410 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 01:14:02.180430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:14:02.184095 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 01:14:02.198968 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 01:14:02.202158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 01:14:02.216462 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 01:14:02.220243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:14:02.221673 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 01:14:02.224765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 01:14:02.232513 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 01:14:02.240443 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 01:14:02.245801 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 01:14:02.264941 dracut-cmdline[233]: dracut-dracut-053 Nov 8 01:14:02.271191 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 01:14:02.289102 systemd-resolved[237]: Positive Trust Anchors: Nov 8 01:14:02.289135 systemd-resolved[237]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 01:14:02.289196 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 01:14:02.294451 systemd-resolved[237]: Defaulting to hostname 'linux'. Nov 8 01:14:02.296190 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 01:14:02.297108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 01:14:02.386244 kernel: SCSI subsystem initialized Nov 8 01:14:02.398203 kernel: Loading iSCSI transport class v2.0-870. Nov 8 01:14:02.412620 kernel: iscsi: registered transport (tcp) Nov 8 01:14:02.449387 kernel: iscsi: registered transport (qla4xxx) Nov 8 01:14:02.450035 kernel: QLogic iSCSI HBA Driver Nov 8 01:14:02.510697 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 01:14:02.519418 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 01:14:02.561710 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 8 01:14:02.561853 kernel: device-mapper: uevent: version 1.0.3 Nov 8 01:14:02.561877 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 01:14:02.613252 kernel: raid6: sse2x4 gen() 13586 MB/s Nov 8 01:14:02.631269 kernel: raid6: sse2x2 gen() 8096 MB/s Nov 8 01:14:02.649926 kernel: raid6: sse2x1 gen() 10072 MB/s Nov 8 01:14:02.650024 kernel: raid6: using algorithm sse2x4 gen() 13586 MB/s Nov 8 01:14:02.668908 kernel: raid6: .... xor() 7660 MB/s, rmw enabled Nov 8 01:14:02.668977 kernel: raid6: using ssse3x2 recovery algorithm Nov 8 01:14:02.696218 kernel: xor: automatically using best checksumming function avx Nov 8 01:14:02.898240 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 01:14:02.913528 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 01:14:02.921390 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 01:14:02.948920 systemd-udevd[421]: Using default interface naming scheme 'v255'. Nov 8 01:14:02.955994 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 01:14:02.965390 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 01:14:02.991958 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Nov 8 01:14:03.035201 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 01:14:03.049497 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 01:14:03.161114 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 01:14:03.169407 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 01:14:03.200521 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 01:14:03.207332 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Nov 8 01:14:03.210960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 01:14:03.211740 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 01:14:03.221485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 01:14:03.243980 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 01:14:03.293259 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 8 01:14:03.315212 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 01:14:03.323615 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 8 01:14:03.340343 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 01:14:03.348313 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 01:14:03.348369 kernel: GPT:17805311 != 125829119 Nov 8 01:14:03.348389 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 01:14:03.348407 kernel: GPT:17805311 != 125829119 Nov 8 01:14:03.348424 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 8 01:14:03.348441 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 01:14:03.340630 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:14:03.348852 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:14:03.352613 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 01:14:03.359274 kernel: AVX version of gcm_enc/dec engaged. Nov 8 01:14:03.352888 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:14:03.356537 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 01:14:03.362721 kernel: AES CTR mode by8 optimization enabled Nov 8 01:14:03.365501 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Nov 8 01:14:03.375207 kernel: ACPI: bus type USB registered Nov 8 01:14:03.378730 kernel: usbcore: registered new interface driver usbfs Nov 8 01:14:03.378768 kernel: usbcore: registered new interface driver hub Nov 8 01:14:03.381741 kernel: usbcore: registered new device driver usb Nov 8 01:14:03.419214 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 8 01:14:03.419700 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 8 01:14:03.419933 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 8 01:14:03.420155 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 8 01:14:03.421027 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 8 01:14:03.424243 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 8 01:14:03.424474 kernel: hub 1-0:1.0: USB hub found Nov 8 01:14:03.424737 kernel: hub 1-0:1.0: 4 ports detected Nov 8 01:14:03.425019 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 8 01:14:03.426351 kernel: hub 2-0:1.0: USB hub found Nov 8 01:14:03.426580 kernel: hub 2-0:1.0: 4 ports detected Nov 8 01:14:03.445133 kernel: libata version 3.00 loaded. 
Nov 8 01:14:03.460812 kernel: ahci 0000:00:1f.2: version 3.0 Nov 8 01:14:03.474449 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 8 01:14:03.474479 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (470) Nov 8 01:14:03.481190 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 8 01:14:03.481509 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 8 01:14:03.481746 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (466) Nov 8 01:14:03.489192 kernel: scsi host0: ahci Nov 8 01:14:03.492194 kernel: scsi host1: ahci Nov 8 01:14:03.497192 kernel: scsi host2: ahci Nov 8 01:14:03.499424 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 8 01:14:03.573752 kernel: scsi host3: ahci Nov 8 01:14:03.574116 kernel: scsi host4: ahci Nov 8 01:14:03.574383 kernel: scsi host5: ahci Nov 8 01:14:03.574622 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Nov 8 01:14:03.574645 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Nov 8 01:14:03.574730 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Nov 8 01:14:03.574757 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Nov 8 01:14:03.574775 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Nov 8 01:14:03.574793 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Nov 8 01:14:03.574892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 01:14:03.582800 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 8 01:14:03.594017 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
Nov 8 01:14:03.595092 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 8 01:14:03.603420 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 01:14:03.610425 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 01:14:03.614354 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 01:14:03.620196 disk-uuid[565]: Primary Header is updated. Nov 8 01:14:03.620196 disk-uuid[565]: Secondary Entries is updated. Nov 8 01:14:03.620196 disk-uuid[565]: Secondary Header is updated. Nov 8 01:14:03.627197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 01:14:03.636262 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 01:14:03.664346 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 01:14:03.671258 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 8 01:14:03.812205 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 01:14:03.812293 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.817792 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.817851 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.818402 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.824210 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.824254 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 8 01:14:03.835583 kernel: usbcore: registered new interface driver usbhid Nov 8 01:14:03.835639 kernel: usbhid: USB HID core driver Nov 8 01:14:03.842209 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 8 01:14:03.846225 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU 
QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 8 01:14:04.639437 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 01:14:04.639516 disk-uuid[566]: The operation has completed successfully. Nov 8 01:14:04.714797 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 01:14:04.716052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 01:14:04.738481 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 01:14:04.742483 sh[586]: Success Nov 8 01:14:04.760191 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Nov 8 01:14:04.826588 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 01:14:04.854566 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 01:14:04.862307 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 01:14:04.879460 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 01:14:04.879553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 01:14:04.879576 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 01:14:04.882551 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 01:14:04.884265 kernel: BTRFS info (device dm-0): using free space tree Nov 8 01:14:04.895846 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 01:14:04.897476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 01:14:04.903399 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 01:14:04.915395 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 8 01:14:04.931594 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 01:14:04.931666 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 01:14:04.931689 kernel: BTRFS info (device vda6): using free space tree Nov 8 01:14:04.940233 kernel: BTRFS info (device vda6): auto enabling async discard Nov 8 01:14:04.956211 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 01:14:04.958748 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 01:14:04.965066 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 01:14:04.972440 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 01:14:05.066764 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 01:14:05.076560 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 01:14:05.124081 systemd-networkd[768]: lo: Link UP Nov 8 01:14:05.125216 systemd-networkd[768]: lo: Gained carrier Nov 8 01:14:05.128840 systemd-networkd[768]: Enumeration completed Nov 8 01:14:05.129768 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 01:14:05.130815 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 01:14:05.130821 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 01:14:05.133763 systemd[1]: Reached target network.target - Network. 
Nov 8 01:14:05.137840 systemd-networkd[768]: eth0: Link UP
Nov 8 01:14:05.137846 systemd-networkd[768]: eth0: Gained carrier
Nov 8 01:14:05.137861 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 01:14:05.141243 ignition[676]: Ignition 2.19.0
Nov 8 01:14:05.141264 ignition[676]: Stage: fetch-offline
Nov 8 01:14:05.141369 ignition[676]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:05.141419 ignition[676]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:05.141634 ignition[676]: parsed url from cmdline: ""
Nov 8 01:14:05.141656 ignition[676]: no config URL provided
Nov 8 01:14:05.141682 ignition[676]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 01:14:05.141700 ignition[676]: no config at "/usr/lib/ignition/user.ign"
Nov 8 01:14:05.141709 ignition[676]: failed to fetch config: resource requires networking
Nov 8 01:14:05.141990 ignition[676]: Ignition finished successfully
Nov 8 01:14:05.143927 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 01:14:05.151612 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 01:14:05.167390 systemd-networkd[768]: eth0: DHCPv4 address 10.244.23.242/30, gateway 10.244.23.241 acquired from 10.244.23.241
Nov 8 01:14:05.176904 ignition[775]: Ignition 2.19.0
Nov 8 01:14:05.176923 ignition[775]: Stage: fetch
Nov 8 01:14:05.177231 ignition[775]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:05.177252 ignition[775]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:05.177402 ignition[775]: parsed url from cmdline: ""
Nov 8 01:14:05.177410 ignition[775]: no config URL provided
Nov 8 01:14:05.177419 ignition[775]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 01:14:05.177435 ignition[775]: no config at "/usr/lib/ignition/user.ign"
Nov 8 01:14:05.177633 ignition[775]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Nov 8 01:14:05.178194 ignition[775]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Nov 8 01:14:05.178252 ignition[775]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Nov 8 01:14:05.193312 ignition[775]: GET result: OK
Nov 8 01:14:05.194250 ignition[775]: parsing config with SHA512: ece9bc35a8f40f1d7159a42f2637cb7bd5f124f5f6bb5d19041316e20a492eca5695f050202e0852dc04522cc67e1b86c819d2e63742f17524a8b0fb71c3f61a
Nov 8 01:14:05.200966 unknown[775]: fetched base config from "system"
Nov 8 01:14:05.202153 unknown[775]: fetched base config from "system"
Nov 8 01:14:05.202895 unknown[775]: fetched user config from "openstack"
Nov 8 01:14:05.203326 ignition[775]: fetch: fetch complete
Nov 8 01:14:05.203334 ignition[775]: fetch: fetch passed
Nov 8 01:14:05.203400 ignition[775]: Ignition finished successfully
Nov 8 01:14:05.205886 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 01:14:05.212377 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 01:14:05.243874 ignition[782]: Ignition 2.19.0
Nov 8 01:14:05.243893 ignition[782]: Stage: kargs
Nov 8 01:14:05.244139 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:05.244159 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:05.247120 ignition[782]: kargs: kargs passed
Nov 8 01:14:05.247217 ignition[782]: Ignition finished successfully
Nov 8 01:14:05.248421 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 01:14:05.257469 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 01:14:05.275759 ignition[788]: Ignition 2.19.0
Nov 8 01:14:05.275784 ignition[788]: Stage: disks
Nov 8 01:14:05.276057 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:05.278596 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 01:14:05.276078 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:05.277133 ignition[788]: disks: disks passed
Nov 8 01:14:05.277224 ignition[788]: Ignition finished successfully
Nov 8 01:14:05.280652 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 01:14:05.281652 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 01:14:05.283476 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 01:14:05.284973 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 01:14:05.286382 systemd[1]: Reached target basic.target - Basic System.
Nov 8 01:14:05.297414 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 01:14:05.318566 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 8 01:14:05.322794 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 01:14:05.332363 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 01:14:05.455194 kernel: EXT4-fs (vda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 01:14:05.455696 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 01:14:05.457062 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 01:14:05.464316 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:14:05.471489 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 01:14:05.473469 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 01:14:05.476375 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Nov 8 01:14:05.479252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 01:14:05.481245 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:14:05.482348 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804)
Nov 8 01:14:05.497765 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:14:05.497807 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:14:05.497829 kernel: BTRFS info (device vda6): using free space tree
Nov 8 01:14:05.497847 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 01:14:05.504129 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:14:05.506336 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 01:14:05.514440 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 01:14:05.585703 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 01:14:05.593792 initrd-setup-root[838]: cut: /sysroot/etc/group: No such file or directory
Nov 8 01:14:05.601340 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 01:14:05.610217 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 01:14:05.724686 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 01:14:05.731314 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 01:14:05.735402 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 01:14:05.751243 kernel: BTRFS info (device vda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:14:05.776224 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 01:14:05.785866 ignition[921]: INFO : Ignition 2.19.0
Nov 8 01:14:05.785866 ignition[921]: INFO : Stage: mount
Nov 8 01:14:05.787701 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:05.787701 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:05.787701 ignition[921]: INFO : mount: mount passed
Nov 8 01:14:05.787701 ignition[921]: INFO : Ignition finished successfully
Nov 8 01:14:05.788476 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 01:14:05.875715 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 01:14:06.165792 systemd-networkd[768]: eth0: Gained IPv6LL
Nov 8 01:14:07.675475 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:5fc:24:19ff:fef4:17f2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:5fc:24:19ff:fef4:17f2/64 assigned by NDisc.
Nov 8 01:14:07.675491 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 8 01:14:12.680599 coreos-metadata[806]: Nov 08 01:14:12.680 WARN failed to locate config-drive, using the metadata service API instead
Nov 8 01:14:12.705988 coreos-metadata[806]: Nov 08 01:14:12.705 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 8 01:14:12.718099 coreos-metadata[806]: Nov 08 01:14:12.718 INFO Fetch successful
Nov 8 01:14:12.719098 coreos-metadata[806]: Nov 08 01:14:12.718 INFO wrote hostname srv-1w3cb.gb1.brightbox.com to /sysroot/etc/hostname
Nov 8 01:14:12.720837 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Nov 8 01:14:12.721055 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Nov 8 01:14:12.727320 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 01:14:12.763482 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 01:14:12.776207 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937)
Nov 8 01:14:12.780209 kernel: BTRFS info (device vda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 01:14:12.780265 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 01:14:12.782706 kernel: BTRFS info (device vda6): using free space tree
Nov 8 01:14:12.787228 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 8 01:14:12.790501 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 01:14:12.821761 ignition[955]: INFO : Ignition 2.19.0
Nov 8 01:14:12.821761 ignition[955]: INFO : Stage: files
Nov 8 01:14:12.823726 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:12.823726 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:12.823726 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 01:14:12.826630 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 01:14:12.826630 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 01:14:12.828726 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 01:14:12.829422 unknown[955]: wrote ssh authorized keys file for user: core
Nov 8 01:14:12.829743 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 01:14:12.829743 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 01:14:12.832951 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:14:12.832951 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 8 01:14:13.068869 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 01:14:13.315210 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 8 01:14:13.315210 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:14:13.318138 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 8 01:14:13.634041 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 8 01:14:15.517012 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 8 01:14:15.517012 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 01:14:15.523200 ignition[955]: INFO : files: files passed
Nov 8 01:14:15.526742 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 01:14:15.536715 ignition[955]: INFO : Ignition finished successfully
Nov 8 01:14:15.540623 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 01:14:15.550389 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 01:14:15.559126 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 01:14:15.559350 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 01:14:15.571125 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:14:15.571125 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:14:15.575447 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 01:14:15.577235 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:14:15.578491 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 01:14:15.585525 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 01:14:15.623326 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 01:14:15.623560 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 01:14:15.625573 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 01:14:15.626958 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 01:14:15.628620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 01:14:15.640582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 01:14:15.658957 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:14:15.673495 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 01:14:15.687762 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 01:14:15.689940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:14:15.690925 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 01:14:15.692595 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 01:14:15.692802 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 01:14:15.694688 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 01:14:15.695651 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 01:14:15.697251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 01:14:15.698775 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 01:14:15.700297 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 01:14:15.704464 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 01:14:15.705660 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 01:14:15.707711 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 01:14:15.709609 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 01:14:15.711628 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 01:14:15.712404 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 01:14:15.712768 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 01:14:15.714613 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 01:14:15.715630 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 01:14:15.719233 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 01:14:15.719439 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 01:14:15.720956 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 01:14:15.721271 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 01:14:15.722997 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 01:14:15.723262 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 01:14:15.725093 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 01:14:15.725291 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 01:14:15.736096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 01:14:15.739477 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 01:14:15.740201 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 01:14:15.740461 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 01:14:15.744016 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 01:14:15.744306 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 01:14:15.753593 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 01:14:15.753837 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 01:14:15.771253 ignition[1007]: INFO : Ignition 2.19.0
Nov 8 01:14:15.774081 ignition[1007]: INFO : Stage: umount
Nov 8 01:14:15.774081 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 01:14:15.774081 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 8 01:14:15.774081 ignition[1007]: INFO : umount: umount passed
Nov 8 01:14:15.774081 ignition[1007]: INFO : Ignition finished successfully
Nov 8 01:14:15.782808 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 01:14:15.783013 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 01:14:15.784291 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 01:14:15.784369 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 01:14:15.785148 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 01:14:15.785244 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 01:14:15.786877 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 8 01:14:15.786945 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 8 01:14:15.788342 systemd[1]: Stopped target network.target - Network.
Nov 8 01:14:15.789629 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 01:14:15.789715 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 01:14:15.791186 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 01:14:15.792595 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 01:14:15.796275 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 01:14:15.797409 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 01:14:15.798050 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 01:14:15.798821 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 01:14:15.798895 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 01:14:15.799623 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 01:14:15.799688 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 01:14:15.800451 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 01:14:15.800561 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 01:14:15.802096 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 01:14:15.802182 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 01:14:15.803812 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 01:14:15.805794 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 01:14:15.811428 systemd-networkd[768]: eth0: DHCPv6 lease lost
Nov 8 01:14:15.816251 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 01:14:15.816487 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 01:14:15.821672 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 01:14:15.823455 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 01:14:15.830126 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 01:14:15.831695 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 01:14:15.832035 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 01:14:15.839373 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 01:14:15.840395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 01:14:15.840598 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 01:14:15.842070 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 01:14:15.842144 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 01:14:15.844876 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 01:14:15.844950 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 01:14:15.849932 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 01:14:15.850018 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 01:14:15.851791 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 01:14:15.861972 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 8 01:14:15.862252 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 01:14:15.865033 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 8 01:14:15.865157 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 8 01:14:15.866383 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 8 01:14:15.866443 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 01:14:15.867309 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 8 01:14:15.867385 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 01:14:15.869551 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 8 01:14:15.869623 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 8 01:14:15.871345 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 01:14:15.871446 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 01:14:15.894729 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 8 01:14:15.896284 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 8 01:14:15.896390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 01:14:15.897249 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 8 01:14:15.897334 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 01:14:15.898206 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 01:14:15.898279 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 01:14:15.900022 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 01:14:15.900105 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:14:15.904382 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 01:14:15.904688 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 01:14:15.908613 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 8 01:14:15.908772 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 8 01:14:15.910668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 8 01:14:15.910803 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 8 01:14:15.915457 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 8 01:14:15.917121 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 8 01:14:15.917240 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 8 01:14:15.927889 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 8 01:14:15.938626 systemd[1]: Switching root.
Nov 8 01:14:15.970546 systemd-journald[202]: Journal stopped
Nov 8 01:14:17.564755 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Nov 8 01:14:17.564881 kernel: SELinux: policy capability network_peer_controls=1
Nov 8 01:14:17.564919 kernel: SELinux: policy capability open_perms=1
Nov 8 01:14:17.564957 kernel: SELinux: policy capability extended_socket_class=1
Nov 8 01:14:17.564990 kernel: SELinux: policy capability always_check_network=0
Nov 8 01:14:17.565015 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 8 01:14:17.565035 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 8 01:14:17.565054 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 8 01:14:17.565087 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 8 01:14:17.565109 kernel: audit: type=1403 audit(1762564456.224:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 8 01:14:17.565151 systemd[1]: Successfully loaded SELinux policy in 53.445ms.
Nov 8 01:14:17.565324 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.280ms.
Nov 8 01:14:17.565353 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 01:14:17.565380 systemd[1]: Detected virtualization kvm.
Nov 8 01:14:17.565402 systemd[1]: Detected architecture x86-64.
Nov 8 01:14:17.565427 systemd[1]: Detected first boot.
Nov 8 01:14:17.565459 systemd[1]: Hostname set to .
Nov 8 01:14:17.565481 systemd[1]: Initializing machine ID from VM UUID.
Nov 8 01:14:17.565501 zram_generator::config[1049]: No configuration found.
Nov 8 01:14:17.565536 systemd[1]: Populated /etc with preset unit settings.
Nov 8 01:14:17.565565 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 8 01:14:17.565585 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 8 01:14:17.565606 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 8 01:14:17.565627 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 8 01:14:17.565654 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 8 01:14:17.565681 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 8 01:14:17.565702 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 8 01:14:17.565723 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 8 01:14:17.565755 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 8 01:14:17.565777 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 8 01:14:17.565798 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 8 01:14:17.565819 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 01:14:17.565840 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 01:14:17.565860 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 8 01:14:17.565880 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 8 01:14:17.565900 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 8 01:14:17.565936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 01:14:17.565959 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 8 01:14:17.565979 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 01:14:17.565999 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 8 01:14:17.566026 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 8 01:14:17.566047 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 8 01:14:17.566079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 8 01:14:17.566101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 01:14:17.566121 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 01:14:17.566143 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 01:14:17.566163 systemd[1]: Reached target swap.target - Swaps.
Nov 8 01:14:17.566249 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 8 01:14:17.566272 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 8 01:14:17.566292 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 01:14:17.566318 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 01:14:17.566344 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 01:14:17.566385 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 8 01:14:17.566419 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 8 01:14:17.566463 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 8 01:14:17.566486 systemd[1]: Mounting media.mount - External Media Directory...
Nov 8 01:14:17.566507 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:17.566539 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 8 01:14:17.566561 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 8 01:14:17.566581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 8 01:14:17.566609 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 8 01:14:17.566631 systemd[1]: Reached target machines.target - Containers.
Nov 8 01:14:17.566651 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 8 01:14:17.566672 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:14:17.566692 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 01:14:17.566719 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 8 01:14:17.566751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:14:17.566773 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 01:14:17.566793 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:14:17.566813 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 8 01:14:17.566834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:14:17.566861 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 8 01:14:17.566882 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 8 01:14:17.566902 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 8 01:14:17.566934 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 8 01:14:17.566962 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 8 01:14:17.566983 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 01:14:17.567003 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 01:14:17.567023 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 01:14:17.567044 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 8 01:14:17.567065 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 01:14:17.567090 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 8 01:14:17.567125 systemd[1]: Stopped verity-setup.service.
Nov 8 01:14:17.567168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:17.567206 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 8 01:14:17.567229 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 8 01:14:17.567249 systemd[1]: Mounted media.mount - External Media Directory.
Nov 8 01:14:17.567269 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 8 01:14:17.567303 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 8 01:14:17.567325 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 8 01:14:17.567345 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 01:14:17.567366 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 8 01:14:17.567386 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 8 01:14:17.567407 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:14:17.567428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:14:17.567469 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:14:17.567491 kernel: loop: module loaded
Nov 8 01:14:17.567512 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:14:17.567532 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 01:14:17.567552 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 01:14:17.567573 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 8 01:14:17.567593 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 01:14:17.567655 systemd-journald[1142]: Collecting audit messages is disabled.
Nov 8 01:14:17.567709 kernel: fuse: init (API version 7.39)
Nov 8 01:14:17.567733 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 8 01:14:17.567766 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 8 01:14:17.567789 systemd-journald[1142]: Journal started
Nov 8 01:14:17.567834 systemd-journald[1142]: Runtime Journal (/run/log/journal/bdc8e0c88b1a4649a201fcff8563423c) is 4.7M, max 38.0M, 33.2M free.
Nov 8 01:14:17.079207 systemd[1]: Queued start job for default target multi-user.target.
Nov 8 01:14:17.570989 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 01:14:17.101229 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 8 01:14:17.101931 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 8 01:14:17.575539 kernel: ACPI: bus type drm_connector registered
Nov 8 01:14:17.575584 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 8 01:14:17.587188 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 8 01:14:17.594240 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 8 01:14:17.598274 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:14:17.607250 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 8 01:14:17.612313 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 01:14:17.626242 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 8 01:14:17.634238 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 01:14:17.651223 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 8 01:14:17.667208 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 01:14:17.686925 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 01:14:17.689003 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 8 01:14:17.690458 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 01:14:17.690778 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 01:14:17.700104 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 8 01:14:17.702400 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 8 01:14:17.703953 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:14:17.704269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:14:17.705529 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 8 01:14:17.707146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 8 01:14:17.709010 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 8 01:14:17.734065 kernel: loop0: detected capacity change from 0 to 8
Nov 8 01:14:17.744705 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 8 01:14:17.757547 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 8 01:14:17.767455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 8 01:14:17.768570 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 8 01:14:17.784560 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 8 01:14:17.787213 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 01:14:17.789197 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 8 01:14:17.819897 kernel: loop1: detected capacity change from 0 to 142488
Nov 8 01:14:17.831349 systemd-journald[1142]: Time spent on flushing to /var/log/journal/bdc8e0c88b1a4649a201fcff8563423c is 153.296ms for 1144 entries.
Nov 8 01:14:17.831349 systemd-journald[1142]: System Journal (/var/log/journal/bdc8e0c88b1a4649a201fcff8563423c) is 8.0M, max 584.8M, 576.8M free.
Nov 8 01:14:18.036292 systemd-journald[1142]: Received client request to flush runtime journal.
Nov 8 01:14:18.036378 kernel: loop2: detected capacity change from 0 to 224512
Nov 8 01:14:17.826879 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 01:14:17.908748 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 8 01:14:17.909861 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 8 01:14:17.918311 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Nov 8 01:14:17.918334 systemd-tmpfiles[1164]: ACLs are not supported, ignoring.
Nov 8 01:14:18.049546 kernel: loop3: detected capacity change from 0 to 140768
Nov 8 01:14:17.965274 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 01:14:17.984404 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 8 01:14:17.987241 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 01:14:17.997532 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 8 01:14:18.040041 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 8 01:14:18.052459 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 8 01:14:18.092887 kernel: loop4: detected capacity change from 0 to 8
Nov 8 01:14:18.101708 kernel: loop5: detected capacity change from 0 to 142488
Nov 8 01:14:18.134752 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 8 01:14:18.152756 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 01:14:18.163213 kernel: loop6: detected capacity change from 0 to 224512
Nov 8 01:14:18.192865 kernel: loop7: detected capacity change from 0 to 140768
Nov 8 01:14:18.231543 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 8 01:14:18.232549 (sd-merge)[1207]: Merged extensions into '/usr'.
Nov 8 01:14:18.237861 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 8 01:14:18.237893 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Nov 8 01:14:18.246360 systemd[1]: Reloading requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 8 01:14:18.246639 systemd[1]: Reloading...
Nov 8 01:14:18.457293 zram_generator::config[1237]: No configuration found.
Nov 8 01:14:18.509432 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 01:14:18.709498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 01:14:18.780835 systemd[1]: Reloading finished in 526 ms.
Nov 8 01:14:18.834234 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 01:14:18.840157 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 8 01:14:18.841527 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 8 01:14:18.842829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 01:14:18.857560 systemd[1]: Starting ensure-sysext.service...
Nov 8 01:14:18.862459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 01:14:18.872530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 01:14:18.881591 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)...
Nov 8 01:14:18.881634 systemd[1]: Reloading...
Nov 8 01:14:18.906260 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 8 01:14:18.907832 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 8 01:14:18.909851 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 8 01:14:18.910750 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Nov 8 01:14:18.910952 systemd-tmpfiles[1296]: ACLs are not supported, ignoring.
Nov 8 01:14:18.916847 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 01:14:18.916865 systemd-tmpfiles[1296]: Skipping /boot
Nov 8 01:14:18.939874 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot.
Nov 8 01:14:18.940766 systemd-tmpfiles[1296]: Skipping /boot
Nov 8 01:14:18.962987 systemd-udevd[1297]: Using default interface naming scheme 'v255'.
Nov 8 01:14:19.003220 zram_generator::config[1324]: No configuration found.
Nov 8 01:14:19.171285 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1344)
Nov 8 01:14:19.282150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 01:14:19.352226 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 8 01:14:19.364222 kernel: ACPI: button: Power Button [PWRF]
Nov 8 01:14:19.392272 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 01:14:19.395769 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 8 01:14:19.395870 systemd[1]: Reloading finished in 513 ms.
Nov 8 01:14:19.418211 kernel: mousedev: PS/2 mouse device common for all mice
Nov 8 01:14:19.421403 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 01:14:19.425191 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 01:14:19.462258 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 8 01:14:19.478304 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 8 01:14:19.488254 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 8 01:14:19.523634 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 8 01:14:19.536119 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:19.547709 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 01:14:19.561622 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 01:14:19.562782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:14:19.568278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 01:14:19.574612 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:14:19.585270 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 01:14:19.586715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:14:19.595343 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 8 01:14:19.602557 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 01:14:19.617806 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 01:14:19.631320 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 01:14:19.645279 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 01:14:19.647862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:19.654536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:14:19.654878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:14:19.657731 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 8 01:14:19.751815 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:19.752242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 01:14:19.770941 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 01:14:19.775893 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 01:14:19.777442 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 01:14:19.789547 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 8 01:14:19.794132 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 01:14:19.796247 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 01:14:19.799492 systemd[1]: Finished ensure-sysext.service.
Nov 8 01:14:19.822579 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 8 01:14:19.845736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 01:14:19.846318 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 01:14:19.855676 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 01:14:19.858063 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 01:14:19.858894 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 01:14:19.863624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 01:14:19.880701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 01:14:19.883288 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 01:14:19.885608 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 01:14:19.897842 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 01:14:19.913429 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 01:14:19.925800 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 01:14:19.926142 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 01:14:19.933341 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 01:14:19.936093 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 01:14:19.936799 augenrules[1449]: No rules
Nov 8 01:14:19.944687 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 01:14:19.979046 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 01:14:20.077868 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 8 01:14:20.080251 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 8 01:14:20.082524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 01:14:20.091442 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 8 01:14:20.127888 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 01:14:20.164856 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 8 01:14:20.167546 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 8 01:14:20.169448 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 01:14:20.171770 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 01:14:20.179523 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 8 01:14:20.180490 systemd-networkd[1416]: lo: Link UP
Nov 8 01:14:20.182217 systemd-networkd[1416]: lo: Gained carrier
Nov 8 01:14:20.185770 systemd-networkd[1416]: Enumeration completed
Nov 8 01:14:20.185924 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 01:14:20.192986 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 01:14:20.193006 systemd-networkd[1416]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 01:14:20.194492 systemd-timesyncd[1438]: No network connectivity, watching for changes.
Nov 8 01:14:20.197893 systemd-networkd[1416]: eth0: Link UP
Nov 8 01:14:20.197907 systemd-networkd[1416]: eth0: Gained carrier
Nov 8 01:14:20.197927 systemd-networkd[1416]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 01:14:20.200638 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 8 01:14:20.208500 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 8 01:14:20.222938 systemd-resolved[1417]: Positive Trust Anchors:
Nov 8 01:14:20.222971 systemd-resolved[1417]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 01:14:20.223017 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 01:14:20.226305 systemd-networkd[1416]: eth0: DHCPv4 address 10.244.23.242/30, gateway 10.244.23.241 acquired from 10.244.23.241
Nov 8 01:14:20.227525 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection.
Nov 8 01:14:20.236631 systemd-resolved[1417]: Using system hostname 'srv-1w3cb.gb1.brightbox.com'.
Nov 8 01:14:20.237212 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 01:14:20.240996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 01:14:20.241864 systemd[1]: Reached target network.target - Network.
Nov 8 01:14:20.242591 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 01:14:20.243577 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 01:14:20.244549 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 01:14:20.245472 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 01:14:20.246574 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 01:14:20.247503 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 01:14:20.248313 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 01:14:20.249113 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 01:14:20.249203 systemd[1]: Reached target paths.target - Path Units.
Nov 8 01:14:20.249887 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 01:14:20.252146 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 01:14:20.255195 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 01:14:20.264482 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 01:14:20.266046 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 01:14:20.266943 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 01:14:20.267656 systemd[1]: Reached target basic.target - Basic System.
Nov 8 01:14:20.268382 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 01:14:20.268455 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 01:14:20.270211 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 01:14:20.276437 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 01:14:20.282380 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 01:14:20.285677 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 01:14:20.296415 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 01:14:20.297798 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 01:14:20.300421 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 01:14:20.303680 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 01:14:20.307383 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 01:14:20.315487 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 01:14:20.323389 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 01:14:20.325794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 01:14:20.327107 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 01:14:20.331253 jq[1480]: false
Nov 8 01:14:20.334384 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 01:14:20.339654 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 01:14:20.351792 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 01:14:20.352092 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found loop4
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found loop5
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found loop6
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found loop7
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda1
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda2
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda3
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found usr
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda4
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda6
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda7
Nov 8 01:14:20.387348 extend-filesystems[1482]: Found vda9
Nov 8 01:14:20.387348 extend-filesystems[1482]: Checking size of /dev/vda9
Nov 8 01:14:20.387859 systemd-timesyncd[1438]: Contacted time server 176.58.115.34:123 (3.flatcar.pool.ntp.org).
Nov 8 01:14:20.419503 update_engine[1489]: I20251108 01:14:20.407677 1489 main.cc:92] Flatcar Update Engine starting
Nov 8 01:14:20.387955 systemd-timesyncd[1438]: Initial clock synchronization to Sat 2025-11-08 01:14:20.776461 UTC.
Nov 8 01:14:20.390427 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 01:14:20.391175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 01:14:20.429819 jq[1491]: true
Nov 8 01:14:20.438896 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 01:14:20.440315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 01:14:20.445566 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 01:14:20.452866 extend-filesystems[1482]: Resized partition /dev/vda9
Nov 8 01:14:20.457495 dbus-daemon[1479]: [system] SELinux support is enabled
Nov 8 01:14:20.460423 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024)
Nov 8 01:14:20.457812 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 01:14:20.461839 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 01:14:20.461889 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 01:14:20.464886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 01:14:20.464917 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 01:14:20.475200 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Nov 8 01:14:20.484532 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1416 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 8 01:14:20.492794 update_engine[1489]: I20251108 01:14:20.491909 1489 update_check_scheduler.cc:74] Next update check in 3m47s
Nov 8 01:14:20.497501 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 8 01:14:20.498447 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 01:14:20.511440 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 01:14:20.523759 tar[1496]: linux-amd64/LICENSE
Nov 8 01:14:20.523759 tar[1496]: linux-amd64/helm
Nov 8 01:14:20.525275 jq[1511]: true
Nov 8 01:14:20.637461 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1340)
Nov 8 01:14:20.724297 systemd-logind[1488]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 8 01:14:20.724355 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 01:14:20.732459 systemd-logind[1488]: New seat seat0.
Nov 8 01:14:20.736311 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 01:14:20.818094 bash[1535]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 01:14:20.819820 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 01:14:20.830639 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 8 01:14:20.843408 systemd[1]: Starting sshkeys.service...
Nov 8 01:14:20.846424 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 8 01:14:20.851512 dbus-daemon[1479]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1518 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 8 01:14:20.864832 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 8 01:14:20.909071 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 8 01:14:20.919730 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 8 01:14:20.944230 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 8 01:14:20.962639 polkitd[1543]: Started polkitd version 121
Nov 8 01:14:20.970046 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 8 01:14:20.970046 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 8 01:14:20.970046 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 8 01:14:20.994825 extend-filesystems[1482]: Resized filesystem in /dev/vda9
Nov 8 01:14:20.971832 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 01:14:20.972131 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 01:14:21.031393 polkitd[1543]: Loading rules from directory /etc/polkit-1/rules.d
Nov 8 01:14:21.031786 polkitd[1543]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 8 01:14:21.040870 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 01:14:21.042069 polkitd[1543]: Finished loading, compiling and executing 2 rules
Nov 8 01:14:21.045013 dbus-daemon[1479]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 8 01:14:21.045300 systemd[1]: Started polkit.service - Authorization Manager.
Nov 8 01:14:21.046465 polkitd[1543]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 8 01:14:21.078463 systemd-hostnamed[1518]: Hostname set to (static)
Nov 8 01:14:21.089406 locksmithd[1519]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 01:14:21.110771 containerd[1502]: time="2025-11-08T01:14:21.109606163Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 01:14:21.124284 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 01:14:21.155174 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 01:14:21.174494 containerd[1502]: time="2025-11-08T01:14:21.174376452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.179119 containerd[1502]: time="2025-11-08T01:14:21.179053520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 01:14:21.179294 containerd[1502]: time="2025-11-08T01:14:21.179265561Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 01:14:21.179485 containerd[1502]: time="2025-11-08T01:14:21.179455368Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 01:14:21.179915 containerd[1502]: time="2025-11-08T01:14:21.179884549Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 01:14:21.180326 containerd[1502]: time="2025-11-08T01:14:21.180294858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.180568 containerd[1502]: time="2025-11-08T01:14:21.180534744Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181349411Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181687469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181725523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181758143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181777688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.182241 containerd[1502]: time="2025-11-08T01:14:21.181939804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.183063 containerd[1502]: time="2025-11-08T01:14:21.183032358Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 01:14:21.183362 containerd[1502]: time="2025-11-08T01:14:21.183328402Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 01:14:21.184704 containerd[1502]: time="2025-11-08T01:14:21.184262141Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 01:14:21.184704 containerd[1502]: time="2025-11-08T01:14:21.184458629Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 01:14:21.184704 containerd[1502]: time="2025-11-08T01:14:21.184578841Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 01:14:21.189531 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 01:14:21.189931 containerd[1502]: time="2025-11-08T01:14:21.189762889Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 01:14:21.189931 containerd[1502]: time="2025-11-08T01:14:21.189872052Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 01:14:21.189857 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 01:14:21.190755 containerd[1502]: time="2025-11-08T01:14:21.190285692Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 01:14:21.190755 containerd[1502]: time="2025-11-08T01:14:21.190329620Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 01:14:21.190755 containerd[1502]: time="2025-11-08T01:14:21.190380824Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 01:14:21.190755 containerd[1502]: time="2025-11-08T01:14:21.190648185Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 01:14:21.191782 containerd[1502]: time="2025-11-08T01:14:21.191753333Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192062628Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192097397Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192125599Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192150663Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192182469Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192238541Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192264633Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192305403Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192331266Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192358078Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192383537Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192430220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192456352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193027 containerd[1502]: time="2025-11-08T01:14:21.192515907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192551090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192583107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192612764Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192635751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192657125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192678261Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192702539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192722467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192749534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192779557Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192816840Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192864890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192911033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.193635 containerd[1502]: time="2025-11-08T01:14:21.192939340Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194309717Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194492558Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194519540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194541008Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194558667Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194578755Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194619881Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 01:14:21.195180 containerd[1502]: time="2025-11-08T01:14:21.194658415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 01:14:21.196990 containerd[1502]: time="2025-11-08T01:14:21.195813622Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 01:14:21.196990 containerd[1502]: time="2025-11-08T01:14:21.195935241Z" level=info msg="Connect containerd service"
Nov 8 01:14:21.196990 containerd[1502]: time="2025-11-08T01:14:21.195999016Z" level=info msg="using legacy CRI server"
Nov 8 01:14:21.196990 containerd[1502]: time="2025-11-08T01:14:21.196023294Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 01:14:21.196990 containerd[1502]: time="2025-11-08T01:14:21.196209113Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 01:14:21.201247 containerd[1502]: time="2025-11-08T01:14:21.200533532Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 01:14:21.201931 containerd[1502]: time="2025-11-08T01:14:21.201863095Z" level=info msg="Start subscribing containerd event"
Nov 8 01:14:21.202362 containerd[1502]: time="2025-11-08T01:14:21.202332536Z" level=info msg="Start recovering state"
Nov 8 01:14:21.202599 containerd[1502]: time="2025-11-08T01:14:21.202572104Z" level=info msg="Start event monitor"
Nov 8 01:14:21.203068 containerd[1502]: time="2025-11-08T01:14:21.203038498Z" level=info msg="Start snapshots syncer"
Nov 8 01:14:21.203405 containerd[1502]: time="2025-11-08T01:14:21.203378363Z" level=info msg="Start cni network conf syncer for default"
Nov 8 01:14:21.203612 containerd[1502]: time="2025-11-08T01:14:21.203490517Z" level=info msg="Start streaming server"
Nov 8 01:14:21.203723 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 01:14:21.204271 containerd[1502]: time="2025-11-08T01:14:21.202883904Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 01:14:21.204271 containerd[1502]: time="2025-11-08T01:14:21.204050825Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 01:14:21.205002 containerd[1502]: time="2025-11-08T01:14:21.204974746Z" level=info msg="containerd successfully booted in 0.097551s"
Nov 8 01:14:21.205057 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 01:14:21.235354 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 01:14:21.247266 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 01:14:21.255796 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 01:14:21.257278 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 01:14:21.569817 tar[1496]: linux-amd64/README.md
Nov 8 01:14:21.586829 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 8 01:14:21.912629 systemd-networkd[1416]: eth0: Gained IPv6LL
Nov 8 01:14:21.917557 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 01:14:21.921646 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 01:14:21.929737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:14:21.944422 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 01:14:21.970820 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 01:14:23.021452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:14:23.036902 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 01:14:23.422465 systemd-networkd[1416]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:5fc:24:19ff:fef4:17f2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:5fc:24:19ff:fef4:17f2/64 assigned by NDisc.
Nov 8 01:14:23.422480 systemd-networkd[1416]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 8 01:14:23.664897 kubelet[1603]: E1108 01:14:23.664726    1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 01:14:23.667765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 01:14:23.668031 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 01:14:23.668736 systemd[1]: kubelet.service: Consumed 1.137s CPU time.
Nov 8 01:14:26.329037 login[1581]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Nov 8 01:14:26.330470 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 8 01:14:26.351902 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 01:14:26.355280 systemd-logind[1488]: New session 2 of user core.
Nov 8 01:14:26.366860 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 01:14:26.388224 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 01:14:26.398102 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 01:14:26.414681 (systemd)[1619]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 01:14:26.567613 systemd[1619]: Queued start job for default target default.target.
Nov 8 01:14:26.578561 systemd[1619]: Created slice app.slice - User Application Slice.
Nov 8 01:14:26.578619 systemd[1619]: Reached target paths.target - Paths.
Nov 8 01:14:26.578646 systemd[1619]: Reached target timers.target - Timers.
Nov 8 01:14:26.581051 systemd[1619]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 01:14:26.597899 systemd[1619]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 01:14:26.598112 systemd[1619]: Reached target sockets.target - Sockets.
Nov 8 01:14:26.598139 systemd[1619]: Reached target basic.target - Basic System.
Nov 8 01:14:26.598248 systemd[1619]: Reached target default.target - Main User Target.
Nov 8 01:14:26.598311 systemd[1619]: Startup finished in 173ms.
Nov 8 01:14:26.598799 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 01:14:26.613270 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 01:14:27.331923 login[1581]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 8 01:14:27.339053 systemd-logind[1488]: New session 1 of user core.
Nov 8 01:14:27.346525 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 01:14:27.408366 coreos-metadata[1478]: Nov 08 01:14:27.408 WARN failed to locate config-drive, using the metadata service API instead
Nov 8 01:14:27.436809 coreos-metadata[1478]: Nov 08 01:14:27.436 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Nov 8 01:14:27.444511 coreos-metadata[1478]: Nov 08 01:14:27.444 INFO Fetch failed with 404: resource not found
Nov 8 01:14:27.444511 coreos-metadata[1478]: Nov 08 01:14:27.444 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 8 01:14:27.445336 coreos-metadata[1478]: Nov 08 01:14:27.445 INFO Fetch successful
Nov 8 01:14:27.445516 coreos-metadata[1478]: Nov 08 01:14:27.445 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Nov 8 01:14:27.461474 coreos-metadata[1478]: Nov 08 01:14:27.461 INFO Fetch successful
Nov 8 01:14:27.461721 coreos-metadata[1478]: Nov 08 01:14:27.461 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Nov 8 01:14:27.478129 coreos-metadata[1478]: Nov 08 01:14:27.478 INFO Fetch successful
Nov 8 01:14:27.478410 coreos-metadata[1478]: Nov 08 01:14:27.478 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Nov 8 01:14:27.492986 coreos-metadata[1478]: Nov 08 01:14:27.492 INFO Fetch successful
Nov 8 01:14:27.493261 coreos-metadata[1478]: Nov 08 01:14:27.493 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Nov 8 01:14:27.510738 coreos-metadata[1478]: Nov 08 01:14:27.510 INFO Fetch successful
Nov 8 01:14:27.540309 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 01:14:27.542067 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 01:14:28.034691 coreos-metadata[1546]: Nov 08 01:14:28.034 WARN failed to locate config-drive, using the metadata service API instead
Nov 8 01:14:28.057804 coreos-metadata[1546]: Nov 08 01:14:28.057 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Nov 8 01:14:28.084823 coreos-metadata[1546]: Nov 08 01:14:28.084 INFO Fetch successful
Nov 8 01:14:28.084823 coreos-metadata[1546]: Nov 08 01:14:28.084 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 8 01:14:28.126781 coreos-metadata[1546]: Nov 08 01:14:28.126 INFO Fetch successful
Nov 8 01:14:28.128973 unknown[1546]: wrote ssh authorized keys file for user: core
Nov 8 01:14:28.151916 update-ssh-keys[1657]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 01:14:28.153121 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 8 01:14:28.155982 systemd[1]: Finished sshkeys.service.
Nov 8 01:14:28.159362 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 01:14:28.162321 systemd[1]: Startup finished in 1.400s (kernel) + 14.461s (initrd) + 11.991s (userspace) = 27.853s.
Nov 8 01:14:31.320295 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 8 01:14:31.334718 systemd[1]: Started sshd@0-10.244.23.242:22-139.178.68.195:58554.service - OpenSSH per-connection server daemon (139.178.68.195:58554).
Nov 8 01:14:32.273614 sshd[1662]: Accepted publickey for core from 139.178.68.195 port 58554 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:32.275882 sshd[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:32.283353 systemd-logind[1488]: New session 3 of user core.
Nov 8 01:14:32.290408 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 8 01:14:33.066520 systemd[1]: Started sshd@1-10.244.23.242:22-139.178.68.195:58562.service - OpenSSH per-connection server daemon (139.178.68.195:58562).
Nov 8 01:14:33.840243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 01:14:33.857751 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:14:34.002316 sshd[1667]: Accepted publickey for core from 139.178.68.195 port 58562 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:34.004439 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:34.012574 systemd-logind[1488]: New session 4 of user core.
Nov 8 01:14:34.024687 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 8 01:14:34.057112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:14:34.066001 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 01:14:34.144213 kubelet[1678]: E1108 01:14:34.141738    1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 01:14:34.147078 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 01:14:34.147366 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 01:14:34.645805 sshd[1667]: pam_unix(sshd:session): session closed for user core
Nov 8 01:14:34.650487 systemd[1]: sshd@1-10.244.23.242:22-139.178.68.195:58562.service: Deactivated successfully.
Nov 8 01:14:34.652550 systemd[1]: session-4.scope: Deactivated successfully.
Nov 8 01:14:34.653501 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit.
Nov 8 01:14:34.655125 systemd-logind[1488]: Removed session 4.
Nov 8 01:14:34.817851 systemd[1]: Started sshd@2-10.244.23.242:22-139.178.68.195:60718.service - OpenSSH per-connection server daemon (139.178.68.195:60718).
Nov 8 01:14:35.738837 sshd[1689]: Accepted publickey for core from 139.178.68.195 port 60718 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:35.741072 sshd[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:35.748421 systemd-logind[1488]: New session 5 of user core.
Nov 8 01:14:35.758570 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 8 01:14:36.371009 sshd[1689]: pam_unix(sshd:session): session closed for user core
Nov 8 01:14:36.375874 systemd[1]: sshd@2-10.244.23.242:22-139.178.68.195:60718.service: Deactivated successfully.
Nov 8 01:14:36.377935 systemd[1]: session-5.scope: Deactivated successfully.
Nov 8 01:14:36.379016 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit.
Nov 8 01:14:36.380558 systemd-logind[1488]: Removed session 5.
Nov 8 01:14:36.533663 systemd[1]: Started sshd@3-10.244.23.242:22-139.178.68.195:60730.service - OpenSSH per-connection server daemon (139.178.68.195:60730).
Nov 8 01:14:37.439530 sshd[1696]: Accepted publickey for core from 139.178.68.195 port 60730 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:37.441578 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:37.447822 systemd-logind[1488]: New session 6 of user core.
Nov 8 01:14:37.460553 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 8 01:14:38.071535 sshd[1696]: pam_unix(sshd:session): session closed for user core
Nov 8 01:14:38.075784 systemd[1]: sshd@3-10.244.23.242:22-139.178.68.195:60730.service: Deactivated successfully.
Nov 8 01:14:38.078397 systemd[1]: session-6.scope: Deactivated successfully.
Nov 8 01:14:38.080432 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
Nov 8 01:14:38.081889 systemd-logind[1488]: Removed session 6.
Nov 8 01:14:38.232568 systemd[1]: Started sshd@4-10.244.23.242:22-139.178.68.195:60736.service - OpenSSH per-connection server daemon (139.178.68.195:60736).
Nov 8 01:14:39.156752 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 60736 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:39.159518 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:39.167763 systemd-logind[1488]: New session 7 of user core.
Nov 8 01:14:39.179418 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 8 01:14:39.665543 sudo[1706]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 8 01:14:39.666116 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 01:14:39.686956 sudo[1706]: pam_unix(sudo:session): session closed for user root
Nov 8 01:14:39.836023 sshd[1703]: pam_unix(sshd:session): session closed for user core
Nov 8 01:14:39.842043 systemd[1]: sshd@4-10.244.23.242:22-139.178.68.195:60736.service: Deactivated successfully.
Nov 8 01:14:39.844487 systemd[1]: session-7.scope: Deactivated successfully.
Nov 8 01:14:39.845490 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
Nov 8 01:14:39.847139 systemd-logind[1488]: Removed session 7.
Nov 8 01:14:40.012684 systemd[1]: Started sshd@5-10.244.23.242:22-139.178.68.195:60742.service - OpenSSH per-connection server daemon (139.178.68.195:60742).
Nov 8 01:14:40.935455 sshd[1711]: Accepted publickey for core from 139.178.68.195 port 60742 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:40.937749 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:40.945237 systemd-logind[1488]: New session 8 of user core.
Nov 8 01:14:40.952556 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 8 01:14:41.433535 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 8 01:14:41.434022 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 01:14:41.440398 sudo[1715]: pam_unix(sudo:session): session closed for user root
Nov 8 01:14:41.448585 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 8 01:14:41.449053 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 01:14:41.466921 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 8 01:14:41.482565 auditctl[1718]: No rules
Nov 8 01:14:41.484677 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 8 01:14:41.485048 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 8 01:14:41.491771 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 01:14:41.535439 augenrules[1736]: No rules
Nov 8 01:14:41.536331 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 01:14:41.537990 sudo[1714]: pam_unix(sudo:session): session closed for user root
Nov 8 01:14:41.693049 sshd[1711]: pam_unix(sshd:session): session closed for user core
Nov 8 01:14:41.700483 systemd[1]: sshd@5-10.244.23.242:22-139.178.68.195:60742.service: Deactivated successfully.
Nov 8 01:14:41.703876 systemd[1]: session-8.scope: Deactivated successfully.
Nov 8 01:14:41.705519 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit.
Nov 8 01:14:41.707294 systemd-logind[1488]: Removed session 8.
Nov 8 01:14:41.853603 systemd[1]: Started sshd@6-10.244.23.242:22-139.178.68.195:60750.service - OpenSSH per-connection server daemon (139.178.68.195:60750).
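The sudo entries in this session follow sudo's standard journal format: the invoking user, then semicolon-separated key=value fields (PWD, USER, COMMAND). A minimal illustrative sketch of pulling those fields apart (the parser is mine, not part of any tool shown in the log; it assumes the default field layout seen above):

```python
# Example sudo journal message, copied from one of the entries above.
MSG = ("core : PWD=/home/core ; USER=root ; "
       "COMMAND=/usr/bin/systemctl restart audit-rules")

def parse_sudo(msg):
    """Split a default-format sudo log message into the invoking user and
    a dict of its ' ; '-separated key=value fields.
    Note: assumes COMMAND itself contains no ' ; ' sequence."""
    user, _, rest = msg.partition(" : ")
    fields = {}
    for part in rest.split(" ; "):
        key, _, value = part.partition("=")
        fields[key] = value
    return user, fields

user, fields = parse_sudo(MSG)
print(user, fields["COMMAND"])
```

This is enough to answer "who ran what, from where" when auditing a session like session-8 above.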
Nov 8 01:14:42.783002 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 60750 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:14:42.785217 sshd[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:14:42.791241 systemd-logind[1488]: New session 9 of user core.
Nov 8 01:14:42.798502 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 8 01:14:43.284062 sudo[1747]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 8 01:14:43.284640 sudo[1747]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 8 01:14:43.755639 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 8 01:14:43.755875 (dockerd)[1763]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 8 01:14:44.195312 dockerd[1763]: time="2025-11-08T01:14:44.194696728Z" level=info msg="Starting up"
Nov 8 01:14:44.200634 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 01:14:44.214527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:14:44.473782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:14:44.486748 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 01:14:44.490005 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3368090075-merged.mount: Deactivated successfully.
Nov 8 01:14:44.526579 dockerd[1763]: time="2025-11-08T01:14:44.526508127Z" level=info msg="Loading containers: start."
Nov 8 01:14:44.582225 kubelet[1792]: E1108 01:14:44.581812 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 01:14:44.585829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 01:14:44.586520 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 01:14:44.698218 kernel: Initializing XFRM netlink socket
Nov 8 01:14:44.808414 systemd-networkd[1416]: docker0: Link UP
Nov 8 01:14:44.827699 dockerd[1763]: time="2025-11-08T01:14:44.826163528Z" level=info msg="Loading containers: done."
Nov 8 01:14:44.849529 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3706733524-merged.mount: Deactivated successfully.
Nov 8 01:14:44.859192 dockerd[1763]: time="2025-11-08T01:14:44.858204068Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 8 01:14:44.859192 dockerd[1763]: time="2025-11-08T01:14:44.858492594Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 8 01:14:44.859192 dockerd[1763]: time="2025-11-08T01:14:44.858736173Z" level=info msg="Daemon has completed initialization"
Nov 8 01:14:44.905985 dockerd[1763]: time="2025-11-08T01:14:44.905870929Z" level=info msg="API listen on /run/docker.sock"
Nov 8 01:14:44.906306 systemd[1]: Started docker.service - Docker Application Container Engine.
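The repeating kubelet failure here (and again at restart counters 3 and 4 below) is the usual first-boot ordering: kubelet.service is enabled before /var/lib/kubelet/config.yaml exists, typically because the file is only written later by kubeadm, so each start exits with status 1 and systemd reschedules it. A small sketch of extracting the missing path from such an error line, for monitoring or triage (the regex is mine, written for the message shape logged above, not anything kubelet provides):

```python
import re

# Error message as logged by the kubelet above (shortened to the relevant part).
ERR = ('"command failed" err="failed to load kubelet config file, '
       'path: /var/lib/kubelet/config.yaml, '
       'error: open /var/lib/kubelet/config.yaml: no such file or directory"')

# Capture the non-whitespace run after "path: ", up to the following comma.
match = re.search(r"path: (\S+?),", ERR)
missing = match.group(1) if match else None
print(missing)
```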
Nov 8 01:14:46.128751 containerd[1502]: time="2025-11-08T01:14:46.128663532Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 8 01:14:47.044741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939459367.mount: Deactivated successfully.
Nov 8 01:14:49.226371 containerd[1502]: time="2025-11-08T01:14:49.226073356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:49.228693 containerd[1502]: time="2025-11-08T01:14:49.228343775Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924"
Nov 8 01:14:49.230198 containerd[1502]: time="2025-11-08T01:14:49.229513393Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:49.233813 containerd[1502]: time="2025-11-08T01:14:49.233766624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:49.235735 containerd[1502]: time="2025-11-08T01:14:49.235693074Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.106926398s"
Nov 8 01:14:49.235847 containerd[1502]: time="2025-11-08T01:14:49.235781746Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 8 01:14:49.238605 containerd[1502]: time="2025-11-08T01:14:49.238572967Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 8 01:14:51.467660 containerd[1502]: time="2025-11-08T01:14:51.465947281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:51.467660 containerd[1502]: time="2025-11-08T01:14:51.467566923Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035"
Nov 8 01:14:51.468683 containerd[1502]: time="2025-11-08T01:14:51.468641675Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:51.472206 containerd[1502]: time="2025-11-08T01:14:51.472138265Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:51.473986 containerd[1502]: time="2025-11-08T01:14:51.473942372Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.235212184s"
Nov 8 01:14:51.474076 containerd[1502]: time="2025-11-08T01:14:51.473989249Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 8 01:14:51.478453 containerd[1502]: time="2025-11-08T01:14:51.478392535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 8 01:14:53.442711 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 8 01:14:54.438726 containerd[1502]: time="2025-11-08T01:14:54.438575214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:54.440406 containerd[1502]: time="2025-11-08T01:14:54.440164349Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297"
Nov 8 01:14:54.441925 containerd[1502]: time="2025-11-08T01:14:54.441863942Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:54.445495 containerd[1502]: time="2025-11-08T01:14:54.445452796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:54.448236 containerd[1502]: time="2025-11-08T01:14:54.447117929Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.968281288s"
Nov 8 01:14:54.448236 containerd[1502]: time="2025-11-08T01:14:54.447164926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 8 01:14:54.448801 containerd[1502]: time="2025-11-08T01:14:54.448770077Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 8 01:14:54.811726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 8 01:14:54.818482 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:14:55.000095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:14:55.011759 (kubelet)[1997]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 01:14:55.117664 kubelet[1997]: E1108 01:14:55.117442 1997 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 01:14:55.120953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 01:14:55.121314 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 01:14:57.753032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount303414422.mount: Deactivated successfully.
Nov 8 01:14:58.485098 containerd[1502]: time="2025-11-08T01:14:58.484990360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:58.486682 containerd[1502]: time="2025-11-08T01:14:58.486262305Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214"
Nov 8 01:14:58.488201 containerd[1502]: time="2025-11-08T01:14:58.487125113Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:58.490218 containerd[1502]: time="2025-11-08T01:14:58.489943354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:14:58.492110 containerd[1502]: time="2025-11-08T01:14:58.491157456Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 4.042131548s"
Nov 8 01:14:58.492110 containerd[1502]: time="2025-11-08T01:14:58.491249521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 8 01:14:58.493239 containerd[1502]: time="2025-11-08T01:14:58.493210655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 8 01:14:59.275638 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028677635.mount: Deactivated successfully.
Nov 8 01:15:00.649964 containerd[1502]: time="2025-11-08T01:15:00.649798739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:00.652058 containerd[1502]: time="2025-11-08T01:15:00.651977894Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Nov 8 01:15:00.654863 containerd[1502]: time="2025-11-08T01:15:00.653119382Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:00.660609 containerd[1502]: time="2025-11-08T01:15:00.659229920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:00.665278 containerd[1502]: time="2025-11-08T01:15:00.665214574Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.171860503s"
Nov 8 01:15:00.665488 containerd[1502]: time="2025-11-08T01:15:00.665456329Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 8 01:15:00.667978 containerd[1502]: time="2025-11-08T01:15:00.667689085Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 8 01:15:01.470822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount677108248.mount: Deactivated successfully.
Nov 8 01:15:01.489342 containerd[1502]: time="2025-11-08T01:15:01.489239401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:01.490536 containerd[1502]: time="2025-11-08T01:15:01.490483737Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Nov 8 01:15:01.491202 containerd[1502]: time="2025-11-08T01:15:01.491020817Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:01.495580 containerd[1502]: time="2025-11-08T01:15:01.495515607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:01.497197 containerd[1502]: time="2025-11-08T01:15:01.496721507Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 828.596236ms"
Nov 8 01:15:01.497197 containerd[1502]: time="2025-11-08T01:15:01.496766670Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 8 01:15:01.498516 containerd[1502]: time="2025-11-08T01:15:01.498094493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 8 01:15:02.411536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579780075.mount: Deactivated successfully.
Nov 8 01:15:05.311869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 8 01:15:05.321543 update_engine[1489]: I20251108 01:15:05.321386 1489 update_attempter.cc:509] Updating boot flags...
Nov 8 01:15:05.323606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:15:05.470302 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2129)
Nov 8 01:15:05.740478 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:15:05.743668 (kubelet)[2141]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 01:15:05.856483 kubelet[2141]: E1108 01:15:05.856404 2141 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 01:15:05.860755 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 01:15:05.861059 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 01:15:06.670163 containerd[1502]: time="2025-11-08T01:15:06.670047280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:06.672759 containerd[1502]: time="2025-11-08T01:15:06.672686205Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Nov 8 01:15:06.675193 containerd[1502]: time="2025-11-08T01:15:06.673076275Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:06.677667 containerd[1502]: time="2025-11-08T01:15:06.677624993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 01:15:06.679701 containerd[1502]: time="2025-11-08T01:15:06.679657314Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.181520076s"
Nov 8 01:15:06.679781 containerd[1502]: time="2025-11-08T01:15:06.679706015Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 8 01:15:11.497639 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:15:11.506594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:15:11.552145 systemd[1]: Reloading requested from client PID 2177 ('systemctl') (unit session-9.scope)...
Nov 8 01:15:11.552205 systemd[1]: Reloading...
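Each containerd "Pulled image" entry above reports the image's compressed size and the wall-clock pull duration, so effective pull throughput can be derived from the log alone. A minimal sketch (the regexes are written for the exact message shape shown above and are not any containerd API; MSG is shortened from the etcd pull entry):

```python
import re

# Representative containerd message, shortened from the etcd pull logged above.
MSG = 'Pulled image "registry.k8s.io/etcd:3.5.16-0" size "57680541" in 5.181520076s'

def pull_rate(msg):
    """Return (bytes, seconds, MiB/s) parsed from a containerd
    Pulled-image message; handles both the "...s" and "...ms" duration forms."""
    size = int(re.search(r'size "(\d+)"', msg).group(1))
    dur = re.search(r'\bin ([\d.]+)(ms|s)', msg)
    seconds = float(dur.group(1)) / (1000 if dur.group(2) == "ms" else 1)
    return size, seconds, size / seconds / (1024 * 1024)

size, seconds, rate = pull_rate(MSG)
print(f"{size} bytes in {seconds}s -> {rate:.1f} MiB/s")
```

Applied across the pulls above, this makes it easy to spot which image dominated provisioning time (etcd here, at roughly 5.2 s).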
Nov 8 01:15:11.726221 zram_generator::config[2216]: No configuration found.
Nov 8 01:15:11.887347 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 01:15:11.997871 systemd[1]: Reloading finished in 444 ms.
Nov 8 01:15:12.094545 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 8 01:15:12.094694 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 8 01:15:12.095320 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:15:12.103946 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 01:15:12.290444 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 01:15:12.304756 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 8 01:15:12.381038 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 01:15:12.381038 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 8 01:15:12.381038 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 8 01:15:12.381038 kubelet[2284]: I1108 01:15:12.380400 2284 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 8 01:15:13.072331 kubelet[2284]: I1108 01:15:13.072276 2284 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 8 01:15:13.072512 kubelet[2284]: I1108 01:15:13.072493 2284 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 8 01:15:13.073002 kubelet[2284]: I1108 01:15:13.072976 2284 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 8 01:15:13.107320 kubelet[2284]: E1108 01:15:13.106939 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.23.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError"
Nov 8 01:15:13.108770 kubelet[2284]: I1108 01:15:13.108739 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 8 01:15:13.129367 kubelet[2284]: E1108 01:15:13.129313 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 8 01:15:13.129691 kubelet[2284]: I1108 01:15:13.129583 2284 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 8 01:15:13.140069 kubelet[2284]: I1108 01:15:13.140020 2284 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 8 01:15:13.142420 kubelet[2284]: I1108 01:15:13.142319 2284 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 8 01:15:13.142714 kubelet[2284]: I1108 01:15:13.142391 2284 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-1w3cb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 8 01:15:13.144480 kubelet[2284]: I1108 01:15:13.144416 2284 topology_manager.go:138] "Creating topology manager with none policy"
Nov 8 01:15:13.144480 kubelet[2284]: I1108 01:15:13.144453 2284 container_manager_linux.go:304] "Creating device plugin manager"
Nov 8 01:15:13.146000 kubelet[2284]: I1108 01:15:13.145937 2284 state_mem.go:36] "Initialized new in-memory state store"
Nov 8 01:15:13.150853 kubelet[2284]: I1108 01:15:13.150388 2284 kubelet.go:446] "Attempting to sync node with API server"
Nov 8 01:15:13.150853 kubelet[2284]: I1108 01:15:13.150438 2284 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 8 01:15:13.150853 kubelet[2284]: I1108 01:15:13.150481 2284 kubelet.go:352] "Adding apiserver pod source"
Nov 8 01:15:13.150853 kubelet[2284]: I1108 01:15:13.150507 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 8 01:15:13.158988 kubelet[2284]: W1108 01:15:13.158814 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.23.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1w3cb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused
Nov 8 01:15:13.159265 kubelet[2284]: E1108 01:15:13.159230 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.23.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1w3cb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError"
Nov 8 01:15:13.160668 kubelet[2284]: I1108 01:15:13.160641 2284 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 8 01:15:13.164415 kubelet[2284]: I1108 01:15:13.164388 2284 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 8 01:15:13.165372 kubelet[2284]: W1108 01:15:13.165349 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 8 01:15:13.167102 kubelet[2284]: W1108 01:15:13.166973 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.23.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused
Nov 8 01:15:13.167102 kubelet[2284]: E1108 01:15:13.167034 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.23.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError"
Nov 8 01:15:13.167459 kubelet[2284]: I1108 01:15:13.167433 2284 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 8 01:15:13.167525 kubelet[2284]: I1108 01:15:13.167490 2284 server.go:1287] "Started kubelet"
Nov 8 01:15:13.167983 kubelet[2284]: I1108 01:15:13.167945 2284 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 8 01:15:13.169549 kubelet[2284]: I1108 01:15:13.169524 2284 server.go:479] "Adding debug handlers to kubelet server"
Nov 8 01:15:13.172577 kubelet[2284]: I1108 01:15:13.172120 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 8 01:15:13.172675 kubelet[2284]: I1108 01:15:13.172609 2284 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 8 01:15:13.184340 kubelet[2284]: I1108 01:15:13.184163 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 8 01:15:13.189327 kubelet[2284]: I1108 01:15:13.188968 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 8 01:15:13.194138 kubelet[2284]: I1108 01:15:13.194107 2284 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 8 01:15:13.195607 kubelet[2284]: E1108 01:15:13.195571 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1w3cb.gb1.brightbox.com\" not found"
Nov 8 01:15:13.196506 kubelet[2284]: I1108 01:15:13.196468 2284 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 8 01:15:13.196585 kubelet[2284]: I1108 01:15:13.196568 2284 reconciler.go:26] "Reconciler: start to sync state"
Nov 8 01:15:13.200791 kubelet[2284]: W1108 01:15:13.200710 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.23.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused
Nov 8 01:15:13.200879 kubelet[2284]: E1108 01:15:13.200792 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.23.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError"
Nov 8 01:15:13.200966 kubelet[2284]: E1108 01:15:13.200908 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1w3cb.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.242:6443: connect: connection refused" interval="200ms"
Nov 8 01:15:13.201790 kubelet[2284]: I1108 01:15:13.201756 2284 factory.go:221] Registration of the systemd container factory successfully
Nov 8 01:15:13.201917 kubelet[2284]: I1108 01:15:13.201880 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 8 01:15:13.203507 kubelet[2284]: E1108 01:15:13.173882 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.23.242:6443/api/v1/namespaces/default/events\": dial tcp 10.244.23.242:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-1w3cb.gb1.brightbox.com.1875e316150e466a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-1w3cb.gb1.brightbox.com,UID:srv-1w3cb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-1w3cb.gb1.brightbox.com,},FirstTimestamp:2025-11-08 01:15:13.167459946 +0000 UTC m=+0.855352612,LastTimestamp:2025-11-08 01:15:13.167459946 +0000 UTC m=+0.855352612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-1w3cb.gb1.brightbox.com,}"
Nov 8 01:15:13.205640 kubelet[2284]: I1108 01:15:13.205543 2284 factory.go:221] Registration of the containerd container factory successfully
Nov 8 01:15:13.223575 kubelet[2284]: E1108 01:15:13.223503 2284 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 8 01:15:13.236218 kubelet[2284]: I1108 01:15:13.234770 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 8 01:15:13.236815 kubelet[2284]: I1108 01:15:13.236786 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 8 01:15:13.236910 kubelet[2284]: I1108 01:15:13.236842 2284 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 8 01:15:13.236910 kubelet[2284]: I1108 01:15:13.236889 2284 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 8 01:15:13.236910 kubelet[2284]: I1108 01:15:13.236903 2284 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 01:15:13.237057 kubelet[2284]: E1108 01:15:13.236980 2284 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 01:15:13.241097 kubelet[2284]: W1108 01:15:13.241060 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.23.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused Nov 8 01:15:13.241648 kubelet[2284]: E1108 01:15:13.241613 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.23.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:13.251057 kubelet[2284]: I1108 01:15:13.251012 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 01:15:13.251057 kubelet[2284]: I1108 01:15:13.251043 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 01:15:13.251348 kubelet[2284]: I1108 01:15:13.251076 2284 state_mem.go:36] "Initialized new in-memory state store" Nov 8 01:15:13.253358 kubelet[2284]: I1108 01:15:13.253323 2284 policy_none.go:49] "None policy: Start" Nov 8 01:15:13.253443 kubelet[2284]: I1108 01:15:13.253367 2284 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 01:15:13.253443 kubelet[2284]: I1108 01:15:13.253398 2284 state_mem.go:35] "Initializing new in-memory state store" Nov 8 01:15:13.264851 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 01:15:13.277262 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 8 01:15:13.283211 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 01:15:13.296722 kubelet[2284]: E1108 01:15:13.296653 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" Nov 8 01:15:13.297199 kubelet[2284]: I1108 01:15:13.296814 2284 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 01:15:13.297199 kubelet[2284]: I1108 01:15:13.297141 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 01:15:13.298301 kubelet[2284]: I1108 01:15:13.297195 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 01:15:13.300471 kubelet[2284]: E1108 01:15:13.299827 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 01:15:13.300471 kubelet[2284]: E1108 01:15:13.299915 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-1w3cb.gb1.brightbox.com\" not found" Nov 8 01:15:13.300471 kubelet[2284]: I1108 01:15:13.300400 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 01:15:13.352897 systemd[1]: Created slice kubepods-burstable-podf19f0ca567dedb9ab3ab4b07cac49e69.slice - libcontainer container kubepods-burstable-podf19f0ca567dedb9ab3ab4b07cac49e69.slice. 
Nov 8 01:15:13.378356 kubelet[2284]: E1108 01:15:13.378297 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.384074 systemd[1]: Created slice kubepods-burstable-pod2a646da64ab8d7f97b24f73ab805d740.slice - libcontainer container kubepods-burstable-pod2a646da64ab8d7f97b24f73ab805d740.slice. Nov 8 01:15:13.393487 kubelet[2284]: E1108 01:15:13.393446 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.400494 systemd[1]: Created slice kubepods-burstable-poda4b8ae455b887be179c8c31c2cb7c637.slice - libcontainer container kubepods-burstable-poda4b8ae455b887be179c8c31c2cb7c637.slice. Nov 8 01:15:13.401958 kubelet[2284]: E1108 01:15:13.401899 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1w3cb.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.242:6443: connect: connection refused" interval="400ms" Nov 8 01:15:13.403819 kubelet[2284]: I1108 01:15:13.403250 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.403819 kubelet[2284]: E1108 01:15:13.403562 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.403819 kubelet[2284]: E1108 01:15:13.403614 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.23.242:6443/api/v1/nodes\": dial tcp 10.244.23.242:6443: connect: connection refused" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.497619 kubelet[2284]: I1108 01:15:13.497145 2284 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.497619 kubelet[2284]: I1108 01:15:13.497244 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4b8ae455b887be179c8c31c2cb7c637-kubeconfig\") pod \"kube-scheduler-srv-1w3cb.gb1.brightbox.com\" (UID: \"a4b8ae455b887be179c8c31c2cb7c637\") " pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.497619 kubelet[2284]: I1108 01:15:13.497278 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-ca-certs\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: \"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.497619 kubelet[2284]: I1108 01:15:13.497304 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-k8s-certs\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: \"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.497619 kubelet[2284]: I1108 01:15:13.497358 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-usr-share-ca-certificates\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: 
\"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.498047 kubelet[2284]: I1108 01:15:13.497385 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-k8s-certs\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.498047 kubelet[2284]: I1108 01:15:13.497410 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-kubeconfig\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.498047 kubelet[2284]: I1108 01:15:13.497436 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-ca-certs\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.498047 kubelet[2284]: I1108 01:15:13.497462 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-flexvolume-dir\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.607737 kubelet[2284]: I1108 01:15:13.607585 2284 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.608262 kubelet[2284]: E1108 01:15:13.608223 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.23.242:6443/api/v1/nodes\": dial tcp 10.244.23.242:6443: connect: connection refused" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:13.680817 containerd[1502]: time="2025-11-08T01:15:13.680605775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1w3cb.gb1.brightbox.com,Uid:f19f0ca567dedb9ab3ab4b07cac49e69,Namespace:kube-system,Attempt:0,}" Nov 8 01:15:13.702690 containerd[1502]: time="2025-11-08T01:15:13.702611172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1w3cb.gb1.brightbox.com,Uid:2a646da64ab8d7f97b24f73ab805d740,Namespace:kube-system,Attempt:0,}" Nov 8 01:15:13.705263 containerd[1502]: time="2025-11-08T01:15:13.705226156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1w3cb.gb1.brightbox.com,Uid:a4b8ae455b887be179c8c31c2cb7c637,Namespace:kube-system,Attempt:0,}" Nov 8 01:15:13.803077 kubelet[2284]: E1108 01:15:13.803026 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1w3cb.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.242:6443: connect: connection refused" interval="800ms" Nov 8 01:15:13.977795 kubelet[2284]: W1108 01:15:13.977435 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.23.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused Nov 8 01:15:13.977795 kubelet[2284]: E1108 01:15:13.977624 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.244.23.242:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:14.013602 kubelet[2284]: I1108 01:15:14.013321 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:14.016187 kubelet[2284]: E1108 01:15:14.013772 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.23.242:6443/api/v1/nodes\": dial tcp 10.244.23.242:6443: connect: connection refused" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:14.067324 kubelet[2284]: W1108 01:15:14.067209 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.23.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1w3cb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused Nov 8 01:15:14.067515 kubelet[2284]: E1108 01:15:14.067338 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.23.242:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-1w3cb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:14.429224 kubelet[2284]: W1108 01:15:14.429122 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.23.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused Nov 8 01:15:14.429224 kubelet[2284]: E1108 01:15:14.429242 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.244.23.242:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:14.475094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2548556057.mount: Deactivated successfully. Nov 8 01:15:14.492195 containerd[1502]: time="2025-11-08T01:15:14.492094645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:15:14.499858 containerd[1502]: time="2025-11-08T01:15:14.499720122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 8 01:15:14.501224 containerd[1502]: time="2025-11-08T01:15:14.501097434Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:15:14.502522 containerd[1502]: time="2025-11-08T01:15:14.502477671Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:15:14.504203 containerd[1502]: time="2025-11-08T01:15:14.504043666Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 01:15:14.505626 containerd[1502]: time="2025-11-08T01:15:14.505417391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 01:15:14.505626 containerd[1502]: time="2025-11-08T01:15:14.505534129Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:15:14.509239 containerd[1502]: 
time="2025-11-08T01:15:14.509203229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 01:15:14.512028 containerd[1502]: time="2025-11-08T01:15:14.511749810Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 806.444781ms" Nov 8 01:15:14.515209 containerd[1502]: time="2025-11-08T01:15:14.514197163Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 833.353528ms" Nov 8 01:15:14.517214 containerd[1502]: time="2025-11-08T01:15:14.516685575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 813.961402ms" Nov 8 01:15:14.604641 kubelet[2284]: E1108 01:15:14.604559 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.23.242:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-1w3cb.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.23.242:6443: connect: connection refused" interval="1.6s" Nov 8 01:15:14.683600 kubelet[2284]: W1108 01:15:14.683297 2284 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.CSIDriver: Get "https://10.244.23.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.23.242:6443: connect: connection refused Nov 8 01:15:14.683600 kubelet[2284]: E1108 01:15:14.683390 2284 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.23.242:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:14.726699 containerd[1502]: time="2025-11-08T01:15:14.726498433Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:14.728303 containerd[1502]: time="2025-11-08T01:15:14.727986688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:14.728303 containerd[1502]: time="2025-11-08T01:15:14.728084842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.729739 containerd[1502]: time="2025-11-08T01:15:14.729138455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.731491 containerd[1502]: time="2025-11-08T01:15:14.730209381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:14.731491 containerd[1502]: time="2025-11-08T01:15:14.730286526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:14.731491 containerd[1502]: time="2025-11-08T01:15:14.730306379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.731491 containerd[1502]: time="2025-11-08T01:15:14.730521329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.733996 containerd[1502]: time="2025-11-08T01:15:14.733798951Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:14.735843 containerd[1502]: time="2025-11-08T01:15:14.735673874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:14.735843 containerd[1502]: time="2025-11-08T01:15:14.735729175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.736137 containerd[1502]: time="2025-11-08T01:15:14.735881047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:14.785513 systemd[1]: Started cri-containerd-4868fc690ae2411e545c6c5aea8dddc1e6f65057d2bb7e0591aa5ea7a212369f.scope - libcontainer container 4868fc690ae2411e545c6c5aea8dddc1e6f65057d2bb7e0591aa5ea7a212369f. Nov 8 01:15:14.789841 systemd[1]: Started cri-containerd-98d54a89854eaba7176dc7c28171e3d5c18e59f671b7a8e91e0119fe6497fc3d.scope - libcontainer container 98d54a89854eaba7176dc7c28171e3d5c18e59f671b7a8e91e0119fe6497fc3d. Nov 8 01:15:14.798994 systemd[1]: Started cri-containerd-e41e5acf865f072661c21af6a9e38a0b6dc74cc818c367b6dac07cdad6abfe87.scope - libcontainer container e41e5acf865f072661c21af6a9e38a0b6dc74cc818c367b6dac07cdad6abfe87. 
Nov 8 01:15:14.819202 kubelet[2284]: I1108 01:15:14.819144 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:14.821771 kubelet[2284]: E1108 01:15:14.821698 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.244.23.242:6443/api/v1/nodes\": dial tcp 10.244.23.242:6443: connect: connection refused" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:14.898796 containerd[1502]: time="2025-11-08T01:15:14.898701244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-1w3cb.gb1.brightbox.com,Uid:f19f0ca567dedb9ab3ab4b07cac49e69,Namespace:kube-system,Attempt:0,} returns sandbox id \"4868fc690ae2411e545c6c5aea8dddc1e6f65057d2bb7e0591aa5ea7a212369f\"" Nov 8 01:15:14.909390 containerd[1502]: time="2025-11-08T01:15:14.909156419Z" level=info msg="CreateContainer within sandbox \"4868fc690ae2411e545c6c5aea8dddc1e6f65057d2bb7e0591aa5ea7a212369f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 01:15:14.928544 containerd[1502]: time="2025-11-08T01:15:14.928478137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-1w3cb.gb1.brightbox.com,Uid:2a646da64ab8d7f97b24f73ab805d740,Namespace:kube-system,Attempt:0,} returns sandbox id \"e41e5acf865f072661c21af6a9e38a0b6dc74cc818c367b6dac07cdad6abfe87\"" Nov 8 01:15:14.932278 containerd[1502]: time="2025-11-08T01:15:14.932095305Z" level=info msg="CreateContainer within sandbox \"4868fc690ae2411e545c6c5aea8dddc1e6f65057d2bb7e0591aa5ea7a212369f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"075e2a85cd4c42ae9bf7fddd3c57e634cdf9b5b1cd55eb70dbbacd689d22600c\"" Nov 8 01:15:14.938080 containerd[1502]: time="2025-11-08T01:15:14.937199007Z" level=info msg="StartContainer for \"075e2a85cd4c42ae9bf7fddd3c57e634cdf9b5b1cd55eb70dbbacd689d22600c\"" Nov 8 01:15:14.938774 containerd[1502]: time="2025-11-08T01:15:14.938646412Z" level=info 
msg="CreateContainer within sandbox \"e41e5acf865f072661c21af6a9e38a0b6dc74cc818c367b6dac07cdad6abfe87\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 01:15:14.943728 containerd[1502]: time="2025-11-08T01:15:14.943691905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-1w3cb.gb1.brightbox.com,Uid:a4b8ae455b887be179c8c31c2cb7c637,Namespace:kube-system,Attempt:0,} returns sandbox id \"98d54a89854eaba7176dc7c28171e3d5c18e59f671b7a8e91e0119fe6497fc3d\"" Nov 8 01:15:14.947251 containerd[1502]: time="2025-11-08T01:15:14.947213633Z" level=info msg="CreateContainer within sandbox \"98d54a89854eaba7176dc7c28171e3d5c18e59f671b7a8e91e0119fe6497fc3d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 01:15:14.966440 containerd[1502]: time="2025-11-08T01:15:14.966378619Z" level=info msg="CreateContainer within sandbox \"e41e5acf865f072661c21af6a9e38a0b6dc74cc818c367b6dac07cdad6abfe87\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8cc4f6600515ba05da94147ff1f631fdc8dc2444bc09796b56e4f029034af170\"" Nov 8 01:15:14.969188 containerd[1502]: time="2025-11-08T01:15:14.967544995Z" level=info msg="StartContainer for \"8cc4f6600515ba05da94147ff1f631fdc8dc2444bc09796b56e4f029034af170\"" Nov 8 01:15:14.977032 containerd[1502]: time="2025-11-08T01:15:14.976973637Z" level=info msg="CreateContainer within sandbox \"98d54a89854eaba7176dc7c28171e3d5c18e59f671b7a8e91e0119fe6497fc3d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1dd2a818b4d8938b1e5da6b9eb28f88663418c4c2fb35084471c8c331ed9069d\"" Nov 8 01:15:14.978035 containerd[1502]: time="2025-11-08T01:15:14.978003899Z" level=info msg="StartContainer for \"1dd2a818b4d8938b1e5da6b9eb28f88663418c4c2fb35084471c8c331ed9069d\"" Nov 8 01:15:14.994427 systemd[1]: Started cri-containerd-075e2a85cd4c42ae9bf7fddd3c57e634cdf9b5b1cd55eb70dbbacd689d22600c.scope - libcontainer container 
075e2a85cd4c42ae9bf7fddd3c57e634cdf9b5b1cd55eb70dbbacd689d22600c. Nov 8 01:15:15.048388 systemd[1]: Started cri-containerd-1dd2a818b4d8938b1e5da6b9eb28f88663418c4c2fb35084471c8c331ed9069d.scope - libcontainer container 1dd2a818b4d8938b1e5da6b9eb28f88663418c4c2fb35084471c8c331ed9069d. Nov 8 01:15:15.060688 systemd[1]: Started cri-containerd-8cc4f6600515ba05da94147ff1f631fdc8dc2444bc09796b56e4f029034af170.scope - libcontainer container 8cc4f6600515ba05da94147ff1f631fdc8dc2444bc09796b56e4f029034af170. Nov 8 01:15:15.116218 containerd[1502]: time="2025-11-08T01:15:15.114845783Z" level=info msg="StartContainer for \"075e2a85cd4c42ae9bf7fddd3c57e634cdf9b5b1cd55eb70dbbacd689d22600c\" returns successfully" Nov 8 01:15:15.162036 containerd[1502]: time="2025-11-08T01:15:15.161901865Z" level=info msg="StartContainer for \"1dd2a818b4d8938b1e5da6b9eb28f88663418c4c2fb35084471c8c331ed9069d\" returns successfully" Nov 8 01:15:15.174057 containerd[1502]: time="2025-11-08T01:15:15.173520829Z" level=info msg="StartContainer for \"8cc4f6600515ba05da94147ff1f631fdc8dc2444bc09796b56e4f029034af170\" returns successfully" Nov 8 01:15:15.195269 kubelet[2284]: E1108 01:15:15.194356 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.23.242:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.23.242:6443: connect: connection refused" logger="UnhandledError" Nov 8 01:15:15.258110 kubelet[2284]: E1108 01:15:15.258065 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:15.258761 kubelet[2284]: E1108 01:15:15.258475 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:15.262392 kubelet[2284]: E1108 01:15:15.262363 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:16.270227 kubelet[2284]: E1108 01:15:16.270161 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:16.271687 kubelet[2284]: E1108 01:15:16.270670 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:16.281976 kubelet[2284]: E1108 01:15:16.281924 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:16.426609 kubelet[2284]: I1108 01:15:16.426550 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:17.536976 kubelet[2284]: E1108 01:15:17.536924 2284 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.169213 kubelet[2284]: I1108 01:15:18.168265 2284 apiserver.go:52] "Watching apiserver" Nov 8 01:15:18.192808 kubelet[2284]: E1108 01:15:18.192756 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-1w3cb.gb1.brightbox.com\" not found" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.197087 kubelet[2284]: I1108 01:15:18.197017 2284 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 
01:15:18.222073 kubelet[2284]: I1108 01:15:18.222017 2284 kubelet_node_status.go:78] "Successfully registered node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.222073 kubelet[2284]: E1108 01:15:18.222071 2284 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-1w3cb.gb1.brightbox.com\": node \"srv-1w3cb.gb1.brightbox.com\" not found" Nov 8 01:15:18.285523 kubelet[2284]: E1108 01:15:18.285035 2284 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-1w3cb.gb1.brightbox.com.1875e316150e466a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-1w3cb.gb1.brightbox.com,UID:srv-1w3cb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-1w3cb.gb1.brightbox.com,},FirstTimestamp:2025-11-08 01:15:13.167459946 +0000 UTC m=+0.855352612,LastTimestamp:2025-11-08 01:15:13.167459946 +0000 UTC m=+0.855352612,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-1w3cb.gb1.brightbox.com,}" Nov 8 01:15:18.296576 kubelet[2284]: I1108 01:15:18.296076 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.303898 kubelet[2284]: E1108 01:15:18.303859 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.303898 kubelet[2284]: I1108 01:15:18.303898 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.306093 kubelet[2284]: E1108 01:15:18.306043 2284 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.306093 kubelet[2284]: I1108 01:15:18.306080 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:18.308388 kubelet[2284]: E1108 01:15:18.308356 2284 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1w3cb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:19.854511 kubelet[2284]: I1108 01:15:19.854379 2284 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:19.864590 kubelet[2284]: W1108 01:15:19.864540 2284 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:20.360970 systemd[1]: Reloading requested from client PID 2564 ('systemctl') (unit session-9.scope)... Nov 8 01:15:20.361854 systemd[1]: Reloading... Nov 8 01:15:20.499266 zram_generator::config[2607]: No configuration found. Nov 8 01:15:20.672007 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 01:15:20.805526 systemd[1]: Reloading finished in 442 ms. Nov 8 01:15:20.866603 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:15:20.873796 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 01:15:20.874229 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 01:15:20.874348 systemd[1]: kubelet.service: Consumed 1.421s CPU time, 132.7M memory peak, 0B memory swap peak. Nov 8 01:15:20.881933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 01:15:21.102004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 01:15:21.113681 (kubelet)[2667]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 01:15:21.252128 kubelet[2667]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 01:15:21.253703 kubelet[2667]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 01:15:21.253703 kubelet[2667]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 01:15:21.253703 kubelet[2667]: I1108 01:15:21.252330 2667 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 01:15:21.263075 kubelet[2667]: I1108 01:15:21.263026 2667 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 01:15:21.263339 kubelet[2667]: I1108 01:15:21.263317 2667 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 01:15:21.263805 kubelet[2667]: I1108 01:15:21.263782 2667 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 01:15:21.269211 kubelet[2667]: I1108 01:15:21.269137 2667 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 8 01:15:21.276318 kubelet[2667]: I1108 01:15:21.276033 2667 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 01:15:21.282336 kubelet[2667]: E1108 01:15:21.282299 2667 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 01:15:21.282602 kubelet[2667]: I1108 01:15:21.282580 2667 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 01:15:21.288991 kubelet[2667]: I1108 01:15:21.288947 2667 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 01:15:21.289463 kubelet[2667]: I1108 01:15:21.289405 2667 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 01:15:21.289747 kubelet[2667]: I1108 01:15:21.289460 2667 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"srv-1w3cb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 01:15:21.289984 kubelet[2667]: I1108 01:15:21.289773 2667 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 01:15:21.289984 kubelet[2667]: I1108 01:15:21.289792 2667 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 01:15:21.289984 kubelet[2667]: I1108 01:15:21.289897 2667 state_mem.go:36] "Initialized new in-memory state store" Nov 8 01:15:21.290252 kubelet[2667]: I1108 01:15:21.290222 2667 
kubelet.go:446] "Attempting to sync node with API server" Nov 8 01:15:21.290329 kubelet[2667]: I1108 01:15:21.290269 2667 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 01:15:21.290329 kubelet[2667]: I1108 01:15:21.290313 2667 kubelet.go:352] "Adding apiserver pod source" Nov 8 01:15:21.292531 kubelet[2667]: I1108 01:15:21.290339 2667 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 01:15:21.293109 kubelet[2667]: I1108 01:15:21.293084 2667 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 01:15:21.293818 kubelet[2667]: I1108 01:15:21.293794 2667 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 01:15:21.294756 kubelet[2667]: I1108 01:15:21.294734 2667 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 01:15:21.294926 kubelet[2667]: I1108 01:15:21.294906 2667 server.go:1287] "Started kubelet" Nov 8 01:15:21.309629 kubelet[2667]: I1108 01:15:21.309594 2667 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 01:15:21.319223 kubelet[2667]: I1108 01:15:21.317654 2667 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 01:15:21.319689 kubelet[2667]: I1108 01:15:21.319657 2667 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 01:15:21.322681 kubelet[2667]: I1108 01:15:21.320974 2667 server.go:479] "Adding debug handlers to kubelet server" Nov 8 01:15:21.325982 kubelet[2667]: I1108 01:15:21.318418 2667 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 01:15:21.329340 kubelet[2667]: I1108 01:15:21.329315 2667 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 01:15:21.329493 kubelet[2667]: I1108 
01:15:21.318352 2667 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 01:15:21.337330 kubelet[2667]: I1108 01:15:21.318328 2667 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 01:15:21.338320 kubelet[2667]: I1108 01:15:21.338297 2667 reconciler.go:26] "Reconciler: start to sync state" Nov 8 01:15:21.342442 kubelet[2667]: I1108 01:15:21.342408 2667 factory.go:221] Registration of the systemd container factory successfully Nov 8 01:15:21.342849 kubelet[2667]: I1108 01:15:21.342635 2667 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 01:15:21.347920 kubelet[2667]: I1108 01:15:21.347382 2667 factory.go:221] Registration of the containerd container factory successfully Nov 8 01:15:21.352327 kubelet[2667]: E1108 01:15:21.351657 2667 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 01:15:21.361444 kubelet[2667]: I1108 01:15:21.361265 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 01:15:21.367933 kubelet[2667]: I1108 01:15:21.367891 2667 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 01:15:21.368088 kubelet[2667]: I1108 01:15:21.367965 2667 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 01:15:21.368088 kubelet[2667]: I1108 01:15:21.368007 2667 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 01:15:21.368088 kubelet[2667]: I1108 01:15:21.368026 2667 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 01:15:21.370106 kubelet[2667]: E1108 01:15:21.369945 2667 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 01:15:21.445734 kubelet[2667]: I1108 01:15:21.445699 2667 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 01:15:21.446027 kubelet[2667]: I1108 01:15:21.445976 2667 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 01:15:21.446333 kubelet[2667]: I1108 01:15:21.446313 2667 state_mem.go:36] "Initialized new in-memory state store" Nov 8 01:15:21.446743 kubelet[2667]: I1108 01:15:21.446717 2667 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 01:15:21.446878 kubelet[2667]: I1108 01:15:21.446838 2667 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 01:15:21.446999 kubelet[2667]: I1108 01:15:21.446981 2667 policy_none.go:49] "None policy: Start" Nov 8 01:15:21.447123 kubelet[2667]: I1108 01:15:21.447104 2667 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 01:15:21.447267 kubelet[2667]: I1108 01:15:21.447249 2667 state_mem.go:35] "Initializing new in-memory state store" Nov 8 01:15:21.447554 kubelet[2667]: I1108 01:15:21.447520 2667 state_mem.go:75] "Updated machine memory state" Nov 8 01:15:21.455455 kubelet[2667]: I1108 01:15:21.455408 2667 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 01:15:21.456450 kubelet[2667]: I1108 01:15:21.456411 2667 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 01:15:21.457191 kubelet[2667]: I1108 01:15:21.456780 2667 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 01:15:21.457399 kubelet[2667]: I1108 01:15:21.457363 2667 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Nov 8 01:15:21.460342 kubelet[2667]: E1108 01:15:21.460316 2667 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 01:15:21.473002 kubelet[2667]: I1108 01:15:21.472953 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.475007 kubelet[2667]: I1108 01:15:21.474555 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.475658 kubelet[2667]: I1108 01:15:21.475638 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541086 kubelet[2667]: I1108 01:15:21.541030 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a4b8ae455b887be179c8c31c2cb7c637-kubeconfig\") pod \"kube-scheduler-srv-1w3cb.gb1.brightbox.com\" (UID: \"a4b8ae455b887be179c8c31c2cb7c637\") " pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541649 kubelet[2667]: I1108 01:15:21.541300 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-ca-certs\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541649 kubelet[2667]: I1108 01:15:21.541347 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-flexvolume-dir\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: 
\"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541649 kubelet[2667]: I1108 01:15:21.541377 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-k8s-certs\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541649 kubelet[2667]: I1108 01:15:21.541406 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541649 kubelet[2667]: I1108 01:15:21.541444 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-ca-certs\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: \"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541954 kubelet[2667]: I1108 01:15:21.541471 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-k8s-certs\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: \"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541954 kubelet[2667]: I1108 01:15:21.541497 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f19f0ca567dedb9ab3ab4b07cac49e69-usr-share-ca-certificates\") pod \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" (UID: \"f19f0ca567dedb9ab3ab4b07cac49e69\") " pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.541954 kubelet[2667]: I1108 01:15:21.541528 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2a646da64ab8d7f97b24f73ab805d740-kubeconfig\") pod \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" (UID: \"2a646da64ab8d7f97b24f73ab805d740\") " pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.557365 kubelet[2667]: W1108 01:15:21.556972 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:21.588973 kubelet[2667]: I1108 01:15:21.588935 2667 kubelet_node_status.go:75] "Attempting to register node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.602828 kubelet[2667]: W1108 01:15:21.602660 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:21.627753 kubelet[2667]: W1108 01:15:21.627697 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:21.628419 kubelet[2667]: E1108 01:15:21.628062 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-1w3cb.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.758215 kubelet[2667]: I1108 01:15:21.757289 2667 kubelet_node_status.go:124] "Node was previously registered" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:21.758215 kubelet[2667]: 
I1108 01:15:21.757418 2667 kubelet_node_status.go:78] "Successfully registered node" node="srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:22.291483 kubelet[2667]: I1108 01:15:22.291428 2667 apiserver.go:52] "Watching apiserver" Nov 8 01:15:22.337220 kubelet[2667]: I1108 01:15:22.336961 2667 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 01:15:22.411518 kubelet[2667]: I1108 01:15:22.409844 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:22.417121 kubelet[2667]: I1108 01:15:22.417078 2667 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:22.418558 kubelet[2667]: W1108 01:15:22.418460 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:22.420373 kubelet[2667]: E1108 01:15:22.420327 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-1w3cb.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:22.443432 kubelet[2667]: W1108 01:15:22.443035 2667 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 8 01:15:22.443432 kubelet[2667]: E1108 01:15:22.443122 2667 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-1w3cb.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" Nov 8 01:15:22.503203 kubelet[2667]: I1108 01:15:22.501924 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-1w3cb.gb1.brightbox.com" podStartSLOduration=1.501887032 podStartE2EDuration="1.501887032s" 
podCreationTimestamp="2025-11-08 01:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:15:22.482009093 +0000 UTC m=+1.300115105" watchObservedRunningTime="2025-11-08 01:15:22.501887032 +0000 UTC m=+1.319993028" Nov 8 01:15:22.503203 kubelet[2667]: I1108 01:15:22.502095 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-1w3cb.gb1.brightbox.com" podStartSLOduration=1.502085742 podStartE2EDuration="1.502085742s" podCreationTimestamp="2025-11-08 01:15:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:15:22.501640722 +0000 UTC m=+1.319746731" watchObservedRunningTime="2025-11-08 01:15:22.502085742 +0000 UTC m=+1.320191745" Nov 8 01:15:22.520242 kubelet[2667]: I1108 01:15:22.519417 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-1w3cb.gb1.brightbox.com" podStartSLOduration=3.51939489 podStartE2EDuration="3.51939489s" podCreationTimestamp="2025-11-08 01:15:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:15:22.518632347 +0000 UTC m=+1.336738357" watchObservedRunningTime="2025-11-08 01:15:22.51939489 +0000 UTC m=+1.337500882" Nov 8 01:15:27.404865 kubelet[2667]: I1108 01:15:27.404617 2667 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 01:15:27.406075 kubelet[2667]: I1108 01:15:27.405442 2667 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 01:15:27.406146 containerd[1502]: time="2025-11-08T01:15:27.405125353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 01:15:27.547996 systemd[1]: Created slice kubepods-besteffort-pod25679722_eb9e_4716_83e9_1105373dce18.slice - libcontainer container kubepods-besteffort-pod25679722_eb9e_4716_83e9_1105373dce18.slice. Nov 8 01:15:27.582208 kubelet[2667]: I1108 01:15:27.580561 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6g6lh\" (UniqueName: \"kubernetes.io/projected/25679722-eb9e-4716-83e9-1105373dce18-kube-api-access-6g6lh\") pod \"kube-proxy-4n9hc\" (UID: \"25679722-eb9e-4716-83e9-1105373dce18\") " pod="kube-system/kube-proxy-4n9hc" Nov 8 01:15:27.582208 kubelet[2667]: I1108 01:15:27.580664 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/25679722-eb9e-4716-83e9-1105373dce18-kube-proxy\") pod \"kube-proxy-4n9hc\" (UID: \"25679722-eb9e-4716-83e9-1105373dce18\") " pod="kube-system/kube-proxy-4n9hc" Nov 8 01:15:27.582208 kubelet[2667]: I1108 01:15:27.580714 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/25679722-eb9e-4716-83e9-1105373dce18-xtables-lock\") pod \"kube-proxy-4n9hc\" (UID: \"25679722-eb9e-4716-83e9-1105373dce18\") " pod="kube-system/kube-proxy-4n9hc" Nov 8 01:15:27.582208 kubelet[2667]: I1108 01:15:27.580750 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/25679722-eb9e-4716-83e9-1105373dce18-lib-modules\") pod \"kube-proxy-4n9hc\" (UID: \"25679722-eb9e-4716-83e9-1105373dce18\") " pod="kube-system/kube-proxy-4n9hc" Nov 8 01:15:27.662674 systemd[1]: Created slice kubepods-besteffort-podc700e9a5_500d_4652_ad8a_686caf16519f.slice - libcontainer container kubepods-besteffort-podc700e9a5_500d_4652_ad8a_686caf16519f.slice. 
Nov 8 01:15:27.681543 kubelet[2667]: I1108 01:15:27.681476 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x5ms\" (UniqueName: \"kubernetes.io/projected/c700e9a5-500d-4652-ad8a-686caf16519f-kube-api-access-2x5ms\") pod \"tigera-operator-7dcd859c48-nstmj\" (UID: \"c700e9a5-500d-4652-ad8a-686caf16519f\") " pod="tigera-operator/tigera-operator-7dcd859c48-nstmj" Nov 8 01:15:27.682459 kubelet[2667]: I1108 01:15:27.681832 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c700e9a5-500d-4652-ad8a-686caf16519f-var-lib-calico\") pod \"tigera-operator-7dcd859c48-nstmj\" (UID: \"c700e9a5-500d-4652-ad8a-686caf16519f\") " pod="tigera-operator/tigera-operator-7dcd859c48-nstmj" Nov 8 01:15:27.861460 containerd[1502]: time="2025-11-08T01:15:27.861368152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n9hc,Uid:25679722-eb9e-4716-83e9-1105373dce18,Namespace:kube-system,Attempt:0,}" Nov 8 01:15:27.905384 containerd[1502]: time="2025-11-08T01:15:27.904103484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:27.907337 containerd[1502]: time="2025-11-08T01:15:27.905471730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:27.907337 containerd[1502]: time="2025-11-08T01:15:27.905496183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:27.907337 containerd[1502]: time="2025-11-08T01:15:27.905816245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:27.953504 systemd[1]: Started cri-containerd-adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9.scope - libcontainer container adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9. Nov 8 01:15:27.969129 containerd[1502]: time="2025-11-08T01:15:27.969069446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-nstmj,Uid:c700e9a5-500d-4652-ad8a-686caf16519f,Namespace:tigera-operator,Attempt:0,}" Nov 8 01:15:27.999469 containerd[1502]: time="2025-11-08T01:15:27.999405759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n9hc,Uid:25679722-eb9e-4716-83e9-1105373dce18,Namespace:kube-system,Attempt:0,} returns sandbox id \"adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9\"" Nov 8 01:15:28.007402 containerd[1502]: time="2025-11-08T01:15:28.007320386Z" level=info msg="CreateContainer within sandbox \"adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 01:15:28.023009 containerd[1502]: time="2025-11-08T01:15:28.022802699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:28.023009 containerd[1502]: time="2025-11-08T01:15:28.022921079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:28.023336 containerd[1502]: time="2025-11-08T01:15:28.022947289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:28.023336 containerd[1502]: time="2025-11-08T01:15:28.023116495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:28.030637 containerd[1502]: time="2025-11-08T01:15:28.030423349Z" level=info msg="CreateContainer within sandbox \"adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0e0a6f4f035a3e0ec69879648002abd76419f92545e57d3d465e11aee80f7a16\"" Nov 8 01:15:28.032283 containerd[1502]: time="2025-11-08T01:15:28.032250453Z" level=info msg="StartContainer for \"0e0a6f4f035a3e0ec69879648002abd76419f92545e57d3d465e11aee80f7a16\"" Nov 8 01:15:28.058911 systemd[1]: Started cri-containerd-574ac19e9792e74815c7bbb4a6f22686e66f52fbb71377e71b60f58442cf9a7f.scope - libcontainer container 574ac19e9792e74815c7bbb4a6f22686e66f52fbb71377e71b60f58442cf9a7f. Nov 8 01:15:28.111405 systemd[1]: Started cri-containerd-0e0a6f4f035a3e0ec69879648002abd76419f92545e57d3d465e11aee80f7a16.scope - libcontainer container 0e0a6f4f035a3e0ec69879648002abd76419f92545e57d3d465e11aee80f7a16. 
Nov 8 01:15:28.176049 containerd[1502]: time="2025-11-08T01:15:28.175922421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-nstmj,Uid:c700e9a5-500d-4652-ad8a-686caf16519f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"574ac19e9792e74815c7bbb4a6f22686e66f52fbb71377e71b60f58442cf9a7f\"" Nov 8 01:15:28.180575 containerd[1502]: time="2025-11-08T01:15:28.180415972Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 01:15:28.189462 containerd[1502]: time="2025-11-08T01:15:28.189416816Z" level=info msg="StartContainer for \"0e0a6f4f035a3e0ec69879648002abd76419f92545e57d3d465e11aee80f7a16\" returns successfully" Nov 8 01:15:28.445629 kubelet[2667]: I1108 01:15:28.444211 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n9hc" podStartSLOduration=1.444159831 podStartE2EDuration="1.444159831s" podCreationTimestamp="2025-11-08 01:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:15:28.443529026 +0000 UTC m=+7.261635043" watchObservedRunningTime="2025-11-08 01:15:28.444159831 +0000 UTC m=+7.262265836" Nov 8 01:15:28.702079 systemd[1]: run-containerd-runc-k8s.io-adef67b3323bd1b63ad7910dc006a388e28f079ac5e98676cf13729531e6f9c9-runc.TtJC8D.mount: Deactivated successfully. Nov 8 01:15:30.811502 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount281748429.mount: Deactivated successfully. 
Nov 8 01:15:31.867494 containerd[1502]: time="2025-11-08T01:15:31.867401736Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:31.869366 containerd[1502]: time="2025-11-08T01:15:31.869305392Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 01:15:31.870332 containerd[1502]: time="2025-11-08T01:15:31.870270846Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:31.874559 containerd[1502]: time="2025-11-08T01:15:31.873200850Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:31.874559 containerd[1502]: time="2025-11-08T01:15:31.874399883Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 3.693926942s" Nov 8 01:15:31.874559 containerd[1502]: time="2025-11-08T01:15:31.874447122Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 01:15:31.879558 containerd[1502]: time="2025-11-08T01:15:31.879507354Z" level=info msg="CreateContainer within sandbox \"574ac19e9792e74815c7bbb4a6f22686e66f52fbb71377e71b60f58442cf9a7f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 01:15:31.920440 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954592854.mount: Deactivated successfully. 
Nov 8 01:15:31.925220 containerd[1502]: time="2025-11-08T01:15:31.925157682Z" level=info msg="CreateContainer within sandbox \"574ac19e9792e74815c7bbb4a6f22686e66f52fbb71377e71b60f58442cf9a7f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b493cf108e701bbba7a5a49a3945b2de999a853e584eb9aa98a4533b3e931673\""
Nov 8 01:15:31.927225 containerd[1502]: time="2025-11-08T01:15:31.926290685Z" level=info msg="StartContainer for \"b493cf108e701bbba7a5a49a3945b2de999a853e584eb9aa98a4533b3e931673\""
Nov 8 01:15:31.992478 systemd[1]: Started cri-containerd-b493cf108e701bbba7a5a49a3945b2de999a853e584eb9aa98a4533b3e931673.scope - libcontainer container b493cf108e701bbba7a5a49a3945b2de999a853e584eb9aa98a4533b3e931673.
Nov 8 01:15:32.039512 containerd[1502]: time="2025-11-08T01:15:32.039461691Z" level=info msg="StartContainer for \"b493cf108e701bbba7a5a49a3945b2de999a853e584eb9aa98a4533b3e931673\" returns successfully"
Nov 8 01:15:32.456325 kubelet[2667]: I1108 01:15:32.455665 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-nstmj" podStartSLOduration=1.7591515659999999 podStartE2EDuration="5.455608652s" podCreationTimestamp="2025-11-08 01:15:27 +0000 UTC" firstStartedPulling="2025-11-08 01:15:28.179393718 +0000 UTC m=+6.997499715" lastFinishedPulling="2025-11-08 01:15:31.87585079 +0000 UTC m=+10.693956801" observedRunningTime="2025-11-08 01:15:32.45399003 +0000 UTC m=+11.272096035" watchObservedRunningTime="2025-11-08 01:15:32.455608652 +0000 UTC m=+11.273714657"
Nov 8 01:15:37.948904 sudo[1747]: pam_unix(sudo:session): session closed for user root
Nov 8 01:15:38.125275 sshd[1744]: pam_unix(sshd:session): session closed for user core
Nov 8 01:15:38.133804 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
Nov 8 01:15:38.135018 systemd[1]: sshd@6-10.244.23.242:22-139.178.68.195:60750.service: Deactivated successfully.
Nov 8 01:15:38.140299 systemd[1]: session-9.scope: Deactivated successfully.
Nov 8 01:15:38.140732 systemd[1]: session-9.scope: Consumed 7.234s CPU time, 141.8M memory peak, 0B memory swap peak.
Nov 8 01:15:38.143746 systemd-logind[1488]: Removed session 9.
Nov 8 01:15:45.832867 systemd[1]: Created slice kubepods-besteffort-podd6afd309_10d7_44ee_99c7_bd33c8d7bc08.slice - libcontainer container kubepods-besteffort-podd6afd309_10d7_44ee_99c7_bd33c8d7bc08.slice.
Nov 8 01:15:45.910812 kubelet[2667]: I1108 01:15:45.910573 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6afd309-10d7-44ee-99c7-bd33c8d7bc08-typha-certs\") pod \"calico-typha-75b474bf-wxvb9\" (UID: \"d6afd309-10d7-44ee-99c7-bd33c8d7bc08\") " pod="calico-system/calico-typha-75b474bf-wxvb9"
Nov 8 01:15:45.910812 kubelet[2667]: I1108 01:15:45.910695 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbjj9\" (UniqueName: \"kubernetes.io/projected/d6afd309-10d7-44ee-99c7-bd33c8d7bc08-kube-api-access-dbjj9\") pod \"calico-typha-75b474bf-wxvb9\" (UID: \"d6afd309-10d7-44ee-99c7-bd33c8d7bc08\") " pod="calico-system/calico-typha-75b474bf-wxvb9"
Nov 8 01:15:45.910812 kubelet[2667]: I1108 01:15:45.910746 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6afd309-10d7-44ee-99c7-bd33c8d7bc08-tigera-ca-bundle\") pod \"calico-typha-75b474bf-wxvb9\" (UID: \"d6afd309-10d7-44ee-99c7-bd33c8d7bc08\") " pod="calico-system/calico-typha-75b474bf-wxvb9"
Nov 8 01:15:46.051661 kubelet[2667]: I1108 01:15:46.051579 2667 status_manager.go:890] "Failed to get status for pod" podUID="e8b56370-1b9d-42a9-bea5-73bbd242122c" pod="calico-system/calico-node-wlhmp" err="pods \"calico-node-wlhmp\" is forbidden: User \"system:node:srv-1w3cb.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-1w3cb.gb1.brightbox.com' and this object"
Nov 8 01:15:46.070991 systemd[1]: Created slice kubepods-besteffort-pode8b56370_1b9d_42a9_bea5_73bbd242122c.slice - libcontainer container kubepods-besteffort-pode8b56370_1b9d_42a9_bea5_73bbd242122c.slice.
Nov 8 01:15:46.112916 kubelet[2667]: I1108 01:15:46.112652 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-var-lib-calico\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.112916 kubelet[2667]: I1108 01:15:46.112728 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-var-run-calico\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.112916 kubelet[2667]: I1108 01:15:46.112780 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lmbv\" (UniqueName: \"kubernetes.io/projected/e8b56370-1b9d-42a9-bea5-73bbd242122c-kube-api-access-4lmbv\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.112916 kubelet[2667]: I1108 01:15:46.112843 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-cni-net-dir\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.112916 kubelet[2667]: I1108 01:15:46.112873 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-flexvol-driver-host\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.113321 kubelet[2667]: I1108 01:15:46.112906 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-policysync\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.113321 kubelet[2667]: I1108 01:15:46.112932 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-cni-bin-dir\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.113321 kubelet[2667]: I1108 01:15:46.112957 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-cni-log-dir\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.113321 kubelet[2667]: I1108 01:15:46.112983 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8b56370-1b9d-42a9-bea5-73bbd242122c-node-certs\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.113321 kubelet[2667]: I1108 01:15:46.113018 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8b56370-1b9d-42a9-bea5-73bbd242122c-tigera-ca-bundle\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.114270 kubelet[2667]: I1108 01:15:46.113067 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-lib-modules\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.114270 kubelet[2667]: I1108 01:15:46.113093 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8b56370-1b9d-42a9-bea5-73bbd242122c-xtables-lock\") pod \"calico-node-wlhmp\" (UID: \"e8b56370-1b9d-42a9-bea5-73bbd242122c\") " pod="calico-system/calico-node-wlhmp"
Nov 8 01:15:46.141135 containerd[1502]: time="2025-11-08T01:15:46.141005346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b474bf-wxvb9,Uid:d6afd309-10d7-44ee-99c7-bd33c8d7bc08,Namespace:calico-system,Attempt:0,}"
Nov 8 01:15:46.200233 containerd[1502]: time="2025-11-08T01:15:46.199987708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 01:15:46.200233 containerd[1502]: time="2025-11-08T01:15:46.200124178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 01:15:46.200233 containerd[1502]: time="2025-11-08T01:15:46.200150082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:15:46.200635 containerd[1502]: time="2025-11-08T01:15:46.200341550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 01:15:46.274479 kubelet[2667]: E1108 01:15:46.272773 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.274479 kubelet[2667]: W1108 01:15:46.272862 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.277103 kubelet[2667]: E1108 01:15:46.275939 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.298418 kubelet[2667]: E1108 01:15:46.297787 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.298418 kubelet[2667]: W1108 01:15:46.297815 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.298418 kubelet[2667]: E1108 01:15:46.297844 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.304868 kubelet[2667]: E1108 01:15:46.303562 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b"
Nov 8 01:15:46.307057 kubelet[2667]: E1108 01:15:46.306851 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.307057 kubelet[2667]: W1108 01:15:46.306920 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.308249 kubelet[2667]: E1108 01:15:46.307315 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.308742 kubelet[2667]: E1108 01:15:46.308619 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.308742 kubelet[2667]: W1108 01:15:46.308639 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.308742 kubelet[2667]: E1108 01:15:46.308676 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.309359 kubelet[2667]: E1108 01:15:46.309216 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.309359 kubelet[2667]: W1108 01:15:46.309235 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.309359 kubelet[2667]: E1108 01:15:46.309251 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.310505 kubelet[2667]: E1108 01:15:46.310133 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.310505 kubelet[2667]: W1108 01:15:46.310264 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.310505 kubelet[2667]: E1108 01:15:46.310286 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.311449 systemd[1]: Started cri-containerd-05d52d94cee7a9b7f6abf3e53ed35f23ab698faf2932db143399b147b0456482.scope - libcontainer container 05d52d94cee7a9b7f6abf3e53ed35f23ab698faf2932db143399b147b0456482.
Nov 8 01:15:46.313288 kubelet[2667]: E1108 01:15:46.312691 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.313288 kubelet[2667]: W1108 01:15:46.312731 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.313288 kubelet[2667]: E1108 01:15:46.312751 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.314396 kubelet[2667]: E1108 01:15:46.313999 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.314396 kubelet[2667]: W1108 01:15:46.314018 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.314396 kubelet[2667]: E1108 01:15:46.314155 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.315670 kubelet[2667]: E1108 01:15:46.315497 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.315670 kubelet[2667]: W1108 01:15:46.315550 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.315670 kubelet[2667]: E1108 01:15:46.315567 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.317795 kubelet[2667]: E1108 01:15:46.317771 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.318413 kubelet[2667]: W1108 01:15:46.318054 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.318413 kubelet[2667]: E1108 01:15:46.318113 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.320071 kubelet[2667]: E1108 01:15:46.319908 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.320071 kubelet[2667]: W1108 01:15:46.319930 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.320071 kubelet[2667]: E1108 01:15:46.319947 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.321843 kubelet[2667]: E1108 01:15:46.321179 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.321843 kubelet[2667]: W1108 01:15:46.321262 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.321843 kubelet[2667]: E1108 01:15:46.321327 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.323013 kubelet[2667]: E1108 01:15:46.322691 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.323013 kubelet[2667]: W1108 01:15:46.322874 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.323407 kubelet[2667]: E1108 01:15:46.322894 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.324409 kubelet[2667]: E1108 01:15:46.324209 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.324409 kubelet[2667]: W1108 01:15:46.324229 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.324409 kubelet[2667]: E1108 01:15:46.324246 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.325620 kubelet[2667]: E1108 01:15:46.325553 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.325620 kubelet[2667]: W1108 01:15:46.325573 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.326217 kubelet[2667]: E1108 01:15:46.325836 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.326938 kubelet[2667]: E1108 01:15:46.326840 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.327355 kubelet[2667]: W1108 01:15:46.327078 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.327355 kubelet[2667]: E1108 01:15:46.327104 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.328259 kubelet[2667]: E1108 01:15:46.328027 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.328259 kubelet[2667]: W1108 01:15:46.328086 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.328259 kubelet[2667]: E1108 01:15:46.328106 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.329516 kubelet[2667]: E1108 01:15:46.329135 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.329516 kubelet[2667]: W1108 01:15:46.329233 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.330051 kubelet[2667]: E1108 01:15:46.329255 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.331465 kubelet[2667]: E1108 01:15:46.331257 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.331928 kubelet[2667]: W1108 01:15:46.331599 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.331928 kubelet[2667]: E1108 01:15:46.331630 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.335219 kubelet[2667]: E1108 01:15:46.334652 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.335219 kubelet[2667]: W1108 01:15:46.334677 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.335219 kubelet[2667]: E1108 01:15:46.334697 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.336699 kubelet[2667]: E1108 01:15:46.336427 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.336699 kubelet[2667]: W1108 01:15:46.336448 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.336699 kubelet[2667]: E1108 01:15:46.336465 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.338026 kubelet[2667]: E1108 01:15:46.337737 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.338026 kubelet[2667]: W1108 01:15:46.337765 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.338026 kubelet[2667]: E1108 01:15:46.337783 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.339216 kubelet[2667]: E1108 01:15:46.339128 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.339216 kubelet[2667]: W1108 01:15:46.339147 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.340207 kubelet[2667]: E1108 01:15:46.339164 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.340207 kubelet[2667]: I1108 01:15:46.339503 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xzb6\" (UniqueName: \"kubernetes.io/projected/51b46487-75c6-4a08-a5c4-0240abff3a0b-kube-api-access-7xzb6\") pod \"csi-node-driver-qxczs\" (UID: \"51b46487-75c6-4a08-a5c4-0240abff3a0b\") " pod="calico-system/csi-node-driver-qxczs"
Nov 8 01:15:46.341030 kubelet[2667]: E1108 01:15:46.340651 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.341030 kubelet[2667]: W1108 01:15:46.340762 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.341030 kubelet[2667]: E1108 01:15:46.340811 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.341801 kubelet[2667]: E1108 01:15:46.341779 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.342126 kubelet[2667]: W1108 01:15:46.341965 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.342126 kubelet[2667]: E1108 01:15:46.341995 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.343484 kubelet[2667]: E1108 01:15:46.343457 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.343854 kubelet[2667]: W1108 01:15:46.343582 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.343854 kubelet[2667]: E1108 01:15:46.343609 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.343854 kubelet[2667]: I1108 01:15:46.343636 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/51b46487-75c6-4a08-a5c4-0240abff3a0b-registration-dir\") pod \"csi-node-driver-qxczs\" (UID: \"51b46487-75c6-4a08-a5c4-0240abff3a0b\") " pod="calico-system/csi-node-driver-qxczs"
Nov 8 01:15:46.345096 kubelet[2667]: E1108 01:15:46.345074 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.345339 kubelet[2667]: W1108 01:15:46.345314 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.345517 kubelet[2667]: E1108 01:15:46.345457 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.345665 kubelet[2667]: I1108 01:15:46.345626 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/51b46487-75c6-4a08-a5c4-0240abff3a0b-varrun\") pod \"csi-node-driver-qxczs\" (UID: \"51b46487-75c6-4a08-a5c4-0240abff3a0b\") " pod="calico-system/csi-node-driver-qxczs"
Nov 8 01:15:46.346717 kubelet[2667]: E1108 01:15:46.346404 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.346717 kubelet[2667]: W1108 01:15:46.346425 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.346717 kubelet[2667]: E1108 01:15:46.346442 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.346717 kubelet[2667]: I1108 01:15:46.346466 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/51b46487-75c6-4a08-a5c4-0240abff3a0b-kubelet-dir\") pod \"csi-node-driver-qxczs\" (UID: \"51b46487-75c6-4a08-a5c4-0240abff3a0b\") " pod="calico-system/csi-node-driver-qxczs"
Nov 8 01:15:46.348524 kubelet[2667]: E1108 01:15:46.348201 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.348524 kubelet[2667]: W1108 01:15:46.348223 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.348524 kubelet[2667]: E1108 01:15:46.348249 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.348524 kubelet[2667]: I1108 01:15:46.348276 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/51b46487-75c6-4a08-a5c4-0240abff3a0b-socket-dir\") pod \"csi-node-driver-qxczs\" (UID: \"51b46487-75c6-4a08-a5c4-0240abff3a0b\") " pod="calico-system/csi-node-driver-qxczs"
Nov 8 01:15:46.349314 kubelet[2667]: E1108 01:15:46.348996 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.349314 kubelet[2667]: W1108 01:15:46.349017 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.350084 kubelet[2667]: E1108 01:15:46.349453 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.350358 kubelet[2667]: E1108 01:15:46.350339 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.350563 kubelet[2667]: W1108 01:15:46.350461 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.350563 kubelet[2667]: E1108 01:15:46.350506 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.351669 kubelet[2667]: E1108 01:15:46.351504 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.351669 kubelet[2667]: W1108 01:15:46.351524 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.351669 kubelet[2667]: E1108 01:15:46.351559 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.352641 kubelet[2667]: E1108 01:15:46.352242 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.352641 kubelet[2667]: W1108 01:15:46.352261 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.352641 kubelet[2667]: E1108 01:15:46.352283 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.353654 kubelet[2667]: E1108 01:15:46.353549 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.353654 kubelet[2667]: W1108 01:15:46.353564 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.353654 kubelet[2667]: E1108 01:15:46.353580 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 8 01:15:46.355382 kubelet[2667]: E1108 01:15:46.354666 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 8 01:15:46.355382 kubelet[2667]: W1108 01:15:46.354685 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 8 01:15:46.355382 kubelet[2667]: E1108 01:15:46.354702 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.355382 kubelet[2667]: E1108 01:15:46.355041 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.355382 kubelet[2667]: W1108 01:15:46.355057 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.355382 kubelet[2667]: E1108 01:15:46.355072 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.357577 kubelet[2667]: E1108 01:15:46.356927 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.357577 kubelet[2667]: W1108 01:15:46.356948 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.357577 kubelet[2667]: E1108 01:15:46.356980 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.383504 containerd[1502]: time="2025-11-08T01:15:46.382266717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlhmp,Uid:e8b56370-1b9d-42a9-bea5-73bbd242122c,Namespace:calico-system,Attempt:0,}" Nov 8 01:15:46.450515 kubelet[2667]: E1108 01:15:46.450048 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.450515 kubelet[2667]: W1108 01:15:46.450097 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.450515 kubelet[2667]: E1108 01:15:46.450147 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.453336 kubelet[2667]: E1108 01:15:46.452597 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.453336 kubelet[2667]: W1108 01:15:46.452619 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.453336 kubelet[2667]: E1108 01:15:46.452664 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.453336 kubelet[2667]: E1108 01:15:46.453037 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.453336 kubelet[2667]: W1108 01:15:46.453053 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.453336 kubelet[2667]: E1108 01:15:46.453091 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.453801 kubelet[2667]: E1108 01:15:46.453757 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.453801 kubelet[2667]: W1108 01:15:46.453777 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.457582 kubelet[2667]: E1108 01:15:46.455695 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.457582 kubelet[2667]: E1108 01:15:46.456151 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.457582 kubelet[2667]: W1108 01:15:46.456340 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.457582 kubelet[2667]: E1108 01:15:46.456475 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.457582 kubelet[2667]: E1108 01:15:46.456843 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.457582 kubelet[2667]: W1108 01:15:46.456858 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.457582 kubelet[2667]: E1108 01:15:46.457511 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.458085 kubelet[2667]: E1108 01:15:46.458064 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.458358 kubelet[2667]: W1108 01:15:46.458211 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.458358 kubelet[2667]: E1108 01:15:46.458245 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.458814 kubelet[2667]: E1108 01:15:46.458794 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.458930 kubelet[2667]: W1108 01:15:46.458908 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.459122 kubelet[2667]: E1108 01:15:46.459100 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.460848 kubelet[2667]: E1108 01:15:46.460504 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.460848 kubelet[2667]: W1108 01:15:46.460524 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.460848 kubelet[2667]: E1108 01:15:46.460642 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.461280 kubelet[2667]: E1108 01:15:46.461259 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.461426 kubelet[2667]: W1108 01:15:46.461403 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.462247 kubelet[2667]: E1108 01:15:46.462043 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.463340 kubelet[2667]: E1108 01:15:46.462583 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.463340 kubelet[2667]: W1108 01:15:46.462602 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.463340 kubelet[2667]: E1108 01:15:46.463300 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.464725 kubelet[2667]: E1108 01:15:46.464702 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.464885 kubelet[2667]: W1108 01:15:46.464860 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.466474 kubelet[2667]: E1108 01:15:46.465994 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.466474 kubelet[2667]: E1108 01:15:46.466332 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.466474 kubelet[2667]: W1108 01:15:46.466348 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.466708 kubelet[2667]: E1108 01:15:46.466685 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.467230 kubelet[2667]: E1108 01:15:46.467043 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.467230 kubelet[2667]: W1108 01:15:46.467062 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.467230 kubelet[2667]: E1108 01:15:46.467198 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.467603 kubelet[2667]: E1108 01:15:46.467584 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.467925 kubelet[2667]: W1108 01:15:46.467784 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.469252 kubelet[2667]: E1108 01:15:46.468555 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.469415 kubelet[2667]: E1108 01:15:46.469395 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.469559 kubelet[2667]: W1108 01:15:46.469513 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.469929 kubelet[2667]: E1108 01:15:46.469791 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.471663 kubelet[2667]: E1108 01:15:46.471371 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.471663 kubelet[2667]: W1108 01:15:46.471391 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.471663 kubelet[2667]: E1108 01:15:46.471521 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.472003 kubelet[2667]: E1108 01:15:46.471937 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.472003 kubelet[2667]: W1108 01:15:46.471956 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.472358 kubelet[2667]: E1108 01:15:46.472213 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.472817 kubelet[2667]: E1108 01:15:46.472656 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.472817 kubelet[2667]: W1108 01:15:46.472680 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.472994 kubelet[2667]: E1108 01:15:46.472970 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.474139 kubelet[2667]: E1108 01:15:46.473543 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.474139 kubelet[2667]: W1108 01:15:46.473565 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.474404 kubelet[2667]: E1108 01:15:46.474382 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.474831 kubelet[2667]: E1108 01:15:46.474811 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.475022 kubelet[2667]: W1108 01:15:46.474918 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.475136 kubelet[2667]: E1108 01:15:46.475114 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.475842 kubelet[2667]: E1108 01:15:46.475822 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.476259 kubelet[2667]: W1108 01:15:46.475936 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.476468 kubelet[2667]: E1108 01:15:46.476417 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.476543 containerd[1502]: time="2025-11-08T01:15:46.471068977Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:15:46.476543 containerd[1502]: time="2025-11-08T01:15:46.475508377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:15:46.476543 containerd[1502]: time="2025-11-08T01:15:46.475535138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:46.476543 containerd[1502]: time="2025-11-08T01:15:46.476100155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:15:46.477946 kubelet[2667]: E1108 01:15:46.477622 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.477946 kubelet[2667]: W1108 01:15:46.477644 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.477946 kubelet[2667]: E1108 01:15:46.477689 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.479062 kubelet[2667]: E1108 01:15:46.478619 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.479062 kubelet[2667]: W1108 01:15:46.478639 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.479062 kubelet[2667]: E1108 01:15:46.479012 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:46.480331 kubelet[2667]: E1108 01:15:46.480065 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.480331 kubelet[2667]: W1108 01:15:46.480084 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.480331 kubelet[2667]: E1108 01:15:46.480101 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.515376 kubelet[2667]: E1108 01:15:46.514665 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:46.515376 kubelet[2667]: W1108 01:15:46.514693 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:46.515376 kubelet[2667]: E1108 01:15:46.514721 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:46.522468 systemd[1]: Started cri-containerd-e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55.scope - libcontainer container e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55. 
Nov 8 01:15:46.629595 containerd[1502]: time="2025-11-08T01:15:46.629077048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wlhmp,Uid:e8b56370-1b9d-42a9-bea5-73bbd242122c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\"" Nov 8 01:15:46.630009 containerd[1502]: time="2025-11-08T01:15:46.629940169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75b474bf-wxvb9,Uid:d6afd309-10d7-44ee-99c7-bd33c8d7bc08,Namespace:calico-system,Attempt:0,} returns sandbox id \"05d52d94cee7a9b7f6abf3e53ed35f23ab698faf2932db143399b147b0456482\"" Nov 8 01:15:46.637751 containerd[1502]: time="2025-11-08T01:15:46.637317524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 01:15:48.369948 kubelet[2667]: E1108 01:15:48.368644 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:15:48.400344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473940802.mount: Deactivated successfully. 
Nov 8 01:15:50.373780 kubelet[2667]: E1108 01:15:50.372510 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:15:50.455127 containerd[1502]: time="2025-11-08T01:15:50.454993340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:50.459108 containerd[1502]: time="2025-11-08T01:15:50.458959516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 01:15:50.472201 containerd[1502]: time="2025-11-08T01:15:50.472056815Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:50.478736 containerd[1502]: time="2025-11-08T01:15:50.478662071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:50.480312 containerd[1502]: time="2025-11-08T01:15:50.479945578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.842568818s" Nov 8 01:15:50.480312 containerd[1502]: time="2025-11-08T01:15:50.480009638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 01:15:50.483528 containerd[1502]: time="2025-11-08T01:15:50.483157559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 01:15:50.521640 containerd[1502]: time="2025-11-08T01:15:50.521115587Z" level=info msg="CreateContainer within sandbox \"05d52d94cee7a9b7f6abf3e53ed35f23ab698faf2932db143399b147b0456482\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 01:15:50.561713 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2544939348.mount: Deactivated successfully. Nov 8 01:15:50.580986 containerd[1502]: time="2025-11-08T01:15:50.580815064Z" level=info msg="CreateContainer within sandbox \"05d52d94cee7a9b7f6abf3e53ed35f23ab698faf2932db143399b147b0456482\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8fb764c926c1fdaad295de0c0908cf8b7f4835d92b7f3fe9768380410c8a87c3\"" Nov 8 01:15:50.581785 containerd[1502]: time="2025-11-08T01:15:50.581742097Z" level=info msg="StartContainer for \"8fb764c926c1fdaad295de0c0908cf8b7f4835d92b7f3fe9768380410c8a87c3\"" Nov 8 01:15:50.663488 systemd[1]: Started cri-containerd-8fb764c926c1fdaad295de0c0908cf8b7f4835d92b7f3fe9768380410c8a87c3.scope - libcontainer container 8fb764c926c1fdaad295de0c0908cf8b7f4835d92b7f3fe9768380410c8a87c3. 
Nov 8 01:15:50.732622 containerd[1502]: time="2025-11-08T01:15:50.732557341Z" level=info msg="StartContainer for \"8fb764c926c1fdaad295de0c0908cf8b7f4835d92b7f3fe9768380410c8a87c3\" returns successfully" Nov 8 01:15:51.579257 kubelet[2667]: E1108 01:15:51.579061 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.579257 kubelet[2667]: W1108 01:15:51.579129 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.579257 kubelet[2667]: E1108 01:15:51.579223 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:51.580033 kubelet[2667]: E1108 01:15:51.579614 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.580033 kubelet[2667]: W1108 01:15:51.579630 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.580033 kubelet[2667]: E1108 01:15:51.579646 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:51.580033 kubelet[2667]: E1108 01:15:51.579948 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.580033 kubelet[2667]: W1108 01:15:51.579963 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.580033 kubelet[2667]: E1108 01:15:51.579979 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:51.580420 kubelet[2667]: E1108 01:15:51.580398 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.580498 kubelet[2667]: W1108 01:15:51.580423 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.580498 kubelet[2667]: E1108 01:15:51.580442 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:51.580772 kubelet[2667]: E1108 01:15:51.580750 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.580772 kubelet[2667]: W1108 01:15:51.580771 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.580886 kubelet[2667]: E1108 01:15:51.580788 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:51.581081 kubelet[2667]: E1108 01:15:51.581061 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.581529 kubelet[2667]: W1108 01:15:51.581081 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.581529 kubelet[2667]: E1108 01:15:51.581100 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 01:15:51.617772 kubelet[2667]: E1108 01:15:51.616615 2667 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 01:15:51.617772 kubelet[2667]: W1108 01:15:51.616628 2667 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 01:15:51.617772 kubelet[2667]: E1108 01:15:51.616643 2667 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 01:15:52.106210 containerd[1502]: time="2025-11-08T01:15:52.106081724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:52.107751 containerd[1502]: time="2025-11-08T01:15:52.107506271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 01:15:52.110203 containerd[1502]: time="2025-11-08T01:15:52.108537527Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:52.111896 containerd[1502]: time="2025-11-08T01:15:52.111457251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:15:52.112605 containerd[1502]: time="2025-11-08T01:15:52.112562766Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.629339657s" Nov 8 01:15:52.112694 containerd[1502]: time="2025-11-08T01:15:52.112609445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 01:15:52.116416 containerd[1502]: time="2025-11-08T01:15:52.116368998Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 01:15:52.145662 containerd[1502]: time="2025-11-08T01:15:52.145612727Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b\"" Nov 8 01:15:52.147198 containerd[1502]: time="2025-11-08T01:15:52.146803440Z" level=info msg="StartContainer for \"efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b\"" Nov 8 01:15:52.203442 systemd[1]: Started cri-containerd-efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b.scope - libcontainer container efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b. Nov 8 01:15:52.254493 containerd[1502]: time="2025-11-08T01:15:52.254296492Z" level=info msg="StartContainer for \"efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b\" returns successfully" Nov 8 01:15:52.274137 systemd[1]: cri-containerd-efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b.scope: Deactivated successfully. 
Nov 8 01:15:52.310312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b-rootfs.mount: Deactivated successfully. Nov 8 01:15:52.369300 kubelet[2667]: E1108 01:15:52.368963 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:15:52.561278 containerd[1502]: time="2025-11-08T01:15:52.512000002Z" level=info msg="shim disconnected" id=efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b namespace=k8s.io Nov 8 01:15:52.561278 containerd[1502]: time="2025-11-08T01:15:52.561090450Z" level=warning msg="cleaning up after shim disconnected" id=efd9a096ee510db14cc7046c282f89c81e54b79f28087b8dcaa07f1059bd368b namespace=k8s.io Nov 8 01:15:52.561278 containerd[1502]: time="2025-11-08T01:15:52.561122739Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 01:15:52.575881 kubelet[2667]: I1108 01:15:52.574800 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75b474bf-wxvb9" podStartSLOduration=3.7291043950000002 podStartE2EDuration="7.574603189s" podCreationTimestamp="2025-11-08 01:15:45 +0000 UTC" firstStartedPulling="2025-11-08 01:15:46.636787935 +0000 UTC m=+25.454893928" lastFinishedPulling="2025-11-08 01:15:50.482286709 +0000 UTC m=+29.300392722" observedRunningTime="2025-11-08 01:15:52.574308047 +0000 UTC m=+31.392414054" watchObservedRunningTime="2025-11-08 01:15:52.574603189 +0000 UTC m=+31.392709195" Nov 8 01:15:53.541437 containerd[1502]: time="2025-11-08T01:15:53.540826945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 01:15:54.369437 kubelet[2667]: E1108 01:15:54.369348 2667 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:15:56.369804 kubelet[2667]: E1108 01:15:56.369192 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:15:58.369085 kubelet[2667]: E1108 01:15:58.368962 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:00.375201 kubelet[2667]: E1108 01:16:00.371428 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:01.065137 containerd[1502]: time="2025-11-08T01:16:01.065041011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:01.068321 containerd[1502]: time="2025-11-08T01:16:01.068158436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 01:16:01.070220 containerd[1502]: time="2025-11-08T01:16:01.069219332Z" level=info msg="ImageCreate event 
name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:01.072222 containerd[1502]: time="2025-11-08T01:16:01.072129246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:01.074088 containerd[1502]: time="2025-11-08T01:16:01.073380635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 7.524551173s" Nov 8 01:16:01.074088 containerd[1502]: time="2025-11-08T01:16:01.073438863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 01:16:01.078560 containerd[1502]: time="2025-11-08T01:16:01.078452253Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 01:16:01.102333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100006879.mount: Deactivated successfully. 
Nov 8 01:16:01.112380 containerd[1502]: time="2025-11-08T01:16:01.112254966Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1\"" Nov 8 01:16:01.113582 containerd[1502]: time="2025-11-08T01:16:01.113114165Z" level=info msg="StartContainer for \"0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1\"" Nov 8 01:16:01.174482 systemd[1]: Started cri-containerd-0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1.scope - libcontainer container 0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1. Nov 8 01:16:01.253572 containerd[1502]: time="2025-11-08T01:16:01.253496545Z" level=info msg="StartContainer for \"0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1\" returns successfully" Nov 8 01:16:02.331741 systemd[1]: cri-containerd-0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1.scope: Deactivated successfully. Nov 8 01:16:02.372120 kubelet[2667]: E1108 01:16:02.370554 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:02.395882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1-rootfs.mount: Deactivated successfully. 
Nov 8 01:16:02.400326 containerd[1502]: time="2025-11-08T01:16:02.398107869Z" level=info msg="shim disconnected" id=0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1 namespace=k8s.io Nov 8 01:16:02.401043 containerd[1502]: time="2025-11-08T01:16:02.400333059Z" level=warning msg="cleaning up after shim disconnected" id=0a90f050ead3a488b5063b709da645e26024fd6b30c68b9b4f1689479cdba2d1 namespace=k8s.io Nov 8 01:16:02.401043 containerd[1502]: time="2025-11-08T01:16:02.400357425Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 01:16:02.422088 kubelet[2667]: I1108 01:16:02.416101 2667 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 01:16:02.600767 kubelet[2667]: I1108 01:16:02.599368 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1cfed565-fcb5-4110-9fc3-0c3a9aaca493-calico-apiserver-certs\") pod \"calico-apiserver-54cbc7f844-sccdl\" (UID: \"1cfed565-fcb5-4110-9fc3-0c3a9aaca493\") " pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" Nov 8 01:16:02.600767 kubelet[2667]: I1108 01:16:02.600472 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-ca-bundle\") pod \"whisker-545dcd8949-9clbl\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " pod="calico-system/whisker-545dcd8949-9clbl" Nov 8 01:16:02.600767 kubelet[2667]: I1108 01:16:02.600621 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95vxr\" (UniqueName: \"kubernetes.io/projected/6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39-kube-api-access-95vxr\") pod \"goldmane-666569f655-wd97q\" (UID: \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\") " pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:02.603638 kubelet[2667]: I1108 
01:16:02.602518 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3e1f2813-87fb-41fd-ad67-d8abf3b908a6-calico-apiserver-certs\") pod \"calico-apiserver-54cbc7f844-zdscz\" (UID: \"3e1f2813-87fb-41fd-ad67-d8abf3b908a6\") " pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" Nov 8 01:16:02.603638 kubelet[2667]: I1108 01:16:02.602653 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4h29\" (UniqueName: \"kubernetes.io/projected/2dbb88ca-6a44-41bd-ba35-48c338cd1fe1-kube-api-access-t4h29\") pod \"coredns-668d6bf9bc-698zh\" (UID: \"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1\") " pod="kube-system/coredns-668d6bf9bc-698zh" Nov 8 01:16:02.603638 kubelet[2667]: I1108 01:16:02.602984 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a44b2afe-dc17-4635-9d12-87b1697a9f2b-calico-apiserver-certs\") pod \"calico-apiserver-f4d5bbb98-9znr7\" (UID: \"a44b2afe-dc17-4635-9d12-87b1697a9f2b\") " pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" Nov 8 01:16:02.615062 kubelet[2667]: I1108 01:16:02.614684 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dbb88ca-6a44-41bd-ba35-48c338cd1fe1-config-volume\") pod \"coredns-668d6bf9bc-698zh\" (UID: \"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1\") " pod="kube-system/coredns-668d6bf9bc-698zh" Nov 8 01:16:02.615062 kubelet[2667]: I1108 01:16:02.614934 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxgvj\" (UniqueName: \"kubernetes.io/projected/a44b2afe-dc17-4635-9d12-87b1697a9f2b-kube-api-access-vxgvj\") pod \"calico-apiserver-f4d5bbb98-9znr7\" (UID: 
\"a44b2afe-dc17-4635-9d12-87b1697a9f2b\") " pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" Nov 8 01:16:02.615653 kubelet[2667]: I1108 01:16:02.615361 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cecb66f9-6863-43bf-b9c2-fcaa31f6928a-config-volume\") pod \"coredns-668d6bf9bc-wc8lj\" (UID: \"cecb66f9-6863-43bf-b9c2-fcaa31f6928a\") " pod="kube-system/coredns-668d6bf9bc-wc8lj" Nov 8 01:16:02.615653 kubelet[2667]: I1108 01:16:02.615532 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-backend-key-pair\") pod \"whisker-545dcd8949-9clbl\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " pod="calico-system/whisker-545dcd8949-9clbl" Nov 8 01:16:02.615980 kubelet[2667]: I1108 01:16:02.615849 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d-tigera-ca-bundle\") pod \"calico-kube-controllers-5fb7bc4b99-gm555\" (UID: \"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d\") " pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" Nov 8 01:16:02.615980 kubelet[2667]: I1108 01:16:02.615923 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j56qx\" (UniqueName: \"kubernetes.io/projected/3e1f2813-87fb-41fd-ad67-d8abf3b908a6-kube-api-access-j56qx\") pod \"calico-apiserver-54cbc7f844-zdscz\" (UID: \"3e1f2813-87fb-41fd-ad67-d8abf3b908a6\") " pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" Nov 8 01:16:02.616210 kubelet[2667]: I1108 01:16:02.615963 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx5fk\" (UniqueName: 
\"kubernetes.io/projected/241195b4-5b0d-45a4-873e-2adb3be76878-kube-api-access-bx5fk\") pod \"whisker-545dcd8949-9clbl\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " pod="calico-system/whisker-545dcd8949-9clbl" Nov 8 01:16:02.616599 kubelet[2667]: I1108 01:16:02.616385 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39-config\") pod \"goldmane-666569f655-wd97q\" (UID: \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\") " pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:02.616599 kubelet[2667]: I1108 01:16:02.616477 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39-goldmane-ca-bundle\") pod \"goldmane-666569f655-wd97q\" (UID: \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\") " pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:02.616599 kubelet[2667]: I1108 01:16:02.616547 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6x9\" (UniqueName: \"kubernetes.io/projected/bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d-kube-api-access-6q6x9\") pod \"calico-kube-controllers-5fb7bc4b99-gm555\" (UID: \"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d\") " pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" Nov 8 01:16:02.617090 kubelet[2667]: I1108 01:16:02.616705 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbr6v\" (UniqueName: \"kubernetes.io/projected/1cfed565-fcb5-4110-9fc3-0c3a9aaca493-kube-api-access-hbr6v\") pod \"calico-apiserver-54cbc7f844-sccdl\" (UID: \"1cfed565-fcb5-4110-9fc3-0c3a9aaca493\") " pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" Nov 8 01:16:02.617090 kubelet[2667]: I1108 01:16:02.616951 2667 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqvfj\" (UniqueName: \"kubernetes.io/projected/cecb66f9-6863-43bf-b9c2-fcaa31f6928a-kube-api-access-dqvfj\") pod \"coredns-668d6bf9bc-wc8lj\" (UID: \"cecb66f9-6863-43bf-b9c2-fcaa31f6928a\") " pod="kube-system/coredns-668d6bf9bc-wc8lj" Nov 8 01:16:02.617090 kubelet[2667]: I1108 01:16:02.616994 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39-goldmane-key-pair\") pod \"goldmane-666569f655-wd97q\" (UID: \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\") " pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:02.617381 containerd[1502]: time="2025-11-08T01:16:02.616610364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 01:16:02.625611 systemd[1]: Created slice kubepods-besteffort-pod241195b4_5b0d_45a4_873e_2adb3be76878.slice - libcontainer container kubepods-besteffort-pod241195b4_5b0d_45a4_873e_2adb3be76878.slice. Nov 8 01:16:02.642223 systemd[1]: Created slice kubepods-burstable-podcecb66f9_6863_43bf_b9c2_fcaa31f6928a.slice - libcontainer container kubepods-burstable-podcecb66f9_6863_43bf_b9c2_fcaa31f6928a.slice. Nov 8 01:16:02.653723 systemd[1]: Created slice kubepods-besteffort-pod1cfed565_fcb5_4110_9fc3_0c3a9aaca493.slice - libcontainer container kubepods-besteffort-pod1cfed565_fcb5_4110_9fc3_0c3a9aaca493.slice. Nov 8 01:16:02.675759 systemd[1]: Created slice kubepods-besteffort-pod3e1f2813_87fb_41fd_ad67_d8abf3b908a6.slice - libcontainer container kubepods-besteffort-pod3e1f2813_87fb_41fd_ad67_d8abf3b908a6.slice. Nov 8 01:16:02.686734 systemd[1]: Created slice kubepods-burstable-pod2dbb88ca_6a44_41bd_ba35_48c338cd1fe1.slice - libcontainer container kubepods-burstable-pod2dbb88ca_6a44_41bd_ba35_48c338cd1fe1.slice. 
Nov 8 01:16:02.696148 systemd[1]: Created slice kubepods-besteffort-podbcce8685_942f_4e9d_bdd3_fc9f68bc3c6d.slice - libcontainer container kubepods-besteffort-podbcce8685_942f_4e9d_bdd3_fc9f68bc3c6d.slice. Nov 8 01:16:02.708997 systemd[1]: Created slice kubepods-besteffort-poda44b2afe_dc17_4635_9d12_87b1697a9f2b.slice - libcontainer container kubepods-besteffort-poda44b2afe_dc17_4635_9d12_87b1697a9f2b.slice. Nov 8 01:16:02.721423 systemd[1]: Created slice kubepods-besteffort-pod6540e6ee_2026_4c3d_b7ab_1e85d3d9ab39.slice - libcontainer container kubepods-besteffort-pod6540e6ee_2026_4c3d_b7ab_1e85d3d9ab39.slice. Nov 8 01:16:02.950220 containerd[1502]: time="2025-11-08T01:16:02.947448621Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wc8lj,Uid:cecb66f9-6863-43bf-b9c2-fcaa31f6928a,Namespace:kube-system,Attempt:0,}" Nov 8 01:16:02.952876 containerd[1502]: time="2025-11-08T01:16:02.952839093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545dcd8949-9clbl,Uid:241195b4-5b0d-45a4-873e-2adb3be76878,Namespace:calico-system,Attempt:0,}" Nov 8 01:16:02.970383 containerd[1502]: time="2025-11-08T01:16:02.969503001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-sccdl,Uid:1cfed565-fcb5-4110-9fc3-0c3a9aaca493,Namespace:calico-apiserver,Attempt:0,}" Nov 8 01:16:02.983944 containerd[1502]: time="2025-11-08T01:16:02.983891301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-zdscz,Uid:3e1f2813-87fb-41fd-ad67-d8abf3b908a6,Namespace:calico-apiserver,Attempt:0,}" Nov 8 01:16:02.999672 containerd[1502]: time="2025-11-08T01:16:02.999619931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-698zh,Uid:2dbb88ca-6a44-41bd-ba35-48c338cd1fe1,Namespace:kube-system,Attempt:0,}" Nov 8 01:16:03.024435 containerd[1502]: time="2025-11-08T01:16:03.024226663Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f4d5bbb98-9znr7,Uid:a44b2afe-dc17-4635-9d12-87b1697a9f2b,Namespace:calico-apiserver,Attempt:0,}" Nov 8 01:16:03.025158 containerd[1502]: time="2025-11-08T01:16:03.025105152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb7bc4b99-gm555,Uid:bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d,Namespace:calico-system,Attempt:0,}" Nov 8 01:16:03.075281 containerd[1502]: time="2025-11-08T01:16:03.075101761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wd97q,Uid:6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39,Namespace:calico-system,Attempt:0,}" Nov 8 01:16:03.630161 containerd[1502]: time="2025-11-08T01:16:03.630047639Z" level=error msg="Failed to destroy network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.632750 containerd[1502]: time="2025-11-08T01:16:03.632569569Z" level=error msg="encountered an error cleaning up failed sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.632750 containerd[1502]: time="2025-11-08T01:16:03.632653732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wd97q,Uid:6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 8 01:16:03.634285 containerd[1502]: time="2025-11-08T01:16:03.632901440Z" level=error msg="Failed to destroy network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.634285 containerd[1502]: time="2025-11-08T01:16:03.633498426Z" level=error msg="Failed to destroy network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.635304 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e-shm.mount: Deactivated successfully. Nov 8 01:16:03.638866 containerd[1502]: time="2025-11-08T01:16:03.637197025Z" level=error msg="Failed to destroy network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.638866 containerd[1502]: time="2025-11-08T01:16:03.637603384Z" level=error msg="encountered an error cleaning up failed sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.638866 containerd[1502]: time="2025-11-08T01:16:03.637658615Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f4d5bbb98-9znr7,Uid:a44b2afe-dc17-4635-9d12-87b1697a9f2b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.641859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e-shm.mount: Deactivated successfully. Nov 8 01:16:03.642140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024-shm.mount: Deactivated successfully. Nov 8 01:16:03.642447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0-shm.mount: Deactivated successfully. Nov 8 01:16:03.646520 containerd[1502]: time="2025-11-08T01:16:03.645387984Z" level=error msg="Failed to destroy network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.648518 containerd[1502]: time="2025-11-08T01:16:03.648354170Z" level=error msg="Failed to destroy network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.649519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6-shm.mount: Deactivated successfully. 
Nov 8 01:16:03.653337 kubelet[2667]: E1108 01:16:03.652988 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.657791 containerd[1502]: time="2025-11-08T01:16:03.653242560Z" level=error msg="Failed to destroy network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.657485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc-shm.mount: Deactivated successfully. 
Nov 8 01:16:03.659340 kubelet[2667]: E1108 01:16:03.654261 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.659340 kubelet[2667]: E1108 01:16:03.655902 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" Nov 8 01:16:03.659340 kubelet[2667]: E1108 01:16:03.655962 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" Nov 8 01:16:03.660336 kubelet[2667]: E1108 01:16:03.656075 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:03.660336 kubelet[2667]: E1108 01:16:03.656795 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:03.660336 kubelet[2667]: E1108 01:16:03.656842 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-wd97q" Nov 8 01:16:03.660902 kubelet[2667]: E1108 01:16:03.656891 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-wd97q_calico-system(6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-wd97q_calico-system(6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:03.661356 containerd[1502]: time="2025-11-08T01:16:03.655209530Z" level=error msg="encountered an error cleaning up failed sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.661592 containerd[1502]: time="2025-11-08T01:16:03.661301449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb7bc4b99-gm555,Uid:bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.657968313Z" level=error msg="encountered an error cleaning up failed sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.664591492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-sccdl,Uid:1cfed565-fcb5-4110-9fc3-0c3a9aaca493,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.658145431Z" level=error msg="encountered an error cleaning up failed sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.665502523Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-545dcd8949-9clbl,Uid:241195b4-5b0d-45a4-873e-2adb3be76878,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.658281649Z" level=error msg="encountered an error cleaning up failed sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.665871 containerd[1502]: time="2025-11-08T01:16:03.665638896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-698zh,Uid:2dbb88ca-6a44-41bd-ba35-48c338cd1fe1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667191 containerd[1502]: time="2025-11-08T01:16:03.660886050Z" level=error msg="Failed to destroy network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667191 containerd[1502]: time="2025-11-08T01:16:03.666916470Z" level=error msg="encountered an error cleaning up failed sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667191 containerd[1502]: time="2025-11-08T01:16:03.666991077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-zdscz,Uid:3e1f2813-87fb-41fd-ad67-d8abf3b908a6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667393 kubelet[2667]: E1108 01:16:03.666469 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667393 kubelet[2667]: E1108 01:16:03.666536 2667 kuberuntime_sandbox.go:72] "Failed 
to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-698zh" Nov 8 01:16:03.667393 kubelet[2667]: E1108 01:16:03.666565 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-698zh" Nov 8 01:16:03.667556 kubelet[2667]: E1108 01:16:03.666630 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-698zh_kube-system(2dbb88ca-6a44-41bd-ba35-48c338cd1fe1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-698zh_kube-system(2dbb88ca-6a44-41bd-ba35-48c338cd1fe1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-698zh" podUID="2dbb88ca-6a44-41bd-ba35-48c338cd1fe1" Nov 8 01:16:03.667556 kubelet[2667]: E1108 01:16:03.666692 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.667556 kubelet[2667]: E1108 01:16:03.666723 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" Nov 8 01:16:03.667811 kubelet[2667]: E1108 01:16:03.666746 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" Nov 8 01:16:03.667811 kubelet[2667]: E1108 01:16:03.666798 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 
01:16:03.667811 kubelet[2667]: E1108 01:16:03.666849 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.668251 kubelet[2667]: E1108 01:16:03.666887 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" Nov 8 01:16:03.668251 kubelet[2667]: E1108 01:16:03.666909 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" Nov 8 01:16:03.668251 kubelet[2667]: E1108 01:16:03.666945 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fb7bc4b99-gm555_calico-system(bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fb7bc4b99-gm555_calico-system(bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:16:03.668460 kubelet[2667]: E1108 01:16:03.666993 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.668460 kubelet[2667]: E1108 01:16:03.667032 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-545dcd8949-9clbl" Nov 8 01:16:03.668460 kubelet[2667]: E1108 01:16:03.667055 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-545dcd8949-9clbl" Nov 8 01:16:03.668631 kubelet[2667]: E1108 01:16:03.667100 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-545dcd8949-9clbl_calico-system(241195b4-5b0d-45a4-873e-2adb3be76878)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-545dcd8949-9clbl_calico-system(241195b4-5b0d-45a4-873e-2adb3be76878)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-545dcd8949-9clbl" podUID="241195b4-5b0d-45a4-873e-2adb3be76878" Nov 8 01:16:03.668631 kubelet[2667]: E1108 01:16:03.668068 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.668631 kubelet[2667]: E1108 01:16:03.668114 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" Nov 8 01:16:03.668972 kubelet[2667]: E1108 01:16:03.668136 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" Nov 8 01:16:03.670526 kubelet[2667]: E1108 01:16:03.670464 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:03.672091 containerd[1502]: time="2025-11-08T01:16:03.660672445Z" level=error msg="encountered an error cleaning up failed sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.672252 containerd[1502]: time="2025-11-08T01:16:03.672213684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wc8lj,Uid:cecb66f9-6863-43bf-b9c2-fcaa31f6928a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.672642 kubelet[2667]: E1108 01:16:03.672487 2667 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:03.681934 kubelet[2667]: E1108 01:16:03.672545 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wc8lj" Nov 8 01:16:03.682237 kubelet[2667]: E1108 01:16:03.682187 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-wc8lj" Nov 8 01:16:03.682497 kubelet[2667]: E1108 01:16:03.682460 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-wc8lj_kube-system(cecb66f9-6863-43bf-b9c2-fcaa31f6928a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-wc8lj_kube-system(cecb66f9-6863-43bf-b9c2-fcaa31f6928a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wc8lj" podUID="cecb66f9-6863-43bf-b9c2-fcaa31f6928a" Nov 8 01:16:04.378995 systemd[1]: Created slice kubepods-besteffort-pod51b46487_75c6_4a08_a5c4_0240abff3a0b.slice - libcontainer container kubepods-besteffort-pod51b46487_75c6_4a08_a5c4_0240abff3a0b.slice. Nov 8 01:16:04.382567 containerd[1502]: time="2025-11-08T01:16:04.382523426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxczs,Uid:51b46487-75c6-4a08-a5c4-0240abff3a0b,Namespace:calico-system,Attempt:0,}" Nov 8 01:16:04.398238 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31-shm.mount: Deactivated successfully. Nov 8 01:16:04.398685 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96-shm.mount: Deactivated successfully. Nov 8 01:16:04.484212 containerd[1502]: time="2025-11-08T01:16:04.482655534Z" level=error msg="Failed to destroy network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.485518 containerd[1502]: time="2025-11-08T01:16:04.485113848Z" level=error msg="encountered an error cleaning up failed sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.485518 containerd[1502]: time="2025-11-08T01:16:04.485211541Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qxczs,Uid:51b46487-75c6-4a08-a5c4-0240abff3a0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.486456 kubelet[2667]: E1108 01:16:04.486387 2667 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.486643 kubelet[2667]: E1108 01:16:04.486473 2667 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxczs" Nov 8 01:16:04.486643 kubelet[2667]: E1108 01:16:04.486527 2667 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxczs" Nov 8 01:16:04.486643 kubelet[2667]: E1108 01:16:04.486587 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:04.488979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8-shm.mount: Deactivated successfully. Nov 8 01:16:04.664634 kubelet[2667]: I1108 01:16:04.663985 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:04.681053 kubelet[2667]: I1108 01:16:04.680951 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:04.688757 kubelet[2667]: I1108 01:16:04.688556 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:04.692213 kubelet[2667]: I1108 01:16:04.692139 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:04.697135 kubelet[2667]: I1108 01:16:04.696810 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:04.704560 kubelet[2667]: I1108 01:16:04.704471 2667 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:04.710399 kubelet[2667]: I1108 01:16:04.710340 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:04.713239 kubelet[2667]: I1108 01:16:04.712565 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:04.715907 kubelet[2667]: I1108 01:16:04.715880 2667 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:04.754130 containerd[1502]: time="2025-11-08T01:16:04.754063811Z" level=info msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" Nov 8 01:16:04.756628 containerd[1502]: time="2025-11-08T01:16:04.755713827Z" level=info msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" Nov 8 01:16:04.757033 containerd[1502]: time="2025-11-08T01:16:04.755762076Z" level=info msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" Nov 8 01:16:04.761064 containerd[1502]: time="2025-11-08T01:16:04.761027896Z" level=info msg="Ensure that sandbox d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e in task-service has been cleanup successfully" Nov 8 01:16:04.761215 containerd[1502]: time="2025-11-08T01:16:04.761156478Z" level=info msg="Ensure that sandbox 8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e in task-service has been cleanup successfully" Nov 8 01:16:04.778595 containerd[1502]: time="2025-11-08T01:16:04.755811675Z" level=info msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" Nov 8 01:16:04.779617 containerd[1502]: 
time="2025-11-08T01:16:04.755863971Z" level=info msg="StopPodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" Nov 8 01:16:04.779617 containerd[1502]: time="2025-11-08T01:16:04.779200669Z" level=info msg="Ensure that sandbox a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31 in task-service has been cleanup successfully" Nov 8 01:16:04.779847 containerd[1502]: time="2025-11-08T01:16:04.779795235Z" level=info msg="Ensure that sandbox b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8 in task-service has been cleanup successfully" Nov 8 01:16:04.780941 containerd[1502]: time="2025-11-08T01:16:04.761047503Z" level=info msg="Ensure that sandbox 9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6 in task-service has been cleanup successfully" Nov 8 01:16:04.783912 containerd[1502]: time="2025-11-08T01:16:04.755909239Z" level=info msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" Nov 8 01:16:04.784411 containerd[1502]: time="2025-11-08T01:16:04.784370609Z" level=info msg="Ensure that sandbox 7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96 in task-service has been cleanup successfully" Nov 8 01:16:04.786797 containerd[1502]: time="2025-11-08T01:16:04.755947222Z" level=info msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" Nov 8 01:16:04.787122 containerd[1502]: time="2025-11-08T01:16:04.787085602Z" level=info msg="Ensure that sandbox 8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024 in task-service has been cleanup successfully" Nov 8 01:16:04.791184 containerd[1502]: time="2025-11-08T01:16:04.756357245Z" level=info msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" Nov 8 01:16:04.791610 containerd[1502]: time="2025-11-08T01:16:04.791577508Z" level=info msg="Ensure that sandbox 
70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc in task-service has been cleanup successfully" Nov 8 01:16:04.797107 containerd[1502]: time="2025-11-08T01:16:04.756395041Z" level=info msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" Nov 8 01:16:04.797402 containerd[1502]: time="2025-11-08T01:16:04.797366230Z" level=info msg="Ensure that sandbox 532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0 in task-service has been cleanup successfully" Nov 8 01:16:04.972028 containerd[1502]: time="2025-11-08T01:16:04.968617696Z" level=error msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" failed" error="failed to destroy network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.972206 kubelet[2667]: E1108 01:16:04.972124 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:04.992192 kubelet[2667]: E1108 01:16:04.972305 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e"} Nov 8 01:16:04.992192 kubelet[2667]: E1108 01:16:04.991788 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:04.992192 kubelet[2667]: E1108 01:16:04.991881 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:04.996137 containerd[1502]: time="2025-11-08T01:16:04.995946575Z" level=error msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" failed" error="failed to destroy network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.996427 kubelet[2667]: E1108 01:16:04.996305 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:04.996427 
kubelet[2667]: E1108 01:16:04.996360 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e"} Nov 8 01:16:04.996427 kubelet[2667]: E1108 01:16:04.996404 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a44b2afe-dc17-4635-9d12-87b1697a9f2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:04.997265 kubelet[2667]: E1108 01:16:04.996434 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a44b2afe-dc17-4635-9d12-87b1697a9f2b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:04.998385 containerd[1502]: time="2025-11-08T01:16:04.998343735Z" level=error msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" failed" error="failed to destroy network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:04.998586 kubelet[2667]: E1108 01:16:04.998525 2667 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:04.998675 kubelet[2667]: E1108 01:16:04.998596 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0"} Nov 8 01:16:04.998675 kubelet[2667]: E1108 01:16:04.998634 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cecb66f9-6863-43bf-b9c2-fcaa31f6928a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:04.998813 kubelet[2667]: E1108 01:16:04.998662 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cecb66f9-6863-43bf-b9c2-fcaa31f6928a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-wc8lj" podUID="cecb66f9-6863-43bf-b9c2-fcaa31f6928a" Nov 8 01:16:05.020790 containerd[1502]: time="2025-11-08T01:16:05.020641812Z" level=error msg="StopPodSandbox for 
\"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" failed" error="failed to destroy network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.021640 kubelet[2667]: E1108 01:16:05.021361 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:05.021640 kubelet[2667]: E1108 01:16:05.021452 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31"} Nov 8 01:16:05.021640 kubelet[2667]: E1108 01:16:05.021526 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.021640 kubelet[2667]: E1108 01:16:05.021618 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-698zh" podUID="2dbb88ca-6a44-41bd-ba35-48c338cd1fe1" Nov 8 01:16:05.037127 containerd[1502]: time="2025-11-08T01:16:05.037050070Z" level=error msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" failed" error="failed to destroy network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.038972 kubelet[2667]: E1108 01:16:05.038808 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:05.039123 kubelet[2667]: E1108 01:16:05.038992 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6"} Nov 8 01:16:05.039408 kubelet[2667]: E1108 01:16:05.039357 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241195b4-5b0d-45a4-873e-2adb3be76878\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.039691 kubelet[2667]: E1108 01:16:05.039538 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241195b4-5b0d-45a4-873e-2adb3be76878\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-545dcd8949-9clbl" podUID="241195b4-5b0d-45a4-873e-2adb3be76878" Nov 8 01:16:05.046499 containerd[1502]: time="2025-11-08T01:16:05.046437709Z" level=error msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" failed" error="failed to destroy network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.047076 containerd[1502]: time="2025-11-08T01:16:05.046874216Z" level=error msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" failed" error="failed to destroy network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.047159 kubelet[2667]: E1108 01:16:05.046984 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:05.047159 kubelet[2667]: E1108 01:16:05.047055 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8"} Nov 8 01:16:05.047159 kubelet[2667]: E1108 01:16:05.047106 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"51b46487-75c6-4a08-a5c4-0240abff3a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.047159 kubelet[2667]: E1108 01:16:05.047141 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"51b46487-75c6-4a08-a5c4-0240abff3a0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:05.048129 kubelet[2667]: E1108 01:16:05.047276 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:05.048129 kubelet[2667]: E1108 01:16:05.047311 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc"} Nov 8 01:16:05.048129 kubelet[2667]: E1108 01:16:05.047354 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.048129 kubelet[2667]: E1108 01:16:05.047383 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:16:05.054458 containerd[1502]: time="2025-11-08T01:16:05.054321423Z" level=error msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" failed" error="failed to destroy 
network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.054828 kubelet[2667]: E1108 01:16:05.054649 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:05.054937 kubelet[2667]: E1108 01:16:05.054825 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96"} Nov 8 01:16:05.054937 kubelet[2667]: E1108 01:16:05.054902 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3e1f2813-87fb-41fd-ad67-d8abf3b908a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.055089 kubelet[2667]: E1108 01:16:05.054967 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3e1f2813-87fb-41fd-ad67-d8abf3b908a6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:05.061279 containerd[1502]: time="2025-11-08T01:16:05.061115791Z" level=error msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" failed" error="failed to destroy network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 01:16:05.062631 kubelet[2667]: E1108 01:16:05.061648 2667 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:05.062631 kubelet[2667]: E1108 01:16:05.061733 2667 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024"} Nov 8 01:16:05.062631 kubelet[2667]: E1108 01:16:05.061942 2667 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1cfed565-fcb5-4110-9fc3-0c3a9aaca493\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Nov 8 01:16:05.062631 kubelet[2667]: E1108 01:16:05.062074 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1cfed565-fcb5-4110-9fc3-0c3a9aaca493\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:16:14.651463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944026757.mount: Deactivated successfully. Nov 8 01:16:14.777249 containerd[1502]: time="2025-11-08T01:16:14.775576718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 01:16:14.777249 containerd[1502]: time="2025-11-08T01:16:14.775835873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:14.825220 containerd[1502]: time="2025-11-08T01:16:14.824905157Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:14.827425 containerd[1502]: time="2025-11-08T01:16:14.826818647Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 01:16:14.829270 containerd[1502]: time="2025-11-08T01:16:14.827746359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.211080083s" Nov 8 01:16:14.829270 containerd[1502]: time="2025-11-08T01:16:14.827847861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 01:16:14.908432 containerd[1502]: time="2025-11-08T01:16:14.907090394Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 01:16:14.959753 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1656754132.mount: Deactivated successfully. Nov 8 01:16:14.975145 containerd[1502]: time="2025-11-08T01:16:14.975056830Z" level=info msg="CreateContainer within sandbox \"e4f2a29a6b07ee60e257f29c0eb2a285294ec0f8caf392b41fc9af406630bc55\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f\"" Nov 8 01:16:14.977842 containerd[1502]: time="2025-11-08T01:16:14.976109441Z" level=info msg="StartContainer for \"c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f\"" Nov 8 01:16:15.088749 systemd[1]: Started cri-containerd-c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f.scope - libcontainer container c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f. Nov 8 01:16:15.169988 containerd[1502]: time="2025-11-08T01:16:15.169591833Z" level=info msg="StartContainer for \"c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f\" returns successfully" Nov 8 01:16:15.382379 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Nov 8 01:16:15.383893 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 01:16:15.787251 containerd[1502]: time="2025-11-08T01:16:15.786681751Z" level=info msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" Nov 8 01:16:16.085687 kubelet[2667]: I1108 01:16:16.081177 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wlhmp" podStartSLOduration=1.86923787 podStartE2EDuration="30.073515566s" podCreationTimestamp="2025-11-08 01:15:46 +0000 UTC" firstStartedPulling="2025-11-08 01:15:46.636789532 +0000 UTC m=+25.454895528" lastFinishedPulling="2025-11-08 01:16:14.841067217 +0000 UTC m=+53.659173224" observedRunningTime="2025-11-08 01:16:15.949529222 +0000 UTC m=+54.767635233" watchObservedRunningTime="2025-11-08 01:16:16.073515566 +0000 UTC m=+54.891621571" Nov 8 01:16:16.373791 containerd[1502]: time="2025-11-08T01:16:16.371744759Z" level=info msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.451 [INFO][3952] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.451 [INFO][3952] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" iface="eth0" netns="/var/run/netns/cni-74ebdc8a-25ad-0800-14e2-a564d23a89ae" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.452 [INFO][3952] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" iface="eth0" netns="/var/run/netns/cni-74ebdc8a-25ad-0800-14e2-a564d23a89ae" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.452 [INFO][3952] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" iface="eth0" netns="/var/run/netns/cni-74ebdc8a-25ad-0800-14e2-a564d23a89ae" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.453 [INFO][3952] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.453 [INFO][3952] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.526 [INFO][3961] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.528 [INFO][3961] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.528 [INFO][3961] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.548 [WARNING][3961] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.548 [INFO][3961] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.550 [INFO][3961] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:16.561263 containerd[1502]: 2025-11-08 01:16:16.554 [INFO][3952] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:16.562308 containerd[1502]: time="2025-11-08T01:16:16.562014401Z" level=info msg="TearDown network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" successfully" Nov 8 01:16:16.562308 containerd[1502]: time="2025-11-08T01:16:16.562221518Z" level=info msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" returns successfully" Nov 8 01:16:16.578075 systemd[1]: run-netns-cni\x2d74ebdc8a\x2d25ad\x2d0800\x2d14e2\x2da564d23a89ae.mount: Deactivated successfully. 
Nov 8 01:16:16.583943 containerd[1502]: time="2025-11-08T01:16:16.583784949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-zdscz,Uid:3e1f2813-87fb-41fd-ad67-d8abf3b908a6,Namespace:calico-apiserver,Attempt:1,}" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.069 [INFO][3921] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.072 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" iface="eth0" netns="/var/run/netns/cni-c781c21c-091d-0a24-8bcf-8a87bd171da7" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.073 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" iface="eth0" netns="/var/run/netns/cni-c781c21c-091d-0a24-8bcf-8a87bd171da7" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.074 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" iface="eth0" netns="/var/run/netns/cni-c781c21c-091d-0a24-8bcf-8a87bd171da7" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.074 [INFO][3921] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.074 [INFO][3921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.527 [INFO][3928] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.528 [INFO][3928] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.552 [INFO][3928] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.584 [WARNING][3928] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.584 [INFO][3928] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.587 [INFO][3928] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:16.595056 containerd[1502]: 2025-11-08 01:16:16.592 [INFO][3921] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:16.597372 containerd[1502]: time="2025-11-08T01:16:16.595337781Z" level=info msg="TearDown network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" successfully" Nov 8 01:16:16.597372 containerd[1502]: time="2025-11-08T01:16:16.595370985Z" level=info msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" returns successfully" Nov 8 01:16:16.601502 systemd[1]: run-netns-cni\x2dc781c21c\x2d091d\x2d0a24\x2d8bcf\x2d8a87bd171da7.mount: Deactivated successfully. 
Nov 8 01:16:16.779335 kubelet[2667]: I1108 01:16:16.778889 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-backend-key-pair\") pod \"241195b4-5b0d-45a4-873e-2adb3be76878\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " Nov 8 01:16:16.780822 kubelet[2667]: I1108 01:16:16.780388 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx5fk\" (UniqueName: \"kubernetes.io/projected/241195b4-5b0d-45a4-873e-2adb3be76878-kube-api-access-bx5fk\") pod \"241195b4-5b0d-45a4-873e-2adb3be76878\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " Nov 8 01:16:16.780822 kubelet[2667]: I1108 01:16:16.780446 2667 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-ca-bundle\") pod \"241195b4-5b0d-45a4-873e-2adb3be76878\" (UID: \"241195b4-5b0d-45a4-873e-2adb3be76878\") " Nov 8 01:16:16.797226 kubelet[2667]: I1108 01:16:16.794099 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "241195b4-5b0d-45a4-873e-2adb3be76878" (UID: "241195b4-5b0d-45a4-873e-2adb3be76878"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 01:16:16.802554 kubelet[2667]: I1108 01:16:16.802492 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "241195b4-5b0d-45a4-873e-2adb3be76878" (UID: "241195b4-5b0d-45a4-873e-2adb3be76878"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 01:16:16.805123 systemd[1]: var-lib-kubelet-pods-241195b4\x2d5b0d\x2d45a4\x2d873e\x2d2adb3be76878-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 01:16:16.808410 kubelet[2667]: I1108 01:16:16.808311 2667 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/241195b4-5b0d-45a4-873e-2adb3be76878-kube-api-access-bx5fk" (OuterVolumeSpecName: "kube-api-access-bx5fk") pod "241195b4-5b0d-45a4-873e-2adb3be76878" (UID: "241195b4-5b0d-45a4-873e-2adb3be76878"). InnerVolumeSpecName "kube-api-access-bx5fk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 01:16:16.813080 systemd[1]: var-lib-kubelet-pods-241195b4\x2d5b0d\x2d45a4\x2d873e\x2d2adb3be76878-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbx5fk.mount: Deactivated successfully. Nov 8 01:16:16.876023 systemd-networkd[1416]: cali860db06f897: Link UP Nov 8 01:16:16.876731 systemd-networkd[1416]: cali860db06f897: Gained carrier Nov 8 01:16:16.882404 kubelet[2667]: I1108 01:16:16.882366 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-ca-bundle\") on node \"srv-1w3cb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 01:16:16.897182 kubelet[2667]: I1108 01:16:16.897095 2667 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/241195b4-5b0d-45a4-873e-2adb3be76878-whisker-backend-key-pair\") on node \"srv-1w3cb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 01:16:16.897182 kubelet[2667]: I1108 01:16:16.897150 2667 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bx5fk\" (UniqueName: \"kubernetes.io/projected/241195b4-5b0d-45a4-873e-2adb3be76878-kube-api-access-bx5fk\") on node \"srv-1w3cb.gb1.brightbox.com\" DevicePath \"\"" Nov 8 01:16:16.929715 
containerd[1502]: 2025-11-08 01:16:16.669 [INFO][3969] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.688 [INFO][3969] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0 calico-apiserver-54cbc7f844- calico-apiserver 3e1f2813-87fb-41fd-ad67-d8abf3b908a6 952 0 2025-11-08 01:15:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54cbc7f844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com calico-apiserver-54cbc7f844-zdscz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali860db06f897 [] [] }} ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.693 [INFO][3969] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.763 [INFO][3982] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" HandleID="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.929715 containerd[1502]: 
2025-11-08 01:16:16.763 [INFO][3982] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" HandleID="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"calico-apiserver-54cbc7f844-zdscz", "timestamp":"2025-11-08 01:16:16.76319631 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.763 [INFO][3982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.763 [INFO][3982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.763 [INFO][3982] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.778 [INFO][3982] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.795 [INFO][3982] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.813 [INFO][3982] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.818 [INFO][3982] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.821 [INFO][3982] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.821 [INFO][3982] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.823 [INFO][3982] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.828 [INFO][3982] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.837 [INFO][3982] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.65/26] block=192.168.17.64/26 handle="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.837 [INFO][3982] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.65/26] handle="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.837 [INFO][3982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:16.929715 containerd[1502]: 2025-11-08 01:16:16.837 [INFO][3982] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.65/26] IPv6=[] ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" HandleID="k8s-pod-network.1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.928895 systemd[1]: Removed slice kubepods-besteffort-pod241195b4_5b0d_45a4_873e_2adb3be76878.slice - libcontainer container kubepods-besteffort-pod241195b4_5b0d_45a4_873e_2adb3be76878.slice. 
Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.840 [INFO][3969] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e1f2813-87fb-41fd-ad67-d8abf3b908a6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-54cbc7f844-zdscz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860db06f897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.840 [INFO][3969] cni-plugin/k8s.go 419: Calico 
CNI using IPs: [192.168.17.65/32] ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.840 [INFO][3969] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali860db06f897 ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.861 [INFO][3969] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.875 [INFO][3969] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e1f2813-87fb-41fd-ad67-d8abf3b908a6", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a", Pod:"calico-apiserver-54cbc7f844-zdscz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860db06f897", MAC:"12:0b:44:7f:8f:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:16.940489 containerd[1502]: 2025-11-08 01:16:16.917 [INFO][3969] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-zdscz" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:17.037032 systemd[1]: run-containerd-runc-k8s.io-c1bd3517b8e5f7fe6dc0f8a81e003309d7e0e7b4477ccb84cbda9baa8fab5b0f-runc.q6uG1w.mount: Deactivated successfully. Nov 8 01:16:17.075984 containerd[1502]: time="2025-11-08T01:16:17.075535415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:17.077141 containerd[1502]: time="2025-11-08T01:16:17.076128987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:17.077141 containerd[1502]: time="2025-11-08T01:16:17.076835506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:17.080792 containerd[1502]: time="2025-11-08T01:16:17.080007659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:17.137437 systemd[1]: Started cri-containerd-1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a.scope - libcontainer container 1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a. Nov 8 01:16:17.170450 systemd[1]: Created slice kubepods-besteffort-pode22bf778_56e1_456c_a095_d6acd02811e3.slice - libcontainer container kubepods-besteffort-pode22bf778_56e1_456c_a095_d6acd02811e3.slice. Nov 8 01:16:17.280949 containerd[1502]: time="2025-11-08T01:16:17.280884170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-zdscz,Uid:3e1f2813-87fb-41fd-ad67-d8abf3b908a6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a\"" Nov 8 01:16:17.288789 containerd[1502]: time="2025-11-08T01:16:17.288379439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:17.300594 kubelet[2667]: I1108 01:16:17.300523 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xswz\" (UniqueName: \"kubernetes.io/projected/e22bf778-56e1-456c-a095-d6acd02811e3-kube-api-access-4xswz\") pod \"whisker-5f775cbcb9-f95x7\" (UID: \"e22bf778-56e1-456c-a095-d6acd02811e3\") " pod="calico-system/whisker-5f775cbcb9-f95x7" Nov 8 01:16:17.301243 kubelet[2667]: I1108 01:16:17.300607 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/e22bf778-56e1-456c-a095-d6acd02811e3-whisker-backend-key-pair\") pod \"whisker-5f775cbcb9-f95x7\" (UID: \"e22bf778-56e1-456c-a095-d6acd02811e3\") " pod="calico-system/whisker-5f775cbcb9-f95x7" Nov 8 01:16:17.301243 kubelet[2667]: I1108 01:16:17.300675 2667 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e22bf778-56e1-456c-a095-d6acd02811e3-whisker-ca-bundle\") pod \"whisker-5f775cbcb9-f95x7\" (UID: \"e22bf778-56e1-456c-a095-d6acd02811e3\") " pod="calico-system/whisker-5f775cbcb9-f95x7" Nov 8 01:16:17.381951 containerd[1502]: time="2025-11-08T01:16:17.381578881Z" level=info msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" Nov 8 01:16:17.386897 containerd[1502]: time="2025-11-08T01:16:17.381578486Z" level=info msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" Nov 8 01:16:17.415985 kubelet[2667]: I1108 01:16:17.414102 2667 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="241195b4-5b0d-45a4-873e-2adb3be76878" path="/var/lib/kubelet/pods/241195b4-5b0d-45a4-873e-2adb3be76878/volumes" Nov 8 01:16:17.476929 containerd[1502]: time="2025-11-08T01:16:17.476864323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f775cbcb9-f95x7,Uid:e22bf778-56e1-456c-a095-d6acd02811e3,Namespace:calico-system,Attempt:0,}" Nov 8 01:16:17.620569 containerd[1502]: time="2025-11-08T01:16:17.620502731Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:17.645518 containerd[1502]: time="2025-11-08T01:16:17.622595137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:17.646060 containerd[1502]: time="2025-11-08T01:16:17.623219129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:17.647197 kubelet[2667]: E1108 01:16:17.646868 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:17.648070 kubelet[2667]: E1108 01:16:17.648023 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:17.681576 kubelet[2667]: E1108 01:16:17.680108 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j56qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:17.682538 kubelet[2667]: E1108 01:16:17.682487 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.668 [INFO][4099] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.668 [INFO][4099] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" iface="eth0" netns="/var/run/netns/cni-61b508be-15ef-44ce-988b-19f48aafe17a" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.671 [INFO][4099] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" iface="eth0" netns="/var/run/netns/cni-61b508be-15ef-44ce-988b-19f48aafe17a" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.673 [INFO][4099] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" iface="eth0" netns="/var/run/netns/cni-61b508be-15ef-44ce-988b-19f48aafe17a" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.674 [INFO][4099] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.674 [INFO][4099] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.762 [INFO][4184] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.764 [INFO][4184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.764 [INFO][4184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.782 [WARNING][4184] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.783 [INFO][4184] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.787 [INFO][4184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:17.795395 containerd[1502]: 2025-11-08 01:16:17.792 [INFO][4099] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:17.797065 containerd[1502]: time="2025-11-08T01:16:17.796296529Z" level=info msg="TearDown network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" successfully" Nov 8 01:16:17.797065 containerd[1502]: time="2025-11-08T01:16:17.796338776Z" level=info msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" returns successfully" Nov 8 01:16:17.803464 containerd[1502]: time="2025-11-08T01:16:17.802550515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxczs,Uid:51b46487-75c6-4a08-a5c4-0240abff3a0b,Namespace:calico-system,Attempt:1,}" Nov 8 01:16:17.824348 systemd[1]: run-netns-cni\x2d61b508be\x2d15ef\x2d44ce\x2d988b\x2d19f48aafe17a.mount: Deactivated successfully. 
Nov 8 01:16:17.893990 kubelet[2667]: E1108 01:16:17.893855 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.686 [INFO][4103] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.687 [INFO][4103] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" iface="eth0" netns="/var/run/netns/cni-b349d1b6-59c5-80c0-6118-fb59758ce2f3" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.688 [INFO][4103] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" iface="eth0" netns="/var/run/netns/cni-b349d1b6-59c5-80c0-6118-fb59758ce2f3" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.689 [INFO][4103] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" iface="eth0" netns="/var/run/netns/cni-b349d1b6-59c5-80c0-6118-fb59758ce2f3" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.689 [INFO][4103] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.689 [INFO][4103] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.845 [INFO][4189] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.845 [INFO][4189] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.845 [INFO][4189] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.873 [WARNING][4189] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.873 [INFO][4189] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.894 [INFO][4189] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:17.941541 containerd[1502]: 2025-11-08 01:16:17.921 [INFO][4103] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:17.949560 systemd[1]: run-netns-cni\x2db349d1b6\x2d59c5\x2d80c0\x2d6118\x2dfb59758ce2f3.mount: Deactivated successfully. 
Nov 8 01:16:17.958068 containerd[1502]: time="2025-11-08T01:16:17.957924318Z" level=info msg="TearDown network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" successfully" Nov 8 01:16:17.959872 containerd[1502]: time="2025-11-08T01:16:17.959207460Z" level=info msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" returns successfully" Nov 8 01:16:17.962764 containerd[1502]: time="2025-11-08T01:16:17.962730899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4d5bbb98-9znr7,Uid:a44b2afe-dc17-4635-9d12-87b1697a9f2b,Namespace:calico-apiserver,Attempt:1,}" Nov 8 01:16:18.200407 systemd-networkd[1416]: calif50045fd9b4: Link UP Nov 8 01:16:18.202336 systemd-networkd[1416]: calif50045fd9b4: Gained carrier Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.701 [INFO][4160] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.741 [INFO][4160] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0 whisker-5f775cbcb9- calico-system e22bf778-56e1-456c-a095-d6acd02811e3 970 0 2025-11-08 01:16:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f775cbcb9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com whisker-5f775cbcb9-f95x7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif50045fd9b4 [] [] }} ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.742 [INFO][4160] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.864 [INFO][4198] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" HandleID="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.865 [INFO][4198] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" HandleID="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333e40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"whisker-5f775cbcb9-f95x7", "timestamp":"2025-11-08 01:16:17.864603703 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.865 [INFO][4198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.895 [INFO][4198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:17.902 [INFO][4198] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.016 [INFO][4198] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.059 [INFO][4198] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.078 [INFO][4198] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.086 [INFO][4198] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.113 [INFO][4198] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.114 [INFO][4198] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.119 [INFO][4198] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205 Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.144 [INFO][4198] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.176 [INFO][4198] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.66/26] block=192.168.17.64/26 handle="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.177 [INFO][4198] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.66/26] handle="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.177 [INFO][4198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:18.253313 containerd[1502]: 2025-11-08 01:16:18.177 [INFO][4198] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.66/26] IPv6=[] ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" HandleID="k8s-pod-network.8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.186 [INFO][4160] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0", GenerateName:"whisker-5f775cbcb9-", Namespace:"calico-system", SelfLink:"", UID:"e22bf778-56e1-456c-a095-d6acd02811e3", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f775cbcb9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"whisker-5f775cbcb9-f95x7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif50045fd9b4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.188 [INFO][4160] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.66/32] ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.188 [INFO][4160] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif50045fd9b4 ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.204 [INFO][4160] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.206 [INFO][4160] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0", GenerateName:"whisker-5f775cbcb9-", Namespace:"calico-system", SelfLink:"", UID:"e22bf778-56e1-456c-a095-d6acd02811e3", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f775cbcb9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205", Pod:"whisker-5f775cbcb9-f95x7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.17.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif50045fd9b4", MAC:"fa:6a:02:24:4b:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.258400 containerd[1502]: 2025-11-08 01:16:18.240 [INFO][4160] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205" Namespace="calico-system" Pod="whisker-5f775cbcb9-f95x7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--5f775cbcb9--f95x7-eth0" Nov 8 01:16:18.338803 containerd[1502]: time="2025-11-08T01:16:18.337553767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:18.340192 containerd[1502]: time="2025-11-08T01:16:18.339103014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:18.340192 containerd[1502]: time="2025-11-08T01:16:18.339133913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.340192 containerd[1502]: time="2025-11-08T01:16:18.339302603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.379898 containerd[1502]: time="2025-11-08T01:16:18.379827928Z" level=info msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" Nov 8 01:16:18.383384 containerd[1502]: time="2025-11-08T01:16:18.382220438Z" level=info msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" Nov 8 01:16:18.393383 systemd[1]: Started cri-containerd-8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205.scope - libcontainer container 8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205. 
Nov 8 01:16:18.521091 systemd-networkd[1416]: calid2be38051ec: Link UP Nov 8 01:16:18.521526 systemd-networkd[1416]: calid2be38051ec: Gained carrier Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.117 [INFO][4237] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.174 [INFO][4237] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0 calico-apiserver-f4d5bbb98- calico-apiserver a44b2afe-dc17-4635-9d12-87b1697a9f2b 978 0 2025-11-08 01:15:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f4d5bbb98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com calico-apiserver-f4d5bbb98-9znr7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid2be38051ec [] [] }} ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.175 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.329 [INFO][4267] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" 
HandleID="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.330 [INFO][4267] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" HandleID="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f6d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"calico-apiserver-f4d5bbb98-9znr7", "timestamp":"2025-11-08 01:16:18.329933861 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.330 [INFO][4267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.331 [INFO][4267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.331 [INFO][4267] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.367 [INFO][4267] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.392 [INFO][4267] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.414 [INFO][4267] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.424 [INFO][4267] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.437 [INFO][4267] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.437 [INFO][4267] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.446 [INFO][4267] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18 Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.460 [INFO][4267] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.474 [INFO][4267] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.67/26] block=192.168.17.64/26 handle="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.475 [INFO][4267] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.67/26] handle="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.476 [INFO][4267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:18.593320 containerd[1502]: 2025-11-08 01:16:18.476 [INFO][4267] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.67/26] IPv6=[] ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" HandleID="k8s-pod-network.282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.495 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0", GenerateName:"calico-apiserver-f4d5bbb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"a44b2afe-dc17-4635-9d12-87b1697a9f2b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4d5bbb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-f4d5bbb98-9znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2be38051ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.496 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.67/32] ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.496 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2be38051ec ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.528 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" 
Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.536 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0", GenerateName:"calico-apiserver-f4d5bbb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"a44b2afe-dc17-4635-9d12-87b1697a9f2b", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4d5bbb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18", Pod:"calico-apiserver-f4d5bbb98-9znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2be38051ec", 
MAC:"92:df:d6:66:e0:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.599119 containerd[1502]: 2025-11-08 01:16:18.578 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18" Namespace="calico-apiserver" Pod="calico-apiserver-f4d5bbb98-9znr7" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:18.703771 containerd[1502]: time="2025-11-08T01:16:18.700461487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:18.703771 containerd[1502]: time="2025-11-08T01:16:18.700560006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:18.703771 containerd[1502]: time="2025-11-08T01:16:18.700578668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.703771 containerd[1502]: time="2025-11-08T01:16:18.700729589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.714154 systemd-networkd[1416]: cali2a477e6f1ae: Link UP Nov 8 01:16:18.717888 systemd-networkd[1416]: cali2a477e6f1ae: Gained carrier Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.107 [INFO][4211] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.177 [INFO][4211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0 csi-node-driver- calico-system 51b46487-75c6-4a08-a5c4-0240abff3a0b 977 0 2025-11-08 01:15:46 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com csi-node-driver-qxczs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2a477e6f1ae [] [] }} ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.177 [INFO][4211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.429 [INFO][4262] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" 
HandleID="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.429 [INFO][4262] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" HandleID="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000175980), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"csi-node-driver-qxczs", "timestamp":"2025-11-08 01:16:18.429745784 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.430 [INFO][4262] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.476 [INFO][4262] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.477 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.530 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.544 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.563 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.581 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.607 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.607 [INFO][4262] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.632 [INFO][4262] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858 Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.650 [INFO][4262] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.678 [INFO][4262] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.68/26] block=192.168.17.64/26 handle="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.679 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.68/26] handle="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.679 [INFO][4262] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:18.757349 containerd[1502]: 2025-11-08 01:16:18.679 [INFO][4262] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.68/26] IPv6=[] ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" HandleID="k8s-pod-network.29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.761207 containerd[1502]: 2025-11-08 01:16:18.694 [INFO][4211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51b46487-75c6-4a08-a5c4-0240abff3a0b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-qxczs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a477e6f1ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.761207 containerd[1502]: 2025-11-08 01:16:18.696 [INFO][4211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.68/32] ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.761207 containerd[1502]: 2025-11-08 01:16:18.696 [INFO][4211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2a477e6f1ae ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.761207 containerd[1502]: 2025-11-08 01:16:18.720 [INFO][4211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.761207 
containerd[1502]: 2025-11-08 01:16:18.727 [INFO][4211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51b46487-75c6-4a08-a5c4-0240abff3a0b", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858", Pod:"csi-node-driver-qxczs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a477e6f1ae", MAC:"16:3e:95:24:bc:ab", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:18.761207 containerd[1502]: 
2025-11-08 01:16:18.747 [INFO][4211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858" Namespace="calico-system" Pod="csi-node-driver-qxczs" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:18.784426 systemd-networkd[1416]: cali860db06f897: Gained IPv6LL Nov 8 01:16:18.790487 systemd[1]: Started cri-containerd-282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18.scope - libcontainer container 282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18. Nov 8 01:16:18.899395 kubelet[2667]: E1108 01:16:18.899163 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:18.914460 containerd[1502]: time="2025-11-08T01:16:18.914314175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:18.915334 containerd[1502]: time="2025-11-08T01:16:18.915223846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:18.915334 containerd[1502]: time="2025-11-08T01:16:18.915251613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.917195 containerd[1502]: time="2025-11-08T01:16:18.915915988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.684 [INFO][4329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.688 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" iface="eth0" netns="/var/run/netns/cni-f2cebc27-5c14-425c-7de1-d5e43ef4dfba" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.688 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" iface="eth0" netns="/var/run/netns/cni-f2cebc27-5c14-425c-7de1-d5e43ef4dfba" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.689 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" iface="eth0" netns="/var/run/netns/cni-f2cebc27-5c14-425c-7de1-d5e43ef4dfba" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.689 [INFO][4329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.689 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.891 [INFO][4371] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.898 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.899 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.939 [WARNING][4371] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.940 [INFO][4371] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.947 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:18.971253 containerd[1502]: 2025-11-08 01:16:18.956 [INFO][4329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:18.979243 containerd[1502]: time="2025-11-08T01:16:18.971494633Z" level=info msg="TearDown network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" successfully" Nov 8 01:16:18.979243 containerd[1502]: time="2025-11-08T01:16:18.971546819Z" level=info msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" returns successfully" Nov 8 01:16:18.979243 containerd[1502]: time="2025-11-08T01:16:18.974846972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb7bc4b99-gm555,Uid:bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d,Namespace:calico-system,Attempt:1,}" Nov 8 01:16:18.979023 systemd[1]: run-netns-cni\x2df2cebc27\x2d5c14\x2d425c\x2d7de1\x2dd5e43ef4dfba.mount: Deactivated successfully. 
Nov 8 01:16:19.003671 systemd[1]: Started cri-containerd-29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858.scope - libcontainer container 29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858. Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.687 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.687 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" iface="eth0" netns="/var/run/netns/cni-5e35dc12-fe7c-c74b-43ad-4bf3e2841a3f" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.688 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" iface="eth0" netns="/var/run/netns/cni-5e35dc12-fe7c-c74b-43ad-4bf3e2841a3f" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.692 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" iface="eth0" netns="/var/run/netns/cni-5e35dc12-fe7c-c74b-43ad-4bf3e2841a3f" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.693 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.693 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.911 [INFO][4373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.913 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.948 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.993 [WARNING][4373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:18.994 [INFO][4373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:19.005 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:19.014367 containerd[1502]: 2025-11-08 01:16:19.011 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:19.023252 containerd[1502]: time="2025-11-08T01:16:19.014824373Z" level=info msg="TearDown network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" successfully" Nov 8 01:16:19.023252 containerd[1502]: time="2025-11-08T01:16:19.014860284Z" level=info msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" returns successfully" Nov 8 01:16:19.021406 systemd[1]: run-netns-cni\x2d5e35dc12\x2dfe7c\x2dc74b\x2d43ad\x2d4bf3e2841a3f.mount: Deactivated successfully. 
Nov 8 01:16:19.024574 containerd[1502]: time="2025-11-08T01:16:19.023726686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-sccdl,Uid:1cfed565-fcb5-4110-9fc3-0c3a9aaca493,Namespace:calico-apiserver,Attempt:1,}" Nov 8 01:16:19.125602 containerd[1502]: time="2025-11-08T01:16:19.125525430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f775cbcb9-f95x7,Uid:e22bf778-56e1-456c-a095-d6acd02811e3,Namespace:calico-system,Attempt:0,} returns sandbox id \"8223542817f1ab3a2b2fa60e8de4b78425451b2d571b44e1e2fbc97e287a1205\"" Nov 8 01:16:19.134890 containerd[1502]: time="2025-11-08T01:16:19.134418347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:16:19.384070 containerd[1502]: time="2025-11-08T01:16:19.383921870Z" level=info msg="StopPodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" Nov 8 01:16:19.400928 containerd[1502]: time="2025-11-08T01:16:19.400856501Z" level=info msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" Nov 8 01:16:19.409204 containerd[1502]: time="2025-11-08T01:16:19.409020884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxczs,Uid:51b46487-75c6-4a08-a5c4-0240abff3a0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858\"" Nov 8 01:16:19.477919 systemd-networkd[1416]: cali6d97cd7930f: Link UP Nov 8 01:16:19.493351 systemd-networkd[1416]: cali6d97cd7930f: Gained carrier Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.134 [INFO][4445] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.157 [INFO][4445] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0 
calico-kube-controllers-5fb7bc4b99- calico-system bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d 999 0 2025-11-08 01:15:46 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fb7bc4b99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com calico-kube-controllers-5fb7bc4b99-gm555 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6d97cd7930f [] [] }} ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.158 [INFO][4445] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.284 [INFO][4473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" HandleID="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.284 [INFO][4473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" HandleID="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" 
Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000317c10), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"calico-kube-controllers-5fb7bc4b99-gm555", "timestamp":"2025-11-08 01:16:19.284385625 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.284 [INFO][4473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.285 [INFO][4473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.285 [INFO][4473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.309 [INFO][4473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.327 [INFO][4473] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.371 [INFO][4473] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.388 [INFO][4473] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.402 [INFO][4473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 
host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.403 [INFO][4473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.410 [INFO][4473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.431 [INFO][4473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.456 [INFO][4473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.69/26] block=192.168.17.64/26 handle="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.456 [INFO][4473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.69/26] handle="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.456 [INFO][4473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 01:16:19.534638 containerd[1502]: 2025-11-08 01:16:19.456 [INFO][4473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.69/26] IPv6=[] ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" HandleID="k8s-pod-network.84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.465 [INFO][4445] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0", GenerateName:"calico-kube-controllers-5fb7bc4b99-", Namespace:"calico-system", SelfLink:"", UID:"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb7bc4b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-5fb7bc4b99-gm555", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d97cd7930f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.465 [INFO][4445] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.69/32] ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.465 [INFO][4445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d97cd7930f ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.495 [INFO][4445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.497 [INFO][4445] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" Pod="calico-kube-controllers-5fb7bc4b99-gm555" 
WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0", GenerateName:"calico-kube-controllers-5fb7bc4b99-", Namespace:"calico-system", SelfLink:"", UID:"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb7bc4b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a", Pod:"calico-kube-controllers-5fb7bc4b99-gm555", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d97cd7930f", MAC:"fa:fb:07:3c:3b:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:19.536651 containerd[1502]: 2025-11-08 01:16:19.527 [INFO][4445] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a" Namespace="calico-system" 
Pod="calico-kube-controllers-5fb7bc4b99-gm555" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:19.581325 containerd[1502]: time="2025-11-08T01:16:19.580021118Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:19.591657 containerd[1502]: time="2025-11-08T01:16:19.591591828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:16:19.591997 containerd[1502]: time="2025-11-08T01:16:19.591947280Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:16:19.592352 kubelet[2667]: E1108 01:16:19.592296 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:16:19.592498 kubelet[2667]: E1108 01:16:19.592377 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:16:19.592774 kubelet[2667]: E1108 01:16:19.592718 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70e36517ff7645e2bd57c80a14e0d94d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:19.594232 containerd[1502]: time="2025-11-08T01:16:19.593918374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:16:19.598183 
containerd[1502]: time="2025-11-08T01:16:19.598129390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f4d5bbb98-9znr7,Uid:a44b2afe-dc17-4635-9d12-87b1697a9f2b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18\"" Nov 8 01:16:19.712241 containerd[1502]: time="2025-11-08T01:16:19.710978874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:19.712241 containerd[1502]: time="2025-11-08T01:16:19.711095635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:19.712241 containerd[1502]: time="2025-11-08T01:16:19.711121581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:19.712241 containerd[1502]: time="2025-11-08T01:16:19.711313611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:19.764588 systemd[1]: Started cri-containerd-84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a.scope - libcontainer container 84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a. 
Nov 8 01:16:19.829221 systemd-networkd[1416]: calibd430cccb71: Link UP Nov 8 01:16:19.835014 systemd-networkd[1416]: calibd430cccb71: Gained carrier Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.238 [INFO][4459] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.298 [INFO][4459] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0 calico-apiserver-54cbc7f844- calico-apiserver 1cfed565-fcb5-4110-9fc3-0c3a9aaca493 1000 0 2025-11-08 01:15:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54cbc7f844 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com calico-apiserver-54cbc7f844-sccdl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd430cccb71 [] [] }} ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.298 [INFO][4459] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.501 [INFO][4485] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" 
HandleID="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.501 [INFO][4485] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" HandleID="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000312870), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"calico-apiserver-54cbc7f844-sccdl", "timestamp":"2025-11-08 01:16:19.501605687 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.501 [INFO][4485] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.502 [INFO][4485] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.502 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.579 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.651 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.689 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.710 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.725 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.725 [INFO][4485] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.740 [INFO][4485] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.779 [INFO][4485] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.797 [INFO][4485] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.70/26] block=192.168.17.64/26 handle="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.798 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.70/26] handle="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.798 [INFO][4485] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:19.886927 containerd[1502]: 2025-11-08 01:16:19.798 [INFO][4485] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.70/26] IPv6=[] ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" HandleID="k8s-pod-network.e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.805 [INFO][4459] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfed565-fcb5-4110-9fc3-0c3a9aaca493", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-54cbc7f844-sccdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd430cccb71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.805 [INFO][4459] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.70/32] ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.805 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd430cccb71 ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.833 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" 
Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.838 [INFO][4459] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfed565-fcb5-4110-9fc3-0c3a9aaca493", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b", Pod:"calico-apiserver-54cbc7f844-sccdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calibd430cccb71", MAC:"2e:82:3d:5f:75:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:19.888099 containerd[1502]: 2025-11-08 01:16:19.880 [INFO][4459] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b" Namespace="calico-apiserver" Pod="calico-apiserver-54cbc7f844-sccdl" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.753 [INFO][4517] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.761 [INFO][4517] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" iface="eth0" netns="/var/run/netns/cni-414aa39e-b65d-0bd4-7585-7d2fa2bae5f1" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.762 [INFO][4517] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" iface="eth0" netns="/var/run/netns/cni-414aa39e-b65d-0bd4-7585-7d2fa2bae5f1" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.765 [INFO][4517] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" iface="eth0" netns="/var/run/netns/cni-414aa39e-b65d-0bd4-7585-7d2fa2bae5f1" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.765 [INFO][4517] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.765 [INFO][4517] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.881 [INFO][4582] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.882 [INFO][4582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.883 [INFO][4582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.899 [WARNING][4582] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.899 [INFO][4582] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.902 [INFO][4582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:19.914072 containerd[1502]: 2025-11-08 01:16:19.908 [INFO][4517] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:19.917825 containerd[1502]: time="2025-11-08T01:16:19.916026192Z" level=info msg="TearDown network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" successfully" Nov 8 01:16:19.917825 containerd[1502]: time="2025-11-08T01:16:19.916077812Z" level=info msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" returns successfully" Nov 8 01:16:19.921938 containerd[1502]: time="2025-11-08T01:16:19.918748840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wc8lj,Uid:cecb66f9-6863-43bf-b9c2-fcaa31f6928a,Namespace:kube-system,Attempt:1,}" Nov 8 01:16:19.920556 systemd[1]: run-netns-cni\x2d414aa39e\x2db65d\x2d0bd4\x2d7585\x2d7d2fa2bae5f1.mount: Deactivated successfully. 
Nov 8 01:16:19.964541 containerd[1502]: time="2025-11-08T01:16:19.958897740Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.702 [INFO][4516] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.702 [INFO][4516] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" iface="eth0" netns="/var/run/netns/cni-c32b8ece-efb7-04a4-4862-343625ee8a86" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.703 [INFO][4516] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" iface="eth0" netns="/var/run/netns/cni-c32b8ece-efb7-04a4-4862-343625ee8a86" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.704 [INFO][4516] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" iface="eth0" netns="/var/run/netns/cni-c32b8ece-efb7-04a4-4862-343625ee8a86" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.704 [INFO][4516] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.705 [INFO][4516] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.883 [INFO][4562] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.884 [INFO][4562] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.903 [INFO][4562] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.930 [WARNING][4562] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.931 [INFO][4562] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.934 [INFO][4562] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:19.969579 containerd[1502]: 2025-11-08 01:16:19.943 [INFO][4516] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:19.974508 containerd[1502]: time="2025-11-08T01:16:19.973326200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:16:19.974657 kubelet[2667]: E1108 01:16:19.973718 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:16:19.974657 kubelet[2667]: E1108 01:16:19.973782 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:16:19.974657 kubelet[2667]: E1108 01:16:19.974160 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupPro
be:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:19.976326 containerd[1502]: time="2025-11-08T01:16:19.973392934Z" level=info msg="TearDown network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" successfully" Nov 8 01:16:19.976326 containerd[1502]: time="2025-11-08T01:16:19.973510414Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:16:19.975703 systemd[1]: run-netns-cni\x2dc32b8ece\x2defb7\x2d04a4\x2d4862\x2d343625ee8a86.mount: Deactivated successfully. Nov 8 01:16:19.978188 containerd[1502]: time="2025-11-08T01:16:19.976595140Z" level=info msg="StopPodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" returns successfully" Nov 8 01:16:19.979733 containerd[1502]: time="2025-11-08T01:16:19.979650763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:16:19.983070 containerd[1502]: time="2025-11-08T01:16:19.982679713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-698zh,Uid:2dbb88ca-6a44-41bd-ba35-48c338cd1fe1,Namespace:kube-system,Attempt:1,}" Nov 8 01:16:19.990724 systemd-networkd[1416]: calif50045fd9b4: Gained IPv6LL Nov 8 01:16:20.027716 containerd[1502]: time="2025-11-08T01:16:20.012496091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:20.027716 containerd[1502]: time="2025-11-08T01:16:20.012612256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:20.027716 containerd[1502]: time="2025-11-08T01:16:20.012632256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.027716 containerd[1502]: time="2025-11-08T01:16:20.012836139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.084421 systemd[1]: Started cri-containerd-e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b.scope - libcontainer container e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b. Nov 8 01:16:20.347198 containerd[1502]: time="2025-11-08T01:16:20.347108846Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:20.352794 containerd[1502]: time="2025-11-08T01:16:20.352661428Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:16:20.352794 containerd[1502]: time="2025-11-08T01:16:20.352774332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:16:20.353848 kubelet[2667]: E1108 01:16:20.353762 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:16:20.353848 kubelet[2667]: E1108 01:16:20.353830 2667 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:16:20.356321 kubelet[2667]: E1108 01:16:20.354154 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Runt
imeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:20.356321 kubelet[2667]: E1108 01:16:20.355443 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:16:20.357322 containerd[1502]: time="2025-11-08T01:16:20.357266686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:20.371807 containerd[1502]: time="2025-11-08T01:16:20.371635013Z" level=info msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" Nov 8 01:16:20.373576 systemd-networkd[1416]: calid2be38051ec: Gained IPv6LL Nov 8 01:16:20.376550 containerd[1502]: 
time="2025-11-08T01:16:20.376501403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fb7bc4b99-gm555,Uid:bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d,Namespace:calico-system,Attempt:1,} returns sandbox id \"84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a\"" Nov 8 01:16:20.386249 systemd-networkd[1416]: cali5537a440ea5: Link UP Nov 8 01:16:20.391615 systemd-networkd[1416]: cali5537a440ea5: Gained carrier Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.092 [INFO][4608] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.122 [INFO][4608] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0 coredns-668d6bf9bc- kube-system cecb66f9-6863-43bf-b9c2-fcaa31f6928a 1018 0 2025-11-08 01:15:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com coredns-668d6bf9bc-wc8lj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5537a440ea5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.128 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.215 [INFO][4659] ipam/ipam_plugin.go 
227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" HandleID="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.216 [INFO][4659] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" HandleID="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319300), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-wc8lj", "timestamp":"2025-11-08 01:16:20.215515415 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.217 [INFO][4659] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.217 [INFO][4659] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.217 [INFO][4659] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.234 [INFO][4659] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.255 [INFO][4659] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.274 [INFO][4659] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.280 [INFO][4659] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.286 [INFO][4659] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.286 [INFO][4659] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.290 [INFO][4659] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27 Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.328 [INFO][4659] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.341 [INFO][4659] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.71/26] block=192.168.17.64/26 handle="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.341 [INFO][4659] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.71/26] handle="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.341 [INFO][4659] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:20.451453 containerd[1502]: 2025-11-08 01:16:20.341 [INFO][4659] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.71/26] IPv6=[] ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" HandleID="k8s-pod-network.952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.348 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cecb66f9-6863-43bf-b9c2-fcaa31f6928a", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-wc8lj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5537a440ea5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.348 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.71/32] ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.348 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5537a440ea5 ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.402 [INFO][4608] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.405 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cecb66f9-6863-43bf-b9c2-fcaa31f6928a", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27", Pod:"coredns-668d6bf9bc-wc8lj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5537a440ea5", 
MAC:"42:b3:76:aa:0b:1a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:20.453361 containerd[1502]: 2025-11-08 01:16:20.435 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27" Namespace="kube-system" Pod="coredns-668d6bf9bc-wc8lj" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:20.507634 containerd[1502]: time="2025-11-08T01:16:20.506717951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54cbc7f844-sccdl,Uid:1cfed565-fcb5-4110-9fc3-0c3a9aaca493,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b\"" Nov 8 01:16:20.543752 systemd-networkd[1416]: cali33e6f0003fe: Link UP Nov 8 01:16:20.552296 systemd-networkd[1416]: cali33e6f0003fe: Gained carrier Nov 8 01:16:20.581610 containerd[1502]: time="2025-11-08T01:16:20.581395873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:20.582135 containerd[1502]: time="2025-11-08T01:16:20.581890265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:20.582135 containerd[1502]: time="2025-11-08T01:16:20.581950060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.583869 containerd[1502]: time="2025-11-08T01:16:20.583768477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.104 [INFO][4625] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.136 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0 coredns-668d6bf9bc- kube-system 2dbb88ca-6a44-41bd-ba35-48c338cd1fe1 1017 0 2025-11-08 01:15:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com coredns-668d6bf9bc-698zh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali33e6f0003fe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.136 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.228 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" 
HandleID="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.229 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" HandleID="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000353890), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-698zh", "timestamp":"2025-11-08 01:16:20.228508965 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.230 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.349 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.349 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.375 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.393 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.413 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.430 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.444 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.445 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 handle="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.453 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954 Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.469 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.512 [INFO][4664] ipam/ipam.go 
1262: Successfully claimed IPs: [192.168.17.72/26] block=192.168.17.64/26 handle="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.512 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.72/26] handle="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.512 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:20.614454 containerd[1502]: 2025-11-08 01:16:20.512 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.72/26] IPv6=[] ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" HandleID="k8s-pod-network.98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.521 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-698zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33e6f0003fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.521 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.72/32] ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.521 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali33e6f0003fe ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.566 [INFO][4625] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.571 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954", Pod:"coredns-668d6bf9bc-698zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33e6f0003fe", 
MAC:"62:38:0b:ec:58:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:20.618537 containerd[1502]: 2025-11-08 01:16:20.609 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954" Namespace="kube-system" Pod="coredns-668d6bf9bc-698zh" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:20.628405 systemd[1]: Started cri-containerd-952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27.scope - libcontainer container 952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27. Nov 8 01:16:20.699783 containerd[1502]: time="2025-11-08T01:16:20.698848816Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:20.699783 containerd[1502]: time="2025-11-08T01:16:20.698929970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:20.700078 containerd[1502]: time="2025-11-08T01:16:20.698953260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.700078 containerd[1502]: time="2025-11-08T01:16:20.699073133Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:20.736502 containerd[1502]: time="2025-11-08T01:16:20.736301540Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:20.738201 containerd[1502]: time="2025-11-08T01:16:20.737862224Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:20.739190 containerd[1502]: time="2025-11-08T01:16:20.738369261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:20.739677 kubelet[2667]: E1108 01:16:20.739605 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:20.739819 kubelet[2667]: E1108 01:16:20.739677 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:20.740124 kubelet[2667]: E1108 01:16:20.740016 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxgvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:20.741365 kubelet[2667]: E1108 01:16:20.741282 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:20.742893 containerd[1502]: time="2025-11-08T01:16:20.742148938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:16:20.757651 systemd-networkd[1416]: cali2a477e6f1ae: Gained IPv6LL Nov 8 01:16:20.781370 systemd[1]: Started cri-containerd-98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954.scope - libcontainer container 98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954. Nov 8 01:16:20.797264 containerd[1502]: time="2025-11-08T01:16:20.797073642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-wc8lj,Uid:cecb66f9-6863-43bf-b9c2-fcaa31f6928a,Namespace:kube-system,Attempt:1,} returns sandbox id \"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27\"" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.641 [INFO][4690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.641 [INFO][4690] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" iface="eth0" netns="/var/run/netns/cni-1eb44863-e18d-06cc-ac7e-b38f9ea7c66a" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.645 [INFO][4690] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" iface="eth0" netns="/var/run/netns/cni-1eb44863-e18d-06cc-ac7e-b38f9ea7c66a" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.646 [INFO][4690] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" iface="eth0" netns="/var/run/netns/cni-1eb44863-e18d-06cc-ac7e-b38f9ea7c66a" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.646 [INFO][4690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.646 [INFO][4690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.757 [INFO][4751] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.758 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.758 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.777 [WARNING][4751] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.777 [INFO][4751] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.786 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:20.829591 containerd[1502]: 2025-11-08 01:16:20.800 [INFO][4690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:16:20.837260 systemd[1]: run-netns-cni\x2d1eb44863\x2de18d\x2d06cc\x2dac7e\x2db38f9ea7c66a.mount: Deactivated successfully. 
Nov 8 01:16:20.853585 containerd[1502]: time="2025-11-08T01:16:20.853094455Z" level=info msg="TearDown network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" successfully" Nov 8 01:16:20.854215 containerd[1502]: time="2025-11-08T01:16:20.853986376Z" level=info msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" returns successfully" Nov 8 01:16:20.855950 containerd[1502]: time="2025-11-08T01:16:20.853996430Z" level=info msg="CreateContainer within sandbox \"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 01:16:20.858777 containerd[1502]: time="2025-11-08T01:16:20.858714501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-wd97q,Uid:6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39,Namespace:calico-system,Attempt:1,}" Nov 8 01:16:20.957588 kubelet[2667]: E1108 01:16:20.955336 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:20.959429 kubelet[2667]: E1108 01:16:20.959376 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:16:20.961739 containerd[1502]: time="2025-11-08T01:16:20.961491017Z" level=info msg="CreateContainer within sandbox \"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4af18746f58346742b5eaabe03b2c2c83342e08475c3e0b05e3b72fd809f7dcf\"" Nov 8 01:16:20.965939 containerd[1502]: time="2025-11-08T01:16:20.965753273Z" level=info msg="StartContainer for \"4af18746f58346742b5eaabe03b2c2c83342e08475c3e0b05e3b72fd809f7dcf\"" Nov 8 01:16:20.978397 containerd[1502]: time="2025-11-08T01:16:20.978235676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-698zh,Uid:2dbb88ca-6a44-41bd-ba35-48c338cd1fe1,Namespace:kube-system,Attempt:1,} returns sandbox id \"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954\"" Nov 8 01:16:20.989279 containerd[1502]: time="2025-11-08T01:16:20.989074805Z" level=info msg="CreateContainer within sandbox \"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 01:16:21.034206 kernel: bpftool[4863]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 01:16:21.044615 containerd[1502]: time="2025-11-08T01:16:21.044418050Z" level=info msg="CreateContainer within sandbox 
\"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a9034f4b7baead0637651abf33491edae7dc7702d8ff657fa16a38a863263326\"" Nov 8 01:16:21.046762 containerd[1502]: time="2025-11-08T01:16:21.046680103Z" level=info msg="StartContainer for \"a9034f4b7baead0637651abf33491edae7dc7702d8ff657fa16a38a863263326\"" Nov 8 01:16:21.055495 systemd[1]: Started cri-containerd-4af18746f58346742b5eaabe03b2c2c83342e08475c3e0b05e3b72fd809f7dcf.scope - libcontainer container 4af18746f58346742b5eaabe03b2c2c83342e08475c3e0b05e3b72fd809f7dcf. Nov 8 01:16:21.134113 containerd[1502]: time="2025-11-08T01:16:21.134048774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:21.143521 containerd[1502]: time="2025-11-08T01:16:21.143335986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:16:21.144033 containerd[1502]: time="2025-11-08T01:16:21.143923283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:16:21.146065 kubelet[2667]: E1108 01:16:21.144935 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:16:21.146065 kubelet[2667]: E1108 01:16:21.145047 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:16:21.146065 kubelet[2667]: E1108 01:16:21.145823 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,Windows
Options:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:21.154854 kubelet[2667]: E1108 01:16:21.147082 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:21.157052 containerd[1502]: time="2025-11-08T01:16:21.157010539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:16:21.194953 systemd[1]: Started cri-containerd-a9034f4b7baead0637651abf33491edae7dc7702d8ff657fa16a38a863263326.scope - libcontainer container a9034f4b7baead0637651abf33491edae7dc7702d8ff657fa16a38a863263326. 
Nov 8 01:16:21.277312 containerd[1502]: time="2025-11-08T01:16:21.276377988Z" level=info msg="StartContainer for \"4af18746f58346742b5eaabe03b2c2c83342e08475c3e0b05e3b72fd809f7dcf\" returns successfully" Nov 8 01:16:21.376529 containerd[1502]: time="2025-11-08T01:16:21.375994585Z" level=info msg="StartContainer for \"a9034f4b7baead0637651abf33491edae7dc7702d8ff657fa16a38a863263326\" returns successfully" Nov 8 01:16:21.399901 systemd-networkd[1416]: cali6d97cd7930f: Gained IPv6LL Nov 8 01:16:21.471487 containerd[1502]: time="2025-11-08T01:16:21.470218020Z" level=info msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" Nov 8 01:16:21.526469 systemd-networkd[1416]: calibd430cccb71: Gained IPv6LL Nov 8 01:16:21.553011 containerd[1502]: time="2025-11-08T01:16:21.552847203Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:21.555452 containerd[1502]: time="2025-11-08T01:16:21.555203875Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:16:21.555452 containerd[1502]: time="2025-11-08T01:16:21.555327972Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:16:21.556487 kubelet[2667]: E1108 01:16:21.555856 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:16:21.556487 
kubelet[2667]: E1108 01:16:21.555928 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:16:21.556487 kubelet[2667]: E1108 01:16:21.556263 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fb7bc4b99-gm555_calico-system(bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:21.557564 kubelet[2667]: E1108 01:16:21.557506 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:16:21.559270 containerd[1502]: time="2025-11-08T01:16:21.559222369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:21.814375 systemd-networkd[1416]: cali756d0b101d1: Link UP Nov 8 01:16:21.814898 systemd-networkd[1416]: cali756d0b101d1: Gained carrier Nov 8 01:16:21.823125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983831341.mount: Deactivated successfully. Nov 8 01:16:21.895133 containerd[1502]: time="2025-11-08T01:16:21.894878845Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:21.899409 containerd[1502]: time="2025-11-08T01:16:21.899043681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:21.899409 containerd[1502]: time="2025-11-08T01:16:21.899291227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:21.900077 kubelet[2667]: E1108 01:16:21.899881 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:21.900077 kubelet[2667]: E1108 01:16:21.899991 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:21.901628 kubelet[2667]: E1108 01:16:21.901051 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:21.903360 kubelet[2667]: E1108 01:16:21.903196 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.267 [INFO][4820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0 goldmane-666569f655- calico-system 6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39 1041 0 2025-11-08 01:15:43 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-1w3cb.gb1.brightbox.com goldmane-666569f655-wd97q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali756d0b101d1 [] [] }} ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.269 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.638 [INFO][4923] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" HandleID="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.638 [INFO][4923] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" HandleID="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003471a0), 
Attrs:map[string]string{"namespace":"calico-system", "node":"srv-1w3cb.gb1.brightbox.com", "pod":"goldmane-666569f655-wd97q", "timestamp":"2025-11-08 01:16:21.638436906 +0000 UTC"}, Hostname:"srv-1w3cb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.638 [INFO][4923] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.638 [INFO][4923] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.638 [INFO][4923] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-1w3cb.gb1.brightbox.com' Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.666 [INFO][4923] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.676 [INFO][4923] ipam/ipam.go 394: Looking up existing affinities for host host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.706 [INFO][4923] ipam/ipam.go 511: Trying affinity for 192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.714 [INFO][4923] ipam/ipam.go 158: Attempting to load block cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.727 [INFO][4923] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.17.64/26 host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.728 [INFO][4923] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.17.64/26 
handle="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.732 [INFO][4923] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053 Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.759 [INFO][4923] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.17.64/26 handle="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.771 [INFO][4923] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.17.73/26] block=192.168.17.64/26 handle="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.772 [INFO][4923] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.17.73/26] handle="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" host="srv-1w3cb.gb1.brightbox.com" Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.772 [INFO][4923] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 01:16:21.906143 containerd[1502]: 2025-11-08 01:16:21.772 [INFO][4923] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.17.73/26] IPv6=[] ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" HandleID="k8s-pod-network.9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.780 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-wd97q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali756d0b101d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.781 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.17.73/32] ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.781 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali756d0b101d1 ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.819 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.836 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", 
SelfLink:"", UID:"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053", Pod:"goldmane-666569f655-wd97q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali756d0b101d1", MAC:"1a:0d:b1:fc:67:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:21.908103 containerd[1502]: 2025-11-08 01:16:21.901 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053" Namespace="calico-system" Pod="goldmane-666569f655-wd97q" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.738 [WARNING][4940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cecb66f9-6863-43bf-b9c2-fcaa31f6928a", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27", Pod:"coredns-668d6bf9bc-wc8lj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5537a440ea5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:21.940548 containerd[1502]: 
2025-11-08 01:16:21.739 [INFO][4940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.739 [INFO][4940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" iface="eth0" netns="" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.739 [INFO][4940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.739 [INFO][4940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.875 [INFO][4953] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.876 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.876 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.921 [WARNING][4953] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.921 [INFO][4953] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.932 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:21.940548 containerd[1502]: 2025-11-08 01:16:21.935 [INFO][4940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:21.940548 containerd[1502]: time="2025-11-08T01:16:21.939588836Z" level=info msg="TearDown network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" successfully" Nov 8 01:16:21.940548 containerd[1502]: time="2025-11-08T01:16:21.939625384Z" level=info msg="StopPodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" returns successfully" Nov 8 01:16:21.956069 containerd[1502]: time="2025-11-08T01:16:21.955687921Z" level=info msg="RemovePodSandbox for \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" Nov 8 01:16:21.956069 containerd[1502]: time="2025-11-08T01:16:21.955755292Z" level=info msg="Forcibly stopping sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\"" Nov 8 01:16:21.985258 containerd[1502]: time="2025-11-08T01:16:21.984657667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 01:16:21.985258 containerd[1502]: time="2025-11-08T01:16:21.984750760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 01:16:21.985258 containerd[1502]: time="2025-11-08T01:16:21.984769513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:21.985258 containerd[1502]: time="2025-11-08T01:16:21.984890237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 01:16:21.990244 kubelet[2667]: E1108 01:16:21.988100 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:16:21.992386 kubelet[2667]: E1108 01:16:21.991559 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 
01:16:21.999668 kubelet[2667]: E1108 01:16:21.999608 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:22.043545 systemd-networkd[1416]: cali5537a440ea5: Gained IPv6LL Nov 8 01:16:22.047415 systemd[1]: Started cri-containerd-9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053.scope - libcontainer container 9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053. 
Nov 8 01:16:22.055195 kubelet[2667]: I1108 01:16:22.053497 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-698zh" podStartSLOduration=55.05346086 podStartE2EDuration="55.05346086s" podCreationTimestamp="2025-11-08 01:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:16:22.052448908 +0000 UTC m=+60.870554925" watchObservedRunningTime="2025-11-08 01:16:22.05346086 +0000 UTC m=+60.871566861" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.216 [WARNING][5009] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cecb66f9-6863-43bf-b9c2-fcaa31f6928a", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"952a88b9fa53a03862377d559477b9bdd6f25b160bf01cebb0b97523dcf2fd27", Pod:"coredns-668d6bf9bc-wc8lj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.71/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5537a440ea5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.216 [INFO][5009] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.216 [INFO][5009] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" iface="eth0" netns="" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.216 [INFO][5009] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.216 [INFO][5009] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.269 [INFO][5034] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.270 [INFO][5034] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.270 [INFO][5034] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.302 [WARNING][5034] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.302 [INFO][5034] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" HandleID="k8s-pod-network.532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--wc8lj-eth0" Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.350 [INFO][5034] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:22.356919 containerd[1502]: 2025-11-08 01:16:22.354 [INFO][5009] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0" Nov 8 01:16:22.358920 containerd[1502]: time="2025-11-08T01:16:22.358225485Z" level=info msg="TearDown network for sandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" successfully" Nov 8 01:16:22.357356 systemd-networkd[1416]: cali33e6f0003fe: Gained IPv6LL Nov 8 01:16:22.378105 containerd[1502]: time="2025-11-08T01:16:22.377634649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:22.378105 containerd[1502]: time="2025-11-08T01:16:22.377734789Z" level=info msg="RemovePodSandbox \"532b2f1f63b6ab70a518924a1224e8f913849338a090822fd0e9d0f7930505d0\" returns successfully" Nov 8 01:16:22.382127 containerd[1502]: time="2025-11-08T01:16:22.382086959Z" level=info msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" Nov 8 01:16:22.447591 kubelet[2667]: I1108 01:16:22.447503 2667 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-wc8lj" podStartSLOduration=55.44747716 podStartE2EDuration="55.44747716s" podCreationTimestamp="2025-11-08 01:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 01:16:22.401414192 +0000 UTC m=+61.219520213" watchObservedRunningTime="2025-11-08 01:16:22.44747716 +0000 UTC m=+61.265583166" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.497 [WARNING][5049] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfed565-fcb5-4110-9fc3-0c3a9aaca493", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b", Pod:"calico-apiserver-54cbc7f844-sccdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd430cccb71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.497 [INFO][5049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.498 [INFO][5049] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" iface="eth0" netns="" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.498 [INFO][5049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.498 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.563 [INFO][5057] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.564 [INFO][5057] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.564 [INFO][5057] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.576 [WARNING][5057] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.576 [INFO][5057] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.579 [INFO][5057] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:22.583291 containerd[1502]: 2025-11-08 01:16:22.580 [INFO][5049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.583291 containerd[1502]: time="2025-11-08T01:16:22.583234560Z" level=info msg="TearDown network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" successfully" Nov 8 01:16:22.583291 containerd[1502]: time="2025-11-08T01:16:22.583312403Z" level=info msg="StopPodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" returns successfully" Nov 8 01:16:22.588344 containerd[1502]: time="2025-11-08T01:16:22.585760738Z" level=info msg="RemovePodSandbox for \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" Nov 8 01:16:22.588344 containerd[1502]: time="2025-11-08T01:16:22.585822785Z" level=info msg="Forcibly stopping sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\"" Nov 8 01:16:22.758565 containerd[1502]: time="2025-11-08T01:16:22.758306058Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-wd97q,Uid:6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39,Namespace:calico-system,Attempt:1,} returns sandbox id \"9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053\"" Nov 8 01:16:22.766207 containerd[1502]: time="2025-11-08T01:16:22.764832005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.700 [WARNING][5071] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cfed565-fcb5-4110-9fc3-0c3a9aaca493", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"e5745334630461442d80140f41eb635917c0c2d9eb2a9dea19b11f2d2304441b", Pod:"calico-apiserver-54cbc7f844-sccdl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd430cccb71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.700 [INFO][5071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.700 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" iface="eth0" netns="" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.700 [INFO][5071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.700 [INFO][5071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.777 [INFO][5079] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.777 [INFO][5079] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.777 [INFO][5079] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.799 [WARNING][5079] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.799 [INFO][5079] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" HandleID="k8s-pod-network.8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--sccdl-eth0" Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.802 [INFO][5079] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:22.813353 containerd[1502]: 2025-11-08 01:16:22.809 [INFO][5071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024" Nov 8 01:16:22.814949 containerd[1502]: time="2025-11-08T01:16:22.813413439Z" level=info msg="TearDown network for sandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" successfully" Nov 8 01:16:22.820241 containerd[1502]: time="2025-11-08T01:16:22.820195557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:22.820349 containerd[1502]: time="2025-11-08T01:16:22.820267839Z" level=info msg="RemovePodSandbox \"8f8dcde7eaa8f12993c87874fc87994c8010d54a296b276dd6ae6b15f2278024\" returns successfully" Nov 8 01:16:22.822645 containerd[1502]: time="2025-11-08T01:16:22.822005889Z" level=info msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" Nov 8 01:16:22.940899 systemd-networkd[1416]: vxlan.calico: Link UP Nov 8 01:16:22.941658 systemd-networkd[1416]: vxlan.calico: Gained carrier Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:22.926 [WARNING][5101] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0", GenerateName:"calico-apiserver-f4d5bbb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"a44b2afe-dc17-4635-9d12-87b1697a9f2b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4d5bbb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18", Pod:"calico-apiserver-f4d5bbb98-9znr7", 
Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2be38051ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:22.931 [INFO][5101] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:22.932 [INFO][5101] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" iface="eth0" netns="" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:22.933 [INFO][5101] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:22.933 [INFO][5101] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.024 [INFO][5113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.024 [INFO][5113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.024 [INFO][5113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.046 [WARNING][5113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.046 [INFO][5113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.049 [INFO][5113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.058995 containerd[1502]: 2025-11-08 01:16:23.056 [INFO][5101] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.062854 containerd[1502]: time="2025-11-08T01:16:23.060204479Z" level=info msg="TearDown network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" successfully" Nov 8 01:16:23.062854 containerd[1502]: time="2025-11-08T01:16:23.060271870Z" level=info msg="StopPodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" returns successfully" Nov 8 01:16:23.062854 containerd[1502]: time="2025-11-08T01:16:23.060993649Z" level=info msg="RemovePodSandbox for \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" Nov 8 01:16:23.062854 containerd[1502]: time="2025-11-08T01:16:23.061029546Z" level=info msg="Forcibly stopping sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\"" Nov 8 01:16:23.062342 systemd-networkd[1416]: cali756d0b101d1: Gained IPv6LL Nov 8 01:16:23.090187 containerd[1502]: time="2025-11-08T01:16:23.089429709Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:23.101204 containerd[1502]: time="2025-11-08T01:16:23.100731079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:16:23.101204 containerd[1502]: time="2025-11-08T01:16:23.101109216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:23.102106 kubelet[2667]: E1108 01:16:23.102015 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:16:23.102399 kubelet[2667]: E1108 01:16:23.102253 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:16:23.104680 kubelet[2667]: E1108 01:16:23.103934 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95vxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wd97q_calico-system(6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:23.105532 kubelet[2667]: E1108 01:16:23.105431 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.147 [WARNING][5144] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0", GenerateName:"calico-apiserver-f4d5bbb98-", Namespace:"calico-apiserver", SelfLink:"", UID:"a44b2afe-dc17-4635-9d12-87b1697a9f2b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f4d5bbb98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"282139f46bdd5487e0b4656cb9c61e7c43d8f78b296df30c5d96cd0459c90c18", Pod:"calico-apiserver-f4d5bbb98-9znr7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid2be38051ec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.148 [INFO][5144] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.148 [INFO][5144] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" iface="eth0" netns="" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.148 [INFO][5144] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.148 [INFO][5144] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.224 [INFO][5156] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.224 [INFO][5156] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.224 [INFO][5156] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.236 [WARNING][5156] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.236 [INFO][5156] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" HandleID="k8s-pod-network.d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--f4d5bbb98--9znr7-eth0" Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.238 [INFO][5156] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.243220 containerd[1502]: 2025-11-08 01:16:23.241 [INFO][5144] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e" Nov 8 01:16:23.246132 containerd[1502]: time="2025-11-08T01:16:23.244403161Z" level=info msg="TearDown network for sandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" successfully" Nov 8 01:16:23.250378 containerd[1502]: time="2025-11-08T01:16:23.250341944Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:23.250645 containerd[1502]: time="2025-11-08T01:16:23.250575550Z" level=info msg="RemovePodSandbox \"d4e09f48fd1649a7688ce216b08bc7f5e92d0551a41e3486178e4d11a583320e\" returns successfully" Nov 8 01:16:23.252537 containerd[1502]: time="2025-11-08T01:16:23.252188772Z" level=info msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.335 [WARNING][5171] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51b46487-75c6-4a08-a5c4-0240abff3a0b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858", Pod:"csi-node-driver-qxczs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a477e6f1ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.336 [INFO][5171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.336 [INFO][5171] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" iface="eth0" netns="" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.336 [INFO][5171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.336 [INFO][5171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.392 [INFO][5178] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.393 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.393 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.403 [WARNING][5178] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.403 [INFO][5178] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.405 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.415993 containerd[1502]: 2025-11-08 01:16:23.408 [INFO][5171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.418904 containerd[1502]: time="2025-11-08T01:16:23.415993120Z" level=info msg="TearDown network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" successfully" Nov 8 01:16:23.418904 containerd[1502]: time="2025-11-08T01:16:23.416032744Z" level=info msg="StopPodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" returns successfully" Nov 8 01:16:23.423383 containerd[1502]: time="2025-11-08T01:16:23.422781189Z" level=info msg="RemovePodSandbox for \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" Nov 8 01:16:23.423383 containerd[1502]: time="2025-11-08T01:16:23.422951350Z" level=info msg="Forcibly stopping sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\"" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.497 [WARNING][5192] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"51b46487-75c6-4a08-a5c4-0240abff3a0b", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"29719bc561866993379d07658641cc329328f1ab39a53a60332c19583a822858", Pod:"csi-node-driver-qxczs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.17.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2a477e6f1ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.497 [INFO][5192] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.498 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" iface="eth0" netns="" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.498 [INFO][5192] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.498 [INFO][5192] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.557 [INFO][5199] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.558 [INFO][5199] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.558 [INFO][5199] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.567 [WARNING][5199] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.567 [INFO][5199] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" HandleID="k8s-pod-network.b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Workload="srv--1w3cb.gb1.brightbox.com-k8s-csi--node--driver--qxczs-eth0" Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.569 [INFO][5199] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.576573 containerd[1502]: 2025-11-08 01:16:23.571 [INFO][5192] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8" Nov 8 01:16:23.576573 containerd[1502]: time="2025-11-08T01:16:23.575774507Z" level=info msg="TearDown network for sandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" successfully" Nov 8 01:16:23.582325 containerd[1502]: time="2025-11-08T01:16:23.582240349Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:23.582457 containerd[1502]: time="2025-11-08T01:16:23.582401297Z" level=info msg="RemovePodSandbox \"b940e61325532ff0655ba5d3c120ff1561fb90645323709d9759edab723cffc8\" returns successfully" Nov 8 01:16:23.584640 containerd[1502]: time="2025-11-08T01:16:23.583579309Z" level=info msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.651 [WARNING][5213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e1f2813-87fb-41fd-ad67-d8abf3b908a6", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a", Pod:"calico-apiserver-54cbc7f844-zdscz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860db06f897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.651 [INFO][5213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.651 [INFO][5213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" iface="eth0" netns="" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.651 [INFO][5213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.651 [INFO][5213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.705 [INFO][5220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.705 [INFO][5220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.705 [INFO][5220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.718 [WARNING][5220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.719 [INFO][5220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.721 [INFO][5220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.727504 containerd[1502]: 2025-11-08 01:16:23.723 [INFO][5213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.727504 containerd[1502]: time="2025-11-08T01:16:23.727376673Z" level=info msg="TearDown network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" successfully" Nov 8 01:16:23.727504 containerd[1502]: time="2025-11-08T01:16:23.727440493Z" level=info msg="StopPodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" returns successfully" Nov 8 01:16:23.729852 containerd[1502]: time="2025-11-08T01:16:23.729715346Z" level=info msg="RemovePodSandbox for \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" Nov 8 01:16:23.729852 containerd[1502]: time="2025-11-08T01:16:23.729768119Z" level=info msg="Forcibly stopping sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\"" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.784 [WARNING][5238] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0", GenerateName:"calico-apiserver-54cbc7f844-", Namespace:"calico-apiserver", SelfLink:"", UID:"3e1f2813-87fb-41fd-ad67-d8abf3b908a6", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54cbc7f844", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"1777312411692ccdbefe5386f512997be2ae8fba8b43fa35eef46a63f98d989a", Pod:"calico-apiserver-54cbc7f844-zdscz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.17.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali860db06f897", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.785 [INFO][5238] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.785 [INFO][5238] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" iface="eth0" netns="" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.785 [INFO][5238] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.785 [INFO][5238] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.834 [INFO][5251] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.835 [INFO][5251] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.835 [INFO][5251] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.848 [WARNING][5251] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.848 [INFO][5251] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" HandleID="k8s-pod-network.7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--apiserver--54cbc7f844--zdscz-eth0" Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.850 [INFO][5251] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.858982 containerd[1502]: 2025-11-08 01:16:23.855 [INFO][5238] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96" Nov 8 01:16:23.859764 containerd[1502]: time="2025-11-08T01:16:23.859047458Z" level=info msg="TearDown network for sandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" successfully" Nov 8 01:16:23.863351 containerd[1502]: time="2025-11-08T01:16:23.863302923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:23.863438 containerd[1502]: time="2025-11-08T01:16:23.863374386Z" level=info msg="RemovePodSandbox \"7fa09bbb5f4d4b6726c92570c3a723705cb000724a568a4da48ce08f50f70b96\" returns successfully" Nov 8 01:16:23.864602 containerd[1502]: time="2025-11-08T01:16:23.864503898Z" level=info msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.920 [WARNING][5270] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.921 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.921 [INFO][5270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" iface="eth0" netns="" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.921 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.921 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.972 [INFO][5280] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.973 [INFO][5280] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.973 [INFO][5280] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.988 [WARNING][5280] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.988 [INFO][5280] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.990 [INFO][5280] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:23.994921 containerd[1502]: 2025-11-08 01:16:23.992 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:23.994921 containerd[1502]: time="2025-11-08T01:16:23.994279274Z" level=info msg="TearDown network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" successfully" Nov 8 01:16:23.994921 containerd[1502]: time="2025-11-08T01:16:23.994314323Z" level=info msg="StopPodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" returns successfully" Nov 8 01:16:23.997747 containerd[1502]: time="2025-11-08T01:16:23.996125176Z" level=info msg="RemovePodSandbox for \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" Nov 8 01:16:23.997747 containerd[1502]: time="2025-11-08T01:16:23.996163129Z" level=info msg="Forcibly stopping sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\"" Nov 8 01:16:24.020016 kubelet[2667]: E1108 01:16:24.019095 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.093 [WARNING][5305] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" WorkloadEndpoint="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.093 [INFO][5305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.093 [INFO][5305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" iface="eth0" netns="" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.093 [INFO][5305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.094 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.150 [INFO][5320] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.152 [INFO][5320] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.152 [INFO][5320] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.164 [WARNING][5320] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.165 [INFO][5320] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" HandleID="k8s-pod-network.9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Workload="srv--1w3cb.gb1.brightbox.com-k8s-whisker--545dcd8949--9clbl-eth0" Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.170 [INFO][5320] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:24.176879 containerd[1502]: 2025-11-08 01:16:24.173 [INFO][5305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6" Nov 8 01:16:24.176879 containerd[1502]: time="2025-11-08T01:16:24.175546576Z" level=info msg="TearDown network for sandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" successfully" Nov 8 01:16:24.188144 containerd[1502]: time="2025-11-08T01:16:24.187885827Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:24.188403 containerd[1502]: time="2025-11-08T01:16:24.188133410Z" level=info msg="RemovePodSandbox \"9ef3780ce0ca663734b656555080e9ec9883e2db79b570ca38e838b9c0412bc6\" returns successfully" Nov 8 01:16:24.189622 containerd[1502]: time="2025-11-08T01:16:24.189582857Z" level=info msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.270 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0", GenerateName:"calico-kube-controllers-5fb7bc4b99-", Namespace:"calico-system", SelfLink:"", UID:"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb7bc4b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a", Pod:"calico-kube-controllers-5fb7bc4b99-gm555", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d97cd7930f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.274 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.274 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" iface="eth0" netns="" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.274 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.274 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.306 [INFO][5355] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.306 [INFO][5355] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.306 [INFO][5355] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.329 [WARNING][5355] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.329 [INFO][5355] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.339 [INFO][5355] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:24.343900 containerd[1502]: 2025-11-08 01:16:24.341 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.343900 containerd[1502]: time="2025-11-08T01:16:24.343726150Z" level=info msg="TearDown network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" successfully" Nov 8 01:16:24.343900 containerd[1502]: time="2025-11-08T01:16:24.343770206Z" level=info msg="StopPodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" returns successfully" Nov 8 01:16:24.347656 containerd[1502]: time="2025-11-08T01:16:24.346001328Z" level=info msg="RemovePodSandbox for \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" Nov 8 01:16:24.347656 containerd[1502]: time="2025-11-08T01:16:24.346075154Z" level=info msg="Forcibly stopping sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\"" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.407 [WARNING][5369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0", GenerateName:"calico-kube-controllers-5fb7bc4b99-", Namespace:"calico-system", SelfLink:"", UID:"bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fb7bc4b99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"84405fcbf4265566c174f5029cd6a83d9c7814d3dd2ceec262ac692f4d36d66a", Pod:"calico-kube-controllers-5fb7bc4b99-gm555", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.17.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6d97cd7930f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.407 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.407 [INFO][5369] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" iface="eth0" netns="" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.407 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.407 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.435 [INFO][5376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.436 [INFO][5376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.436 [INFO][5376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.446 [WARNING][5376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.446 [INFO][5376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" HandleID="k8s-pod-network.70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Workload="srv--1w3cb.gb1.brightbox.com-k8s-calico--kube--controllers--5fb7bc4b99--gm555-eth0" Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.448 [INFO][5376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:24.452644 containerd[1502]: 2025-11-08 01:16:24.450 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc" Nov 8 01:16:24.454938 containerd[1502]: time="2025-11-08T01:16:24.452853311Z" level=info msg="TearDown network for sandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" successfully" Nov 8 01:16:24.458441 containerd[1502]: time="2025-11-08T01:16:24.458404549Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:24.458691 containerd[1502]: time="2025-11-08T01:16:24.458662310Z" level=info msg="RemovePodSandbox \"70bc521fb6b51b206df1441dd0fd2231ba9655e7aaeaafe1b62a09b7b60ef1cc\" returns successfully" Nov 8 01:16:24.459575 containerd[1502]: time="2025-11-08T01:16:24.459538617Z" level=info msg="StopPodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.515 [WARNING][5390] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954", Pod:"coredns-668d6bf9bc-698zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33e6f0003fe", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.516 [INFO][5390] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.516 [INFO][5390] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" iface="eth0" netns="" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.516 [INFO][5390] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.516 [INFO][5390] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.545 [INFO][5397] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.545 [INFO][5397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.545 [INFO][5397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.555 [WARNING][5397] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.556 [INFO][5397] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.558 [INFO][5397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:24.563281 containerd[1502]: 2025-11-08 01:16:24.560 [INFO][5390] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.564573 containerd[1502]: time="2025-11-08T01:16:24.563322121Z" level=info msg="TearDown network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" successfully" Nov 8 01:16:24.564573 containerd[1502]: time="2025-11-08T01:16:24.563369469Z" level=info msg="StopPodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" returns successfully" Nov 8 01:16:24.565340 containerd[1502]: time="2025-11-08T01:16:24.565307511Z" level=info msg="RemovePodSandbox for \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" Nov 8 01:16:24.565433 containerd[1502]: time="2025-11-08T01:16:24.565358608Z" level=info msg="Forcibly stopping sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\"" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.618 [WARNING][5411] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"2dbb88ca-6a44-41bd-ba35-48c338cd1fe1", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"98dcefa71fc0a54ea29a57e288c14e9d9d8d550107bea193167f5dddde2ff954", Pod:"coredns-668d6bf9bc-698zh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.17.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali33e6f0003fe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:16:24.666231 containerd[1502]: 
2025-11-08 01:16:24.618 [INFO][5411] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.618 [INFO][5411] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" iface="eth0" netns="" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.618 [INFO][5411] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.618 [INFO][5411] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.649 [INFO][5418] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.649 [INFO][5418] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.649 [INFO][5418] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.659 [WARNING][5418] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.659 [INFO][5418] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" HandleID="k8s-pod-network.a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Workload="srv--1w3cb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--698zh-eth0" Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.661 [INFO][5418] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:16:24.666231 containerd[1502]: 2025-11-08 01:16:24.663 [INFO][5411] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31" Nov 8 01:16:24.666231 containerd[1502]: time="2025-11-08T01:16:24.666195883Z" level=info msg="TearDown network for sandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" successfully" Nov 8 01:16:24.676442 containerd[1502]: time="2025-11-08T01:16:24.676374767Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 01:16:24.676620 containerd[1502]: time="2025-11-08T01:16:24.676483130Z" level=info msg="RemovePodSandbox \"a79e3ac75c52b002b290f24e14d82debc763f4c710208ed0573f8c3897aa2d31\" returns successfully" Nov 8 01:16:24.789422 systemd-networkd[1416]: vxlan.calico: Gained IPv6LL Nov 8 01:16:31.374208 containerd[1502]: time="2025-11-08T01:16:31.374086136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:31.696144 containerd[1502]: time="2025-11-08T01:16:31.695782349Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:31.698291 containerd[1502]: time="2025-11-08T01:16:31.698109429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:31.698291 containerd[1502]: time="2025-11-08T01:16:31.698199550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:31.698613 kubelet[2667]: E1108 01:16:31.698539 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:31.699366 kubelet[2667]: E1108 01:16:31.698631 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 
01:16:31.699366 kubelet[2667]: E1108 01:16:31.698945 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j56qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:31.700295 kubelet[2667]: E1108 01:16:31.700229 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:33.371114 containerd[1502]: time="2025-11-08T01:16:33.370989534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:33.686962 containerd[1502]: 
time="2025-11-08T01:16:33.686570008Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:33.688397 containerd[1502]: time="2025-11-08T01:16:33.688251540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:33.688397 containerd[1502]: time="2025-11-08T01:16:33.688331519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:33.688601 kubelet[2667]: E1108 01:16:33.688528 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:33.689019 kubelet[2667]: E1108 01:16:33.688619 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:33.701389 kubelet[2667]: E1108 01:16:33.688798 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:33.703215 kubelet[2667]: E1108 01:16:33.703123 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:16:35.373134 containerd[1502]: time="2025-11-08T01:16:35.371992299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:16:35.690014 containerd[1502]: time="2025-11-08T01:16:35.689378706Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:35.691012 containerd[1502]: time="2025-11-08T01:16:35.690955006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:16:35.691132 containerd[1502]: time="2025-11-08T01:16:35.691090782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:16:35.691753 kubelet[2667]: E1108 01:16:35.691343 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:16:35.691753 kubelet[2667]: E1108 01:16:35.691438 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:16:35.691753 kubelet[2667]: E1108 01:16:35.691629 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70e36517ff7645e2bd57c80a14e0d94d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},S
tartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:35.694437 containerd[1502]: time="2025-11-08T01:16:35.694401223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:16:36.003916 containerd[1502]: time="2025-11-08T01:16:36.003677544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:36.005355 containerd[1502]: time="2025-11-08T01:16:36.005307252Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:16:36.005483 containerd[1502]: time="2025-11-08T01:16:36.005431340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:16:36.005755 kubelet[2667]: E1108 01:16:36.005700 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:16:36.005864 kubelet[2667]: E1108 01:16:36.005792 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:16:36.006046 kubelet[2667]: E1108 01:16:36.005979 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false
,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:36.008704 kubelet[2667]: E1108 01:16:36.008577 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:16:36.373879 containerd[1502]: time="2025-11-08T01:16:36.373795081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:36.697081 containerd[1502]: time="2025-11-08T01:16:36.696857378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:36.698535 containerd[1502]: time="2025-11-08T01:16:36.698479471Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:36.698687 containerd[1502]: time="2025-11-08T01:16:36.698629342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:36.698951 kubelet[2667]: E1108 01:16:36.698894 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:36.699721 kubelet[2667]: E1108 01:16:36.698975 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:36.699721 kubelet[2667]: E1108 01:16:36.699204 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxgvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:36.700675 kubelet[2667]: E1108 01:16:36.700632 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:37.373748 containerd[1502]: time="2025-11-08T01:16:37.372819933Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:16:37.757864 containerd[1502]: time="2025-11-08T01:16:37.757595730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:37.759225 containerd[1502]: time="2025-11-08T01:16:37.759152983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:16:37.759339 containerd[1502]: time="2025-11-08T01:16:37.759203004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 01:16:37.759553 kubelet[2667]: E1108 01:16:37.759483 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:16:37.759994 kubelet[2667]: E1108 01:16:37.759565 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:16:37.761242 kubelet[2667]: E1108 01:16:37.759937 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},
LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fb7bc4b99-gm555_calico-system(bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:37.761456 containerd[1502]: time="2025-11-08T01:16:37.760319931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:16:37.761945 kubelet[2667]: E1108 01:16:37.761842 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:16:38.104036 containerd[1502]: time="2025-11-08T01:16:38.103954158Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:38.106929 containerd[1502]: time="2025-11-08T01:16:38.106879846Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:16:38.107062 containerd[1502]: time="2025-11-08T01:16:38.106992315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:16:38.107635 kubelet[2667]: E1108 01:16:38.107502 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:16:38.107843 kubelet[2667]: E1108 01:16:38.107728 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:16:38.108352 kubelet[2667]: E1108 01:16:38.108223 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:38.111851 containerd[1502]: time="2025-11-08T01:16:38.111794608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:16:38.494338 containerd[1502]: time="2025-11-08T01:16:38.493825780Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:38.499520 containerd[1502]: time="2025-11-08T01:16:38.499362294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:16:38.499520 containerd[1502]: time="2025-11-08T01:16:38.499436099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:16:38.499818 kubelet[2667]: E1108 01:16:38.499772 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:16:38.499897 kubelet[2667]: E1108 01:16:38.499840 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:16:38.500785 kubelet[2667]: E1108 01:16:38.500231 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:38.500949 containerd[1502]: time="2025-11-08T01:16:38.500287700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:16:38.504416 kubelet[2667]: E1108 01:16:38.501879 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:38.821725 containerd[1502]: time="2025-11-08T01:16:38.821615767Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:38.823779 containerd[1502]: time="2025-11-08T01:16:38.823705413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:16:38.824065 containerd[1502]: time="2025-11-08T01:16:38.823976714Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:38.824501 kubelet[2667]: E1108 01:16:38.824424 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:16:38.825132 kubelet[2667]: E1108 01:16:38.824543 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:16:38.826139 kubelet[2667]: E1108 01:16:38.825967 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95vxr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wd97q_calico-system(6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:38.827291 kubelet[2667]: E1108 01:16:38.827216 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:40.374734 systemd[1]: Started sshd@7-10.244.23.242:22-139.178.68.195:47442.service - OpenSSH per-connection server daemon (139.178.68.195:47442). 
Nov 8 01:16:41.352219 sshd[5448]: Accepted publickey for core from 139.178.68.195 port 47442 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:16:41.356386 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:16:41.367774 systemd-logind[1488]: New session 10 of user core. Nov 8 01:16:41.376459 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 01:16:42.670089 sshd[5448]: pam_unix(sshd:session): session closed for user core Nov 8 01:16:42.677734 systemd[1]: sshd@7-10.244.23.242:22-139.178.68.195:47442.service: Deactivated successfully. Nov 8 01:16:42.683265 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 01:16:42.684941 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit. Nov 8 01:16:42.688268 systemd-logind[1488]: Removed session 10. Nov 8 01:16:44.374050 kubelet[2667]: E1108 01:16:44.373958 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:45.370443 kubelet[2667]: E1108 01:16:45.370045 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:16:47.373018 kubelet[2667]: E1108 01:16:47.372871 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:16:47.839633 systemd[1]: Started sshd@8-10.244.23.242:22-139.178.68.195:35802.service - OpenSSH per-connection server daemon (139.178.68.195:35802). Nov 8 01:16:48.377790 kubelet[2667]: E1108 01:16:48.376495 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" 
podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:16:48.836023 sshd[5475]: Accepted publickey for core from 139.178.68.195 port 35802 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:16:48.838908 sshd[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:16:48.853611 systemd-logind[1488]: New session 11 of user core. Nov 8 01:16:48.861595 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 01:16:49.774154 sshd[5475]: pam_unix(sshd:session): session closed for user core Nov 8 01:16:49.779449 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit. Nov 8 01:16:49.780281 systemd[1]: sshd@8-10.244.23.242:22-139.178.68.195:35802.service: Deactivated successfully. Nov 8 01:16:49.783545 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 01:16:49.787122 systemd-logind[1488]: Removed session 11. Nov 8 01:16:50.375515 kubelet[2667]: E1108 01:16:50.375350 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:16:53.373430 
kubelet[2667]: E1108 01:16:53.373303 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:16:54.370764 kubelet[2667]: E1108 01:16:54.370678 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:16:54.934576 systemd[1]: Started sshd@9-10.244.23.242:22-139.178.68.195:43842.service - OpenSSH per-connection server daemon (139.178.68.195:43842). Nov 8 01:16:56.049890 sshd[5513]: Accepted publickey for core from 139.178.68.195 port 43842 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:16:56.052132 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:16:56.061012 systemd-logind[1488]: New session 12 of user core. Nov 8 01:16:56.066645 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 8 01:16:56.802622 sshd[5513]: pam_unix(sshd:session): session closed for user core Nov 8 01:16:56.808624 systemd[1]: sshd@9-10.244.23.242:22-139.178.68.195:43842.service: Deactivated successfully. Nov 8 01:16:56.812156 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 01:16:56.813566 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit. Nov 8 01:16:56.815761 systemd-logind[1488]: Removed session 12. Nov 8 01:16:56.962561 systemd[1]: Started sshd@10-10.244.23.242:22-139.178.68.195:43856.service - OpenSSH per-connection server daemon (139.178.68.195:43856). Nov 8 01:16:57.900500 sshd[5526]: Accepted publickey for core from 139.178.68.195 port 43856 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:16:57.902657 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:16:57.912462 systemd-logind[1488]: New session 13 of user core. Nov 8 01:16:57.924484 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 01:16:58.717162 sshd[5526]: pam_unix(sshd:session): session closed for user core Nov 8 01:16:58.723095 systemd[1]: sshd@10-10.244.23.242:22-139.178.68.195:43856.service: Deactivated successfully. Nov 8 01:16:58.723587 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit. Nov 8 01:16:58.726501 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 01:16:58.730083 systemd-logind[1488]: Removed session 13. Nov 8 01:16:58.884651 systemd[1]: Started sshd@11-10.244.23.242:22-139.178.68.195:43870.service - OpenSSH per-connection server daemon (139.178.68.195:43870). 
Nov 8 01:16:59.372981 containerd[1502]: time="2025-11-08T01:16:59.371922102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:16:59.705512 containerd[1502]: time="2025-11-08T01:16:59.705124222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:16:59.706770 containerd[1502]: time="2025-11-08T01:16:59.706601879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:16:59.706770 containerd[1502]: time="2025-11-08T01:16:59.706681832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:16:59.707655 kubelet[2667]: E1108 01:16:59.707046 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:59.707655 kubelet[2667]: E1108 01:16:59.707161 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:16:59.707655 kubelet[2667]: E1108 01:16:59.707534 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j56qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:16:59.708775 kubelet[2667]: E1108 01:16:59.708680 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:16:59.825619 sshd[5539]: Accepted publickey for core from 139.178.68.195 port 43870 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:16:59.828479 sshd[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:16:59.836661 systemd-logind[1488]: New session 14 of user core. Nov 8 01:16:59.841458 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 01:17:00.373358 containerd[1502]: time="2025-11-08T01:17:00.373118577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:17:00.629333 sshd[5539]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:00.638679 systemd[1]: sshd@11-10.244.23.242:22-139.178.68.195:43870.service: Deactivated successfully. Nov 8 01:17:00.642924 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 01:17:00.644365 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit. Nov 8 01:17:00.646153 systemd-logind[1488]: Removed session 14. 
Nov 8 01:17:00.725454 containerd[1502]: time="2025-11-08T01:17:00.725376045Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:00.727268 containerd[1502]: time="2025-11-08T01:17:00.727081845Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:17:00.727268 containerd[1502]: time="2025-11-08T01:17:00.727139876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:17:00.728880 kubelet[2667]: E1108 01:17:00.727640 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:00.728880 kubelet[2667]: E1108 01:17:00.727718 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:00.728880 kubelet[2667]: E1108 01:17:00.728080 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxgvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:00.729648 containerd[1502]: time="2025-11-08T01:17:00.728540951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:17:00.729729 kubelet[2667]: E1108 01:17:00.729332 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:17:01.053232 containerd[1502]: time="2025-11-08T01:17:01.052204864Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:01.060081 containerd[1502]: time="2025-11-08T01:17:01.056632572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:17:01.060081 containerd[1502]: time="2025-11-08T01:17:01.056808511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:17:01.060362 kubelet[2667]: E1108 01:17:01.057049 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:01.060362 kubelet[2667]: E1108 01:17:01.057118 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:01.060362 kubelet[2667]: E1108 01:17:01.057320 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:01.060362 kubelet[2667]: E1108 01:17:01.058875 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:17:01.373957 containerd[1502]: time="2025-11-08T01:17:01.373439672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:17:01.706991 containerd[1502]: 
time="2025-11-08T01:17:01.706780020Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:01.712725 containerd[1502]: time="2025-11-08T01:17:01.712656989Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:17:01.712868 containerd[1502]: time="2025-11-08T01:17:01.712670921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:17:01.713785 kubelet[2667]: E1108 01:17:01.713163 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:17:01.713785 kubelet[2667]: E1108 01:17:01.713279 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:17:01.713785 kubelet[2667]: E1108 01:17:01.713496 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:01.716944 containerd[1502]: time="2025-11-08T01:17:01.716818113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:17:02.042072 containerd[1502]: time="2025-11-08T01:17:02.041864188Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:02.043491 containerd[1502]: time="2025-11-08T01:17:02.043438812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:17:02.043808 containerd[1502]: time="2025-11-08T01:17:02.043583476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:17:02.043900 kubelet[2667]: E1108 01:17:02.043821 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:17:02.044399 kubelet[2667]: E1108 01:17:02.043907 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:17:02.044399 kubelet[2667]: E1108 
01:17:02.044091 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:02.046238 kubelet[2667]: E1108 01:17:02.045729 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:17:03.374411 containerd[1502]: time="2025-11-08T01:17:03.374332730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:17:03.730611 containerd[1502]: time="2025-11-08T01:17:03.730199297Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:03.732110 containerd[1502]: time="2025-11-08T01:17:03.731545467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:17:03.732110 
containerd[1502]: time="2025-11-08T01:17:03.731661491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:17:03.733367 kubelet[2667]: E1108 01:17:03.732514 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:17:03.733367 kubelet[2667]: E1108 01:17:03.732641 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:17:03.733367 kubelet[2667]: E1108 01:17:03.732816 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70e36517ff7645e2bd57c80a14e0d94d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:03.736279 containerd[1502]: time="2025-11-08T01:17:03.735858253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 
01:17:04.051686 containerd[1502]: time="2025-11-08T01:17:04.051403077Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:04.053762 containerd[1502]: time="2025-11-08T01:17:04.053582195Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:17:04.053762 containerd[1502]: time="2025-11-08T01:17:04.053684841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:17:04.054492 kubelet[2667]: E1108 01:17:04.054137 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:17:04.054492 kubelet[2667]: E1108 01:17:04.054237 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:17:04.054492 kubelet[2667]: E1108 01:17:04.054419 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:04.056156 kubelet[2667]: E1108 01:17:04.056067 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:17:04.375359 containerd[1502]: time="2025-11-08T01:17:04.374927383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 01:17:04.701128 containerd[1502]: time="2025-11-08T01:17:04.700709867Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:04.702160 containerd[1502]: time="2025-11-08T01:17:04.702007737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 01:17:04.702160 containerd[1502]: time="2025-11-08T01:17:04.702096581Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active 
requests=0, bytes read=85" Nov 8 01:17:04.702469 kubelet[2667]: E1108 01:17:04.702399 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:17:04.703265 kubelet[2667]: E1108 01:17:04.702496 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 01:17:04.703265 kubelet[2667]: E1108 01:17:04.702835 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6x9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5fb7bc4b99-gm555_calico-system(bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:04.704343 kubelet[2667]: E1108 01:17:04.704264 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:17:05.803622 systemd[1]: Started sshd@12-10.244.23.242:22-139.178.68.195:54862.service - OpenSSH per-connection server daemon (139.178.68.195:54862). 
Nov 8 01:17:06.734221 sshd[5567]: Accepted publickey for core from 139.178.68.195 port 54862 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:06.735414 sshd[5567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:06.744628 systemd-logind[1488]: New session 15 of user core. Nov 8 01:17:06.752372 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 01:17:07.497818 sshd[5567]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:07.503440 systemd[1]: sshd@12-10.244.23.242:22-139.178.68.195:54862.service: Deactivated successfully. Nov 8 01:17:07.507695 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 01:17:07.509370 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit. Nov 8 01:17:07.511998 systemd-logind[1488]: Removed session 15. Nov 8 01:17:09.373067 containerd[1502]: time="2025-11-08T01:17:09.372982470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 01:17:09.695552 containerd[1502]: time="2025-11-08T01:17:09.694465007Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:09.696527 containerd[1502]: time="2025-11-08T01:17:09.696349450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 01:17:09.696527 containerd[1502]: time="2025-11-08T01:17:09.696413793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 01:17:09.696765 kubelet[2667]: E1108 01:17:09.696688 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:17:09.698668 kubelet[2667]: E1108 01:17:09.696789 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 01:17:09.698668 kubelet[2667]: E1108 01:17:09.697139 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-95vxr,ReadOnly
:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-wd97q_calico-system(6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:09.699011 kubelet[2667]: E1108 01:17:09.698951 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:17:12.662347 systemd[1]: Started sshd@13-10.244.23.242:22-139.178.68.195:54872.service - OpenSSH per-connection server daemon (139.178.68.195:54872). Nov 8 01:17:13.371421 kubelet[2667]: E1108 01:17:13.371145 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:17:13.372995 kubelet[2667]: E1108 01:17:13.371637 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:17:13.588721 sshd[5580]: Accepted publickey for core from 139.178.68.195 port 54872 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:13.593188 sshd[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:13.604829 systemd-logind[1488]: New session 16 of user core. 
Nov 8 01:17:13.610441 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 01:17:14.386670 sshd[5580]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:14.393805 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit. Nov 8 01:17:14.395375 systemd[1]: sshd@13-10.244.23.242:22-139.178.68.195:54872.service: Deactivated successfully. Nov 8 01:17:14.399838 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 01:17:14.402937 systemd-logind[1488]: Removed session 16. Nov 8 01:17:15.375502 kubelet[2667]: E1108 01:17:15.374359 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:17:15.381315 kubelet[2667]: E1108 01:17:15.380998 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:17:17.374237 kubelet[2667]: E1108 01:17:17.373624 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:17:17.376255 kubelet[2667]: E1108 01:17:17.375718 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:17:19.556790 systemd[1]: 
Started sshd@14-10.244.23.242:22-139.178.68.195:36958.service - OpenSSH per-connection server daemon (139.178.68.195:36958). Nov 8 01:17:20.523657 sshd[5615]: Accepted publickey for core from 139.178.68.195 port 36958 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:20.526563 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:20.536019 systemd-logind[1488]: New session 17 of user core. Nov 8 01:17:20.543454 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 01:17:21.312471 sshd[5615]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:21.317055 systemd[1]: sshd@14-10.244.23.242:22-139.178.68.195:36958.service: Deactivated successfully. Nov 8 01:17:21.321713 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 01:17:21.325079 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit. Nov 8 01:17:21.327619 systemd-logind[1488]: Removed session 17. Nov 8 01:17:21.483525 systemd[1]: Started sshd@15-10.244.23.242:22-139.178.68.195:36974.service - OpenSSH per-connection server daemon (139.178.68.195:36974). Nov 8 01:17:22.414415 sshd[5630]: Accepted publickey for core from 139.178.68.195 port 36974 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:22.416900 sshd[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:22.423296 systemd-logind[1488]: New session 18 of user core. Nov 8 01:17:22.428399 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 8 01:17:23.372408 kubelet[2667]: E1108 01:17:23.371620 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:17:23.436395 sshd[5630]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:23.445196 systemd[1]: sshd@15-10.244.23.242:22-139.178.68.195:36974.service: Deactivated successfully. Nov 8 01:17:23.448052 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 01:17:23.449363 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit. Nov 8 01:17:23.451890 systemd-logind[1488]: Removed session 18. Nov 8 01:17:23.610715 systemd[1]: Started sshd@16-10.244.23.242:22-139.178.68.195:52552.service - OpenSSH per-connection server daemon (139.178.68.195:52552). Nov 8 01:17:24.565686 sshd[5642]: Accepted publickey for core from 139.178.68.195 port 52552 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:24.568777 sshd[5642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:24.576464 systemd-logind[1488]: New session 19 of user core. Nov 8 01:17:24.584442 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 8 01:17:24.683516 containerd[1502]: time="2025-11-08T01:17:24.683436835Z" level=info msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.815 [WARNING][5653] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39", ResourceVersion:"1490", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053", Pod:"goldmane-666569f655-wd97q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali756d0b101d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 
01:17:24.816 [INFO][5653] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.816 [INFO][5653] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" iface="eth0" netns="" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.816 [INFO][5653] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.816 [INFO][5653] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.872 [INFO][5660] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.874 [INFO][5660] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.874 [INFO][5660] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.885 [WARNING][5660] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.885 [INFO][5660] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.887 [INFO][5660] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:17:24.893349 containerd[1502]: 2025-11-08 01:17:24.890 [INFO][5653] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:24.895454 containerd[1502]: time="2025-11-08T01:17:24.893341349Z" level=info msg="TearDown network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" successfully" Nov 8 01:17:24.895454 containerd[1502]: time="2025-11-08T01:17:24.893432512Z" level=info msg="StopPodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" returns successfully" Nov 8 01:17:24.895454 containerd[1502]: time="2025-11-08T01:17:24.894979851Z" level=info msg="RemovePodSandbox for \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" Nov 8 01:17:24.895454 containerd[1502]: time="2025-11-08T01:17:24.895056110Z" level=info msg="Forcibly stopping sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\"" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.969 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39", ResourceVersion:"1490", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 1, 15, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-1w3cb.gb1.brightbox.com", ContainerID:"9bef7c6b7721f2e11a98007f88c0b4b54f6e5be8ba983c3a3f66339f2383e053", Pod:"goldmane-666569f655-wd97q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.17.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali756d0b101d1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.970 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.970 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" iface="eth0" netns="" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.970 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.970 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.998 [INFO][5681] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.998 [INFO][5681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:24.998 [INFO][5681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:25.009 [WARNING][5681] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:25.009 [INFO][5681] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" HandleID="k8s-pod-network.8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Workload="srv--1w3cb.gb1.brightbox.com-k8s-goldmane--666569f655--wd97q-eth0" Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:25.013 [INFO][5681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 01:17:25.017789 containerd[1502]: 2025-11-08 01:17:25.015 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e" Nov 8 01:17:25.019664 containerd[1502]: time="2025-11-08T01:17:25.017851318Z" level=info msg="TearDown network for sandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" successfully" Nov 8 01:17:25.024155 containerd[1502]: time="2025-11-08T01:17:25.024065418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 01:17:25.024291 containerd[1502]: time="2025-11-08T01:17:25.024208709Z" level=info msg="RemovePodSandbox \"8cfb81260e9eefd98c5c75d45eb7b314355be1ca88d5d84e16dd2436a4dbdf6e\" returns successfully" Nov 8 01:17:26.156339 sshd[5642]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:26.169407 systemd[1]: sshd@16-10.244.23.242:22-139.178.68.195:52552.service: Deactivated successfully. 
Nov 8 01:17:26.175064 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 01:17:26.176652 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit. Nov 8 01:17:26.179772 systemd-logind[1488]: Removed session 19. Nov 8 01:17:26.302548 systemd[1]: Started sshd@17-10.244.23.242:22-139.178.68.195:52558.service - OpenSSH per-connection server daemon (139.178.68.195:52558). Nov 8 01:17:26.371757 kubelet[2667]: E1108 01:17:26.371555 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:17:26.371757 kubelet[2667]: E1108 01:17:26.371679 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:17:27.246242 sshd[5704]: Accepted publickey for core from 139.178.68.195 port 52558 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:27.249553 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:27.258793 systemd-logind[1488]: New session 20 
of user core. Nov 8 01:17:27.265456 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 01:17:28.315437 sshd[5704]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:28.321673 systemd[1]: sshd@17-10.244.23.242:22-139.178.68.195:52558.service: Deactivated successfully. Nov 8 01:17:28.325794 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 01:17:28.327465 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit. Nov 8 01:17:28.328811 systemd-logind[1488]: Removed session 20. Nov 8 01:17:28.472526 systemd[1]: Started sshd@18-10.244.23.242:22-139.178.68.195:52574.service - OpenSSH per-connection server daemon (139.178.68.195:52574). Nov 8 01:17:29.375542 kubelet[2667]: E1108 01:17:29.375205 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 01:17:29.376783 kubelet[2667]: E1108 01:17:29.375659 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:17:29.391104 sshd[5716]: Accepted publickey for core from 139.178.68.195 port 52574 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:29.395577 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:29.411396 systemd-logind[1488]: New session 21 of user core. Nov 8 01:17:29.420640 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 01:17:30.111570 sshd[5716]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:30.117571 systemd-logind[1488]: Session 21 logged out. Waiting for processes to exit. Nov 8 01:17:30.118924 systemd[1]: sshd@18-10.244.23.242:22-139.178.68.195:52574.service: Deactivated successfully. Nov 8 01:17:30.121911 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 01:17:30.123755 systemd-logind[1488]: Removed session 21. 
Nov 8 01:17:31.378346 kubelet[2667]: E1108 01:17:31.378120 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3" Nov 8 01:17:32.370748 kubelet[2667]: E1108 01:17:32.370224 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:17:35.276684 systemd[1]: Started sshd@19-10.244.23.242:22-139.178.68.195:50518.service - OpenSSH per-connection server daemon (139.178.68.195:50518). 
Nov 8 01:17:36.189970 sshd[5733]: Accepted publickey for core from 139.178.68.195 port 50518 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:36.191052 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:36.202524 systemd-logind[1488]: New session 22 of user core. Nov 8 01:17:36.211527 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 01:17:36.957161 sshd[5733]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:36.962042 systemd[1]: sshd@19-10.244.23.242:22-139.178.68.195:50518.service: Deactivated successfully. Nov 8 01:17:36.965594 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 01:17:36.966976 systemd-logind[1488]: Session 22 logged out. Waiting for processes to exit. Nov 8 01:17:36.969385 systemd-logind[1488]: Removed session 22. Nov 8 01:17:37.372500 kubelet[2667]: E1108 01:17:37.372359 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6" Nov 8 01:17:38.371654 kubelet[2667]: E1108 01:17:38.371138 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39" Nov 8 01:17:38.371654 kubelet[2667]: E1108 01:17:38.371552 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493" Nov 8 01:17:42.126639 systemd[1]: Started sshd@20-10.244.23.242:22-139.178.68.195:50526.service - OpenSSH per-connection server daemon (139.178.68.195:50526). Nov 8 01:17:42.391649 containerd[1502]: time="2025-11-08T01:17:42.390836215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 01:17:42.752360 containerd[1502]: time="2025-11-08T01:17:42.752110322Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:42.753704 containerd[1502]: time="2025-11-08T01:17:42.753634499Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 01:17:42.754383 containerd[1502]: time="2025-11-08T01:17:42.753684419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 01:17:42.754469 kubelet[2667]: E1108 01:17:42.754016 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:42.754469 kubelet[2667]: E1108 01:17:42.754134 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 01:17:42.764497 kubelet[2667]: E1108 01:17:42.764407 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vxgvj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-f4d5bbb98-9znr7_calico-apiserver(a44b2afe-dc17-4635-9d12-87b1697a9f2b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:42.766456 kubelet[2667]: E1108 01:17:42.766376 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b" Nov 8 01:17:43.115220 sshd[5746]: Accepted publickey for core from 139.178.68.195 port 50526 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo Nov 8 01:17:43.121618 sshd[5746]: pam_unix(sshd:session): 
session opened for user core(uid=500) by core(uid=0) Nov 8 01:17:43.132514 systemd-logind[1488]: New session 23 of user core. Nov 8 01:17:43.137407 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 8 01:17:43.373409 containerd[1502]: time="2025-11-08T01:17:43.371789145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 01:17:43.373597 kubelet[2667]: E1108 01:17:43.372273 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5fb7bc4b99-gm555" podUID="bcce8685-942f-4e9d-bdd3-fc9f68bc3c6d" Nov 8 01:17:43.693287 containerd[1502]: time="2025-11-08T01:17:43.692992321Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:43.696428 containerd[1502]: time="2025-11-08T01:17:43.696322053Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 01:17:43.696784 containerd[1502]: time="2025-11-08T01:17:43.696565452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 01:17:43.697061 kubelet[2667]: E1108 01:17:43.696977 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:17:43.697216 kubelet[2667]: E1108 01:17:43.697082 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 01:17:43.697409 kubelet[2667]: E1108 01:17:43.697336 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*tr
ue,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:43.702277 containerd[1502]: time="2025-11-08T01:17:43.701668133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 01:17:44.017837 containerd[1502]: time="2025-11-08T01:17:44.017294131Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:44.021493 containerd[1502]: time="2025-11-08T01:17:44.021384056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 01:17:44.021865 containerd[1502]: time="2025-11-08T01:17:44.021613073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 01:17:44.024002 kubelet[2667]: E1108 01:17:44.023617 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:17:44.024002 kubelet[2667]: E1108 01:17:44.023746 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 01:17:44.035682 kubelet[2667]: E1108 01:17:44.035448 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7xzb6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,R
ecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qxczs_calico-system(51b46487-75c6-4a08-a5c4-0240abff3a0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:44.036931 kubelet[2667]: E1108 01:17:44.036815 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b" Nov 8 
01:17:44.443569 sshd[5746]: pam_unix(sshd:session): session closed for user core Nov 8 01:17:44.451720 systemd-logind[1488]: Session 23 logged out. Waiting for processes to exit. Nov 8 01:17:44.454144 systemd[1]: sshd@20-10.244.23.242:22-139.178.68.195:50526.service: Deactivated successfully. Nov 8 01:17:44.459281 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 01:17:44.463552 systemd-logind[1488]: Removed session 23. Nov 8 01:17:46.374545 containerd[1502]: time="2025-11-08T01:17:46.374468152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 01:17:46.700405 containerd[1502]: time="2025-11-08T01:17:46.698663686Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:46.701523 containerd[1502]: time="2025-11-08T01:17:46.701324683Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 01:17:46.701676 containerd[1502]: time="2025-11-08T01:17:46.701367999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 01:17:46.713390 kubelet[2667]: E1108 01:17:46.713295 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:17:46.722922 kubelet[2667]: E1108 01:17:46.722830 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 01:17:46.724450 kubelet[2667]: E1108 01:17:46.723151 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:70e36517ff7645e2bd57c80a14e0d94d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 01:17:46.733251 containerd[1502]: time="2025-11-08T01:17:46.732891017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 01:17:47.099307 containerd[1502]: time="2025-11-08T01:17:47.099229894Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 01:17:47.101208 containerd[1502]: time="2025-11-08T01:17:47.100425970Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 01:17:47.101208 containerd[1502]: time="2025-11-08T01:17:47.100529887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 01:17:47.101360 kubelet[2667]: E1108 01:17:47.100930 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:17:47.101360 kubelet[2667]: E1108 01:17:47.100997 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 01:17:47.101360 kubelet[2667]: E1108 01:17:47.101190 2667 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4xswz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5f775cbcb9-f95x7_calico-system(e22bf778-56e1-456c-a095-d6acd02811e3): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 01:17:47.106189 kubelet[2667]: E1108 01:17:47.104006 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5f775cbcb9-f95x7" podUID="e22bf778-56e1-456c-a095-d6acd02811e3"
Nov 8 01:17:49.406789 kubelet[2667]: E1108 01:17:49.406684 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-wd97q" podUID="6540e6ee-2026-4c3d-b7ab-1e85d3d9ab39"
Nov 8 01:17:49.609665 systemd[1]: Started sshd@21-10.244.23.242:22-139.178.68.195:42092.service - OpenSSH per-connection server daemon (139.178.68.195:42092).
Nov 8 01:17:50.382211 containerd[1502]: time="2025-11-08T01:17:50.380858122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 01:17:50.619248 sshd[5788]: Accepted publickey for core from 139.178.68.195 port 42092 ssh2: RSA SHA256:WintwEBR9u8w0CRLAsglLTHEK+D8nXhu++7OvvIo3oo
Nov 8 01:17:50.625536 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 01:17:50.641216 systemd-logind[1488]: New session 24 of user core.
Nov 8 01:17:50.649367 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 01:17:50.711196 containerd[1502]: time="2025-11-08T01:17:50.710666812Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 01:17:50.713879 containerd[1502]: time="2025-11-08T01:17:50.713824014Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 01:17:50.714476 containerd[1502]: time="2025-11-08T01:17:50.714013602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 01:17:50.721602 kubelet[2667]: E1108 01:17:50.721490 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 01:17:50.728993 kubelet[2667]: E1108 01:17:50.726237 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 01:17:50.731188 kubelet[2667]: E1108 01:17:50.729603 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j56qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-zdscz_calico-apiserver(3e1f2813-87fb-41fd-ad67-d8abf3b908a6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 01:17:50.734337 kubelet[2667]: E1108 01:17:50.734282 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-zdscz" podUID="3e1f2813-87fb-41fd-ad67-d8abf3b908a6"
Nov 8 01:17:50.734637 containerd[1502]: time="2025-11-08T01:17:50.734601862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 01:17:51.078968 containerd[1502]: time="2025-11-08T01:17:51.078391186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 01:17:51.080048 containerd[1502]: time="2025-11-08T01:17:51.079990385Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 01:17:51.080446 containerd[1502]: time="2025-11-08T01:17:51.080240940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 01:17:51.080798 kubelet[2667]: E1108 01:17:51.080639 2667 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 01:17:51.080798 kubelet[2667]: E1108 01:17:51.080726 2667 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 01:17:51.081078 kubelet[2667]: E1108 01:17:51.080988 2667 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbr6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54cbc7f844-sccdl_calico-apiserver(1cfed565-fcb5-4110-9fc3-0c3a9aaca493): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 01:17:51.082554 kubelet[2667]: E1108 01:17:51.082507 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54cbc7f844-sccdl" podUID="1cfed565-fcb5-4110-9fc3-0c3a9aaca493"
Nov 8 01:17:51.833532 sshd[5788]: pam_unix(sshd:session): session closed for user core
Nov 8 01:17:51.842596 systemd[1]: sshd@21-10.244.23.242:22-139.178.68.195:42092.service: Deactivated successfully.
Nov 8 01:17:51.850495 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 01:17:51.854649 systemd-logind[1488]: Session 24 logged out. Waiting for processes to exit.
Nov 8 01:17:51.858805 systemd-logind[1488]: Removed session 24.
Nov 8 01:17:54.376225 kubelet[2667]: E1108 01:17:54.372517 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-f4d5bbb98-9znr7" podUID="a44b2afe-dc17-4635-9d12-87b1697a9f2b"
Nov 8 01:17:54.376225 kubelet[2667]: E1108 01:17:54.374905 2667 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qxczs" podUID="51b46487-75c6-4a08-a5c4-0240abff3a0b"