Nov 1 01:50:22.048858 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 01:50:22.048896 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:50:22.048911 kernel: BIOS-provided physical RAM map:
Nov 1 01:50:22.048927 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 1 01:50:22.048938 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 1 01:50:22.048948 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 1 01:50:22.048960 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Nov 1 01:50:22.048971 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Nov 1 01:50:22.048982 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Nov 1 01:50:22.048993 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Nov 1 01:50:22.049004 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 01:50:22.049105 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 1 01:50:22.049126 kernel: NX (Execute Disable) protection: active
Nov 1 01:50:22.049137 kernel: APIC: Static calls initialized
Nov 1 01:50:22.049163 kernel: SMBIOS 2.8 present.
Nov 1 01:50:22.049183 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Nov 1 01:50:22.049195 kernel: Hypervisor detected: KVM
Nov 1 01:50:22.049213 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 01:50:22.049225 kernel: kvm-clock: using sched offset of 4385431226 cycles
Nov 1 01:50:22.049238 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 01:50:22.049250 kernel: tsc: Detected 2499.998 MHz processor
Nov 1 01:50:22.049262 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 01:50:22.049275 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 01:50:22.049287 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Nov 1 01:50:22.049299 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 1 01:50:22.049311 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 01:50:22.049327 kernel: Using GB pages for direct mapping
Nov 1 01:50:22.049340 kernel: ACPI: Early table checksum verification disabled
Nov 1 01:50:22.049352 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 1 01:50:22.049364 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049376 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049388 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049399 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Nov 1 01:50:22.049411 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049423 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049440 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 01:50:22.049452 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001)
Nov 1 01:50:22.049464 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Nov 1 01:50:22.049476 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Nov 1 01:50:22.049488 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Nov 1 01:50:22.049506 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Nov 1 01:50:22.049519 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Nov 1 01:50:22.049536 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Nov 1 01:50:22.049561 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Nov 1 01:50:22.049573 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 01:50:22.049586 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 01:50:22.049598 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Nov 1 01:50:22.049610 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Nov 1 01:50:22.049623 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Nov 1 01:50:22.049641 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Nov 1 01:50:22.049654 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Nov 1 01:50:22.049666 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Nov 1 01:50:22.049678 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Nov 1 01:50:22.049691 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Nov 1 01:50:22.049703 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Nov 1 01:50:22.049715 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Nov 1 01:50:22.049727 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Nov 1 01:50:22.049740 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Nov 1 01:50:22.049752 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Nov 1 01:50:22.049769 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Nov 1 01:50:22.049782 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Nov 1 01:50:22.049795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Nov 1 01:50:22.049807 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Nov 1 01:50:22.049820 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Nov 1 01:50:22.049832 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Nov 1 01:50:22.049845 kernel: Zone ranges:
Nov 1 01:50:22.049858 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 01:50:22.049870 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Nov 1 01:50:22.049887 kernel: Normal empty
Nov 1 01:50:22.049900 kernel: Movable zone start for each node
Nov 1 01:50:22.049912 kernel: Early memory node ranges
Nov 1 01:50:22.049925 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Nov 1 01:50:22.049937 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Nov 1 01:50:22.049949 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Nov 1 01:50:22.049962 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 01:50:22.049974 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 1 01:50:22.049987 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Nov 1 01:50:22.049999 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 01:50:22.050029 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 01:50:22.050042 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 01:50:22.050055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 01:50:22.050067 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 01:50:22.050080 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 01:50:22.050092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 01:50:22.050105 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 01:50:22.050117 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 01:50:22.050129 kernel: TSC deadline timer available
Nov 1 01:50:22.050148 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Nov 1 01:50:22.050161 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 01:50:22.050173 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Nov 1 01:50:22.050186 kernel: Booting paravirtualized kernel on KVM
Nov 1 01:50:22.050198 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 01:50:22.050211 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Nov 1 01:50:22.050224 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Nov 1 01:50:22.050236 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Nov 1 01:50:22.050249 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Nov 1 01:50:22.050266 kernel: kvm-guest: PV spinlocks enabled
Nov 1 01:50:22.050279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 01:50:22.050293 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 01:50:22.050306 kernel: random: crng init done
Nov 1 01:50:22.050318 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 01:50:22.050331 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 01:50:22.050343 kernel: Fallback order for Node 0: 0
Nov 1 01:50:22.050356 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Nov 1 01:50:22.050373 kernel: Policy zone: DMA32
Nov 1 01:50:22.050386 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 01:50:22.050399 kernel: software IO TLB: area num 16.
Nov 1 01:50:22.050411 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 194824K reserved, 0K cma-reserved)
Nov 1 01:50:22.050424 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Nov 1 01:50:22.050437 kernel: Kernel/User page tables isolation: enabled
Nov 1 01:50:22.050449 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 01:50:22.050462 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 01:50:22.050474 kernel: Dynamic Preempt: voluntary
Nov 1 01:50:22.050492 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 01:50:22.050505 kernel: rcu: RCU event tracing is enabled.
Nov 1 01:50:22.050518 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Nov 1 01:50:22.050531 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 01:50:22.050556 kernel: Rude variant of Tasks RCU enabled.
Nov 1 01:50:22.050585 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 01:50:22.050598 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 01:50:22.050612 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Nov 1 01:50:22.050625 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Nov 1 01:50:22.050638 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 01:50:22.050651 kernel: Console: colour VGA+ 80x25
Nov 1 01:50:22.050663 kernel: printk: console [tty0] enabled
Nov 1 01:50:22.050682 kernel: printk: console [ttyS0] enabled
Nov 1 01:50:22.050695 kernel: ACPI: Core revision 20230628
Nov 1 01:50:22.050708 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 01:50:22.050721 kernel: x2apic enabled
Nov 1 01:50:22.050735 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 01:50:22.050753 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Nov 1 01:50:22.050766 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Nov 1 01:50:22.050780 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 01:50:22.050793 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Nov 1 01:50:22.050806 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Nov 1 01:50:22.050819 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 01:50:22.050832 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 01:50:22.050845 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 01:50:22.050859 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Nov 1 01:50:22.050872 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 01:50:22.050890 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 01:50:22.050903 kernel: MDS: Mitigation: Clear CPU buffers
Nov 1 01:50:22.050916 kernel: MMIO Stale Data: Unknown: No mitigations
Nov 1 01:50:22.050929 kernel: SRBDS: Unknown: Dependent on hypervisor status
Nov 1 01:50:22.050942 kernel: active return thunk: its_return_thunk
Nov 1 01:50:22.050954 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 01:50:22.050968 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 01:50:22.050981 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 01:50:22.050993 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 01:50:22.051006 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 01:50:22.051044 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Nov 1 01:50:22.051064 kernel: Freeing SMP alternatives memory: 32K
Nov 1 01:50:22.051077 kernel: pid_max: default: 32768 minimum: 301
Nov 1 01:50:22.051090 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 01:50:22.051103 kernel: landlock: Up and running.
Nov 1 01:50:22.051116 kernel: SELinux: Initializing.
Nov 1 01:50:22.051129 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 01:50:22.051142 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Nov 1 01:50:22.051155 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Nov 1 01:50:22.051169 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:50:22.051183 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:50:22.051201 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Nov 1 01:50:22.051215 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Nov 1 01:50:22.051228 kernel: signal: max sigframe size: 1776
Nov 1 01:50:22.051241 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 01:50:22.051254 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 01:50:22.051268 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 01:50:22.051281 kernel: smp: Bringing up secondary CPUs ...
Nov 1 01:50:22.051294 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 01:50:22.051307 kernel: .... node #0, CPUs: #1
Nov 1 01:50:22.051326 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Nov 1 01:50:22.051339 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 01:50:22.051352 kernel: smpboot: Max logical packages: 16
Nov 1 01:50:22.051365 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Nov 1 01:50:22.051378 kernel: devtmpfs: initialized
Nov 1 01:50:22.051391 kernel: x86/mm: Memory block size: 128MB
Nov 1 01:50:22.051404 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 01:50:22.051418 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Nov 1 01:50:22.051431 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 01:50:22.051449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 01:50:22.051462 kernel: audit: initializing netlink subsys (disabled)
Nov 1 01:50:22.051476 kernel: audit: type=2000 audit(1761961820.122:1): state=initialized audit_enabled=0 res=1
Nov 1 01:50:22.051488 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 01:50:22.051502 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 01:50:22.051515 kernel: cpuidle: using governor menu
Nov 1 01:50:22.051528 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 01:50:22.051550 kernel: dca service started, version 1.12.1
Nov 1 01:50:22.051566 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Nov 1 01:50:22.051585 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Nov 1 01:50:22.051598 kernel: PCI: Using configuration type 1 for base access
Nov 1 01:50:22.051611 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 01:50:22.051624 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 01:50:22.051638 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 01:50:22.051651 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 01:50:22.051664 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 01:50:22.051677 kernel: ACPI: Added _OSI(Module Device)
Nov 1 01:50:22.051690 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 01:50:22.051708 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 01:50:22.051721 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 01:50:22.051734 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 01:50:22.051747 kernel: ACPI: Interpreter enabled
Nov 1 01:50:22.051760 kernel: ACPI: PM: (supports S0 S5)
Nov 1 01:50:22.051774 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 01:50:22.051787 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 01:50:22.051800 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 01:50:22.051813 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 01:50:22.051831 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 01:50:22.052113 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 01:50:22.052304 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 1 01:50:22.052479 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 1 01:50:22.052499 kernel: PCI host bridge to bus 0000:00
Nov 1 01:50:22.052694 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 01:50:22.052856 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 01:50:22.053042 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 01:50:22.053206 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Nov 1 01:50:22.053361 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 1 01:50:22.053520 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Nov 1 01:50:22.053689 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 01:50:22.053890 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Nov 1 01:50:22.054145 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Nov 1 01:50:22.054324 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Nov 1 01:50:22.054497 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Nov 1 01:50:22.054685 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Nov 1 01:50:22.054859 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 01:50:22.055067 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.055244 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Nov 1 01:50:22.055440 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.055631 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Nov 1 01:50:22.055819 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.055993 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Nov 1 01:50:22.056854 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.057378 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Nov 1 01:50:22.057591 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.057767 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Nov 1 01:50:22.057947 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.058145 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Nov 1 01:50:22.058326 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.058498 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Nov 1 01:50:22.060862 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Nov 1 01:50:22.061088 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Nov 1 01:50:22.061280 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Nov 1 01:50:22.061455 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Nov 1 01:50:22.061646 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Nov 1 01:50:22.061817 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Nov 1 01:50:22.061989 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Nov 1 01:50:22.063622 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Nov 1 01:50:22.063807 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Nov 1 01:50:22.063984 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Nov 1 01:50:22.064176 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Nov 1 01:50:22.064366 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Nov 1 01:50:22.064551 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 01:50:22.064743 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Nov 1 01:50:22.066116 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Nov 1 01:50:22.066321 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Nov 1 01:50:22.066509 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Nov 1 01:50:22.066698 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Nov 1 01:50:22.066897 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Nov 1 01:50:22.068128 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Nov 1 01:50:22.068325 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 01:50:22.068501 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 01:50:22.068690 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:50:22.068878 kernel: pci_bus 0000:02: extended config space not accessible
Nov 1 01:50:22.069936 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Nov 1 01:50:22.070166 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Nov 1 01:50:22.070358 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 01:50:22.070553 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 01:50:22.070753 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Nov 1 01:50:22.070935 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Nov 1 01:50:22.073175 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 01:50:22.073359 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 01:50:22.073535 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:50:22.073743 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Nov 1 01:50:22.073935 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Nov 1 01:50:22.074156 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 01:50:22.074329 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 01:50:22.074500 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:50:22.074686 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 01:50:22.074860 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 01:50:22.075062 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:50:22.075252 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 01:50:22.075427 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 01:50:22.075617 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:50:22.075795 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 01:50:22.075968 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 01:50:22.077205 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:50:22.077389 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 01:50:22.077579 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 01:50:22.077762 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:50:22.077939 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 01:50:22.078183 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 01:50:22.078354 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:50:22.078375 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 01:50:22.078390 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 01:50:22.078403 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 01:50:22.078417 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 01:50:22.078430 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 01:50:22.078451 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 01:50:22.078465 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 01:50:22.078478 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 01:50:22.078492 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 01:50:22.078505 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 01:50:22.078518 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 01:50:22.078532 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 01:50:22.078558 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 01:50:22.078572 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 01:50:22.078592 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 01:50:22.078605 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 01:50:22.078618 kernel: iommu: Default domain type: Translated
Nov 1 01:50:22.078632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 01:50:22.078645 kernel: PCI: Using ACPI for IRQ routing
Nov 1 01:50:22.078658 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 01:50:22.078672 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Nov 1 01:50:22.078685 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Nov 1 01:50:22.078855 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 01:50:22.080077 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 01:50:22.080258 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 01:50:22.080280 kernel: vgaarb: loaded
Nov 1 01:50:22.080294 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 01:50:22.080307 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 01:50:22.080320 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 01:50:22.080334 kernel: pnp: PnP ACPI init
Nov 1 01:50:22.080515 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Nov 1 01:50:22.080557 kernel: pnp: PnP ACPI: found 5 devices
Nov 1 01:50:22.080572 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 01:50:22.080586 kernel: NET: Registered PF_INET protocol family
Nov 1 01:50:22.080599 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 01:50:22.080613 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Nov 1 01:50:22.080626 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 01:50:22.080640 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 01:50:22.080653 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Nov 1 01:50:22.080672 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Nov 1 01:50:22.080686 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 01:50:22.080700 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Nov 1 01:50:22.080713 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 01:50:22.080727 kernel: NET: Registered PF_XDP protocol family
Nov 1 01:50:22.080903 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Nov 1 01:50:22.081128 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Nov 1 01:50:22.081301 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Nov 1 01:50:22.081481 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Nov 1 01:50:22.081668 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Nov 1 01:50:22.081842 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Nov 1 01:50:22.084006 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Nov 1 01:50:22.084292 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Nov 1 01:50:22.084473 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Nov 1 01:50:22.084678 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Nov 1 01:50:22.084857 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Nov 1 01:50:22.085086 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Nov 1 01:50:22.085262 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Nov 1 01:50:22.085434 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Nov 1 01:50:22.085620 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Nov 1 01:50:22.085792 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Nov 1 01:50:22.085972 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Nov 1 01:50:22.086207 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Nov 1 01:50:22.086382 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Nov 1 01:50:22.086566 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Nov 1 01:50:22.086740 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Nov 1 01:50:22.086915 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:50:22.087113 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Nov 1 01:50:22.087289 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Nov 1 01:50:22.087462 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Nov 1 01:50:22.087651 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:50:22.087835 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Nov 1 01:50:22.088030 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Nov 1 01:50:22.088211 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Nov 1 01:50:22.088393 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:50:22.088581 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Nov 1 01:50:22.088764 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Nov 1 01:50:22.088948 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Nov 1 01:50:22.089179 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:50:22.089352 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Nov 1 01:50:22.089523 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Nov 1 01:50:22.089705 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Nov 1 01:50:22.089875 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:50:22.091114 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Nov 1 01:50:22.091298 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Nov 1 01:50:22.091482 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Nov 1 01:50:22.091671 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:50:22.091845 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Nov 1 01:50:22.092027 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Nov 1 01:50:22.092202 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Nov 1 01:50:22.092386 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:50:22.092574 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Nov 1 01:50:22.097311 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Nov 1 01:50:22.097515 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Nov 1 01:50:22.097707 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:50:22.097873 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 01:50:22.098045 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 01:50:22.098204 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 01:50:22.098359 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Nov 1 01:50:22.098523 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Nov 1 01:50:22.098691 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Nov 1 01:50:22.098868 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Nov 1 01:50:22.104081 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Nov 1 01:50:22.104457 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Nov 1 01:50:22.104674 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Nov 1 01:50:22.104867 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Nov 1 01:50:22.105123 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Nov 1 01:50:22.105291 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Nov 1 01:50:22.105478 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Nov 1 01:50:22.105660 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Nov 1 01:50:22.105837 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Nov 1 01:50:22.107805 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Nov 1 01:50:22.107999 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Nov 1 01:50:22.108251 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Nov 1 01:50:22.108434 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Nov 1 01:50:22.108613 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Nov 1 01:50:22.108776 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Nov 1 01:50:22.108946 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Nov 1 01:50:22.109143 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Nov 1 01:50:22.109312 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Nov 1 01:50:22.109481 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Nov 1 01:50:22.109655 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Nov 1 01:50:22.109816 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Nov 1 01:50:22.109988 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Nov 1 01:50:22.110241 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Nov 1 01:50:22.110403 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Nov 1 01:50:22.110432 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 1 01:50:22.110447 kernel: PCI: CLS 0 bytes, default 64
Nov 1 01:50:22.110462 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 01:50:22.110476 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Nov 1 01:50:22.110490 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 01:50:22.110505 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Nov 1 01:50:22.110519 kernel: Initialise system trusted keyrings Nov 1 01:50:22.110533 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Nov 1 01:50:22.110564 kernel: Key type asymmetric registered Nov 1 01:50:22.110580 kernel: Asymmetric key parser 'x509' registered Nov 1 01:50:22.110593 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 01:50:22.110608 kernel: io scheduler mq-deadline registered Nov 1 01:50:22.110621 kernel: io scheduler kyber registered Nov 1 01:50:22.110635 kernel: io scheduler bfq registered Nov 1 01:50:22.110811 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Nov 1 01:50:22.110989 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Nov 1 01:50:22.111238 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.111420 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Nov 1 01:50:22.111607 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Nov 1 01:50:22.111778 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.111952 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Nov 1 01:50:22.112213 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Nov 1 01:50:22.112395 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.112591 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Nov 1 01:50:22.112762 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Nov 1 01:50:22.112933 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.113160 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Nov 1 01:50:22.113330 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Nov 1 01:50:22.113498 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.113690 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Nov 1 01:50:22.113863 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Nov 1 01:50:22.114066 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.114240 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Nov 1 01:50:22.114414 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Nov 1 01:50:22.114597 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.114777 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Nov 1 01:50:22.114949 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Nov 1 01:50:22.115149 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Nov 1 01:50:22.115172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 01:50:22.115187 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 01:50:22.115202 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 01:50:22.115223 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 01:50:22.115245 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 01:50:22.115259 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 01:50:22.115274 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 01:50:22.115288 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 01:50:22.115469 kernel: rtc_cmos 00:03: RTC can wake from S4 Nov 1 01:50:22.115492 kernel: input: AT 
Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Nov 1 01:50:22.115662 kernel: rtc_cmos 00:03: registered as rtc0 Nov 1 01:50:22.115831 kernel: rtc_cmos 00:03: setting system clock to 2025-11-01T01:50:21 UTC (1761961821) Nov 1 01:50:22.115990 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Nov 1 01:50:22.116023 kernel: intel_pstate: CPU model not supported Nov 1 01:50:22.116054 kernel: NET: Registered PF_INET6 protocol family Nov 1 01:50:22.116081 kernel: Segment Routing with IPv6 Nov 1 01:50:22.116095 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 01:50:22.116109 kernel: NET: Registered PF_PACKET protocol family Nov 1 01:50:22.116122 kernel: Key type dns_resolver registered Nov 1 01:50:22.116148 kernel: IPI shorthand broadcast: enabled Nov 1 01:50:22.116170 kernel: sched_clock: Marking stable (1169004543, 238965517)->(1653502038, -245531978) Nov 1 01:50:22.116184 kernel: registered taskstats version 1 Nov 1 01:50:22.116198 kernel: Loading compiled-in X.509 certificates Nov 1 01:50:22.116213 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 01:50:22.116226 kernel: Key type .fscrypt registered Nov 1 01:50:22.116240 kernel: Key type fscrypt-provisioning registered Nov 1 01:50:22.116254 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 01:50:22.116268 kernel: ima: Allocated hash algorithm: sha1 Nov 1 01:50:22.116282 kernel: ima: No architecture policies found Nov 1 01:50:22.116301 kernel: clk: Disabling unused clocks Nov 1 01:50:22.116315 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 01:50:22.116330 kernel: Write protecting the kernel read-only data: 36864k Nov 1 01:50:22.116344 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 01:50:22.116357 kernel: Run /init as init process Nov 1 01:50:22.116371 kernel: with arguments: Nov 1 01:50:22.116385 kernel: /init Nov 1 01:50:22.116399 kernel: with environment: Nov 1 01:50:22.116413 kernel: HOME=/ Nov 1 01:50:22.116426 kernel: TERM=linux Nov 1 01:50:22.116448 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 01:50:22.116466 systemd[1]: Detected virtualization kvm. Nov 1 01:50:22.116481 systemd[1]: Detected architecture x86-64. Nov 1 01:50:22.116495 systemd[1]: Running in initrd. Nov 1 01:50:22.116510 systemd[1]: No hostname configured, using default hostname. Nov 1 01:50:22.116524 systemd[1]: Hostname set to . Nov 1 01:50:22.116549 systemd[1]: Initializing machine ID from VM UUID. Nov 1 01:50:22.116572 systemd[1]: Queued start job for default target initrd.target. Nov 1 01:50:22.116587 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 01:50:22.116602 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 01:50:22.116617 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Nov 1 01:50:22.116633 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 01:50:22.116648 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 01:50:22.116668 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 01:50:22.116689 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 01:50:22.116705 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 01:50:22.116720 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 01:50:22.116735 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 01:50:22.116749 systemd[1]: Reached target paths.target - Path Units. Nov 1 01:50:22.116764 systemd[1]: Reached target slices.target - Slice Units. Nov 1 01:50:22.116779 systemd[1]: Reached target swap.target - Swaps. Nov 1 01:50:22.116794 systemd[1]: Reached target timers.target - Timer Units. Nov 1 01:50:22.116814 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 01:50:22.116829 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 01:50:22.116844 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 01:50:22.116858 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 1 01:50:22.116873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 01:50:22.116888 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 01:50:22.116903 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 01:50:22.116918 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 01:50:22.116933 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Nov 1 01:50:22.116953 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 01:50:22.116968 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 01:50:22.116983 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 01:50:22.116998 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 01:50:22.117060 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 01:50:22.117078 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:50:22.117139 systemd-journald[202]: Collecting audit messages is disabled. Nov 1 01:50:22.117178 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 01:50:22.117194 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 01:50:22.117209 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 01:50:22.117230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 01:50:22.117246 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 01:50:22.117260 kernel: Bridge firewalling registered Nov 1 01:50:22.117275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 01:50:22.117290 systemd-journald[202]: Journal started Nov 1 01:50:22.117322 systemd-journald[202]: Runtime Journal (/run/log/journal/d2d397ef342742df97f2a9ff53319ed7) is 4.7M, max 38.0M, 33.2M free. Nov 1 01:50:22.059006 systemd-modules-load[203]: Inserted module 'overlay' Nov 1 01:50:22.177353 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 01:50:22.107046 systemd-modules-load[203]: Inserted module 'br_netfilter' Nov 1 01:50:22.178424 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 01:50:22.179914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 01:50:22.192280 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:50:22.195200 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 01:50:22.200271 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 01:50:22.205172 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 01:50:22.230708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 01:50:22.236819 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 01:50:22.239948 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 01:50:22.249324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 01:50:22.251619 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:50:22.255218 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 01:50:22.285850 dracut-cmdline[236]: dracut-dracut-053 Nov 1 01:50:22.288442 systemd-resolved[234]: Positive Trust Anchors: Nov 1 01:50:22.288488 systemd-resolved[234]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 01:50:22.288554 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 01:50:22.294318 systemd-resolved[234]: Defaulting to hostname 'linux'. Nov 1 01:50:22.297391 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 01:50:22.300083 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 01:50:22.302374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 01:50:22.402135 kernel: SCSI subsystem initialized Nov 1 01:50:22.415137 kernel: Loading iSCSI transport class v2.0-870. Nov 1 01:50:22.429063 kernel: iscsi: registered transport (tcp) Nov 1 01:50:22.455522 kernel: iscsi: registered transport (qla4xxx) Nov 1 01:50:22.455626 kernel: QLogic iSCSI HBA Driver Nov 1 01:50:22.509771 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 01:50:22.518217 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... 
Nov 1 01:50:22.550114 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 01:50:22.550174 kernel: device-mapper: uevent: version 1.0.3 Nov 1 01:50:22.554041 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 01:50:22.601061 kernel: raid6: sse2x4 gen() 14025 MB/s Nov 1 01:50:22.619053 kernel: raid6: sse2x2 gen() 9648 MB/s Nov 1 01:50:22.637671 kernel: raid6: sse2x1 gen() 10212 MB/s Nov 1 01:50:22.637721 kernel: raid6: using algorithm sse2x4 gen() 14025 MB/s Nov 1 01:50:22.656783 kernel: raid6: .... xor() 7718 MB/s, rmw enabled Nov 1 01:50:22.656844 kernel: raid6: using ssse3x2 recovery algorithm Nov 1 01:50:22.683172 kernel: xor: automatically using best checksumming function avx Nov 1 01:50:22.882130 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 01:50:22.898883 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 01:50:22.911338 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 01:50:22.927823 systemd-udevd[419]: Using default interface naming scheme 'v255'. Nov 1 01:50:22.935074 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 01:50:22.943217 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 01:50:22.965042 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation Nov 1 01:50:23.006431 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 01:50:23.019360 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 01:50:23.133617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 01:50:23.143428 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 01:50:23.168699 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Nov 1 01:50:23.173700 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 01:50:23.174484 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 01:50:23.178098 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 01:50:23.188262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 01:50:23.204730 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 01:50:23.271535 kernel: ACPI: bus type USB registered Nov 1 01:50:23.271598 kernel: usbcore: registered new interface driver usbfs Nov 1 01:50:23.278029 kernel: usbcore: registered new interface driver hub Nov 1 01:50:23.282035 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Nov 1 01:50:23.286680 kernel: usbcore: registered new device driver usb Nov 1 01:50:23.286715 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 01:50:23.290668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 01:50:23.291807 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:50:23.296470 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Nov 1 01:50:23.297196 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:50:23.299061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 01:50:23.315362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 01:50:23.315394 kernel: GPT:17805311 != 125829119 Nov 1 01:50:23.315413 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 01:50:23.315431 kernel: GPT:17805311 != 125829119 Nov 1 01:50:23.315448 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 1 01:50:23.315466 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:50:23.299243 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 01:50:23.302160 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:50:23.324260 kernel: AVX version of gcm_enc/dec engaged. Nov 1 01:50:23.324306 kernel: AES CTR mode by8 optimization enabled Nov 1 01:50:23.329680 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 01:50:23.362040 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 01:50:23.362334 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Nov 1 01:50:23.371070 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Nov 1 01:50:23.376049 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Nov 1 01:50:23.376308 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Nov 1 01:50:23.376536 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Nov 1 01:50:23.376752 kernel: hub 1-0:1.0: USB hub found Nov 1 01:50:23.382349 kernel: hub 1-0:1.0: 4 ports detected Nov 1 01:50:23.382669 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (472) Nov 1 01:50:23.386034 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Nov 1 01:50:23.386288 kernel: hub 2-0:1.0: USB hub found Nov 1 01:50:23.386512 kernel: hub 2-0:1.0: 4 ports detected Nov 1 01:50:23.410046 kernel: libata version 3.00 loaded. 
Nov 1 01:50:23.416037 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (478) Nov 1 01:50:23.442040 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 01:50:23.442393 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 01:50:23.444033 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Nov 1 01:50:23.444262 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 01:50:23.450030 kernel: scsi host0: ahci Nov 1 01:50:23.451044 kernel: scsi host1: ahci Nov 1 01:50:23.454047 kernel: scsi host2: ahci Nov 1 01:50:23.455043 kernel: scsi host3: ahci Nov 1 01:50:23.457059 kernel: scsi host4: ahci Nov 1 01:50:23.459043 kernel: scsi host5: ahci Nov 1 01:50:23.459265 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Nov 1 01:50:23.459287 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Nov 1 01:50:23.459305 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Nov 1 01:50:23.459323 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Nov 1 01:50:23.459340 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Nov 1 01:50:23.459367 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Nov 1 01:50:23.459950 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 01:50:23.536759 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Nov 1 01:50:23.538963 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 01:50:23.547589 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 01:50:23.554948 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Nov 1 01:50:23.561966 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 01:50:23.569372 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 01:50:23.573216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 01:50:23.581669 disk-uuid[562]: Primary Header is updated. Nov 1 01:50:23.581669 disk-uuid[562]: Secondary Entries is updated. Nov 1 01:50:23.581669 disk-uuid[562]: Secondary Header is updated. Nov 1 01:50:23.587043 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:50:23.598159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:50:23.608105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 01:50:23.612759 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 01:50:23.630235 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Nov 1 01:50:23.771256 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.771332 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.778032 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.778081 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.778103 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.779331 kernel: ata3: SATA link down (SStatus 0 SControl 300) Nov 1 01:50:23.809040 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 01:50:23.819302 kernel: usbcore: registered new interface driver usbhid Nov 1 01:50:23.819353 kernel: usbhid: USB HID core driver Nov 1 01:50:23.827725 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Nov 1 01:50:23.827765 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Nov 1 01:50:24.605557 kernel: vda: vda1 vda2 vda3 vda4 
vda6 vda7 vda9 Nov 1 01:50:24.605632 disk-uuid[563]: The operation has completed successfully. Nov 1 01:50:24.670507 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 01:50:24.670690 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 01:50:24.697376 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 1 01:50:24.702246 sh[585]: Success Nov 1 01:50:24.720399 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Nov 1 01:50:24.784662 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 01:50:24.794150 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 01:50:24.797272 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 01:50:24.832073 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 01:50:24.832168 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:50:24.832189 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 01:50:24.832208 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 01:50:24.835038 kernel: BTRFS info (device dm-0): using free space tree Nov 1 01:50:24.845849 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 01:50:24.848343 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 01:50:24.855349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 01:50:24.859232 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Nov 1 01:50:24.878858 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:50:24.878927 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 01:50:24.878948 kernel: BTRFS info (device vda6): using free space tree Nov 1 01:50:24.884044 kernel: BTRFS info (device vda6): auto enabling async discard Nov 1 01:50:24.896149 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 1 01:50:24.898807 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 01:50:24.912130 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 01:50:24.920911 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 01:50:25.047990 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 01:50:25.057191 ignition[679]: Ignition 2.19.0 Nov 1 01:50:25.057561 ignition[679]: Stage: fetch-offline Nov 1 01:50:25.057644 ignition[679]: no configs at "/usr/lib/ignition/base.d" Nov 1 01:50:25.057664 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Nov 1 01:50:25.057836 ignition[679]: parsed url from cmdline: "" Nov 1 01:50:25.062261 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 01:50:25.057843 ignition[679]: no config URL provided Nov 1 01:50:25.065300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 01:50:25.057853 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 01:50:25.057869 ignition[679]: no config at "/usr/lib/ignition/user.ign" Nov 1 01:50:25.057878 ignition[679]: failed to fetch config: resource requires networking Nov 1 01:50:25.058159 ignition[679]: Ignition finished successfully Nov 1 01:50:25.089379 systemd-networkd[772]: lo: Link UP Nov 1 01:50:25.089398 systemd-networkd[772]: lo: Gained carrier Nov 1 01:50:25.091800 systemd-networkd[772]: Enumeration completed Nov 1 01:50:25.092344 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 01:50:25.092350 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 01:50:25.093639 systemd-networkd[772]: eth0: Link UP Nov 1 01:50:25.093644 systemd-networkd[772]: eth0: Gained carrier Nov 1 01:50:25.093656 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 01:50:25.094007 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 01:50:25.096662 systemd[1]: Reached target network.target - Network. Nov 1 01:50:25.109337 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Nov 1 01:50:25.117105 systemd-networkd[772]: eth0: DHCPv4 address 10.230.17.2/30, gateway 10.230.17.1 acquired from 10.230.17.1
Nov 1 01:50:25.130867 ignition[776]: Ignition 2.19.0
Nov 1 01:50:25.131933 ignition[776]: Stage: fetch
Nov 1 01:50:25.132237 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:25.132258 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:25.132402 ignition[776]: parsed url from cmdline: ""
Nov 1 01:50:25.132409 ignition[776]: no config URL provided
Nov 1 01:50:25.132419 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Nov 1 01:50:25.132436 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Nov 1 01:50:25.132694 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Nov 1 01:50:25.132876 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Nov 1 01:50:25.132925 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Nov 1 01:50:25.150084 ignition[776]: GET result: OK
Nov 1 01:50:25.150959 ignition[776]: parsing config with SHA512: 20a19af91ec6fc4099c5db64bc555c241cd83224745c1b4ee3f97a9ed69b412377c990b586ae2a091adfe5e6210d771d2c748190fe63fab8e1fc0f734f9c7f86
Nov 1 01:50:25.157184 unknown[776]: fetched base config from "system"
Nov 1 01:50:25.157198 unknown[776]: fetched base config from "system"
Nov 1 01:50:25.157755 ignition[776]: fetch: fetch complete
Nov 1 01:50:25.157207 unknown[776]: fetched user config from "openstack"
Nov 1 01:50:25.157764 ignition[776]: fetch: fetch passed
Nov 1 01:50:25.159838 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 1 01:50:25.157837 ignition[776]: Ignition finished successfully
Nov 1 01:50:25.169265 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 1 01:50:25.189704 ignition[783]: Ignition 2.19.0
Nov 1 01:50:25.189719 ignition[783]: Stage: kargs
Nov 1 01:50:25.189981 ignition[783]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:25.190001 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:25.191415 ignition[783]: kargs: kargs passed
Nov 1 01:50:25.193368 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 1 01:50:25.191509 ignition[783]: Ignition finished successfully
Nov 1 01:50:25.199197 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 1 01:50:25.223469 ignition[789]: Ignition 2.19.0
Nov 1 01:50:25.223503 ignition[789]: Stage: disks
Nov 1 01:50:25.223749 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:25.226430 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 1 01:50:25.223770 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:25.228357 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 1 01:50:25.224890 ignition[789]: disks: disks passed
Nov 1 01:50:25.230075 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 1 01:50:25.224961 ignition[789]: Ignition finished successfully
Nov 1 01:50:25.231768 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:50:25.233081 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:50:25.234689 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:50:25.241294 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 1 01:50:25.266108 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Nov 1 01:50:25.270554 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 1 01:50:25.281246 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 1 01:50:25.402038 kernel: EXT4-fs (vda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none.
Nov 1 01:50:25.403399 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 1 01:50:25.404786 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 1 01:50:25.413185 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:50:25.416146 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 1 01:50:25.417265 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 1 01:50:25.420177 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Nov 1 01:50:25.422118 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 1 01:50:25.422162 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:50:25.434094 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805)
Nov 1 01:50:25.436488 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 1 01:50:25.440040 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:50:25.440079 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:50:25.440099 kernel: BTRFS info (device vda6): using free space tree
Nov 1 01:50:25.452355 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 1 01:50:25.460051 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 01:50:25.464226 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:50:25.534036 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Nov 1 01:50:25.541820 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Nov 1 01:50:25.552103 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Nov 1 01:50:25.558453 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 1 01:50:25.665814 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 1 01:50:25.672197 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 1 01:50:25.685234 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 1 01:50:25.698071 kernel: BTRFS info (device vda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:50:25.720078 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 1 01:50:25.736381 ignition[922]: INFO : Ignition 2.19.0
Nov 1 01:50:25.736381 ignition[922]: INFO : Stage: mount
Nov 1 01:50:25.738296 ignition[922]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:25.738296 ignition[922]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:25.740200 ignition[922]: INFO : mount: mount passed
Nov 1 01:50:25.740200 ignition[922]: INFO : Ignition finished successfully
Nov 1 01:50:25.740327 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 1 01:50:25.826801 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 1 01:50:26.407274 systemd-networkd[772]: eth0: Gained IPv6LL
Nov 1 01:50:27.915845 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8440:24:19ff:fee6:1102/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8440:24:19ff:fee6:1102/64 assigned by NDisc.
Nov 1 01:50:27.915863 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 01:50:32.596284 coreos-metadata[807]: Nov 01 01:50:32.596 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:50:32.620114 coreos-metadata[807]: Nov 01 01:50:32.620 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 01:50:32.635738 coreos-metadata[807]: Nov 01 01:50:32.635 INFO Fetch successful
Nov 1 01:50:32.637095 coreos-metadata[807]: Nov 01 01:50:32.637 INFO wrote hostname srv-d9muf.gb1.brightbox.com to /sysroot/etc/hostname
Nov 1 01:50:32.639563 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Nov 1 01:50:32.639832 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Nov 1 01:50:32.651159 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 1 01:50:32.667306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 1 01:50:32.684034 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Nov 1 01:50:32.684113 kernel: BTRFS info (device vda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571
Nov 1 01:50:32.686234 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 01:50:32.686269 kernel: BTRFS info (device vda6): using free space tree
Nov 1 01:50:32.692076 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 1 01:50:32.695150 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 01:50:32.724766 ignition[957]: INFO : Ignition 2.19.0
Nov 1 01:50:32.724766 ignition[957]: INFO : Stage: files
Nov 1 01:50:32.726689 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:32.726689 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:32.728488 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 01:50:32.729838 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 01:50:32.729838 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 01:50:32.734413 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 01:50:32.735860 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 01:50:32.737030 unknown[957]: wrote ssh authorized keys file for user: core
Nov 1 01:50:32.738060 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 01:50:32.739155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:50:32.739155 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 01:50:32.918631 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 01:50:33.266099 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:50:33.285421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 01:50:33.690706 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 01:50:35.575853 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 01:50:35.578374 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:50:35.590176 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 01:50:35.590176 ignition[957]: INFO : files: files passed
Nov 1 01:50:35.590176 ignition[957]: INFO : Ignition finished successfully
Nov 1 01:50:35.581676 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 01:50:35.592273 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 01:50:35.603467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 01:50:35.608495 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 01:50:35.609341 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 01:50:35.626935 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:50:35.626935 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:50:35.631394 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 01:50:35.633402 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:50:35.635340 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 01:50:35.641294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 01:50:35.686381 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 01:50:35.686552 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 01:50:35.688409 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 01:50:35.689754 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 01:50:35.691391 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 01:50:35.708408 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 01:50:35.726343 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:50:35.733225 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 01:50:35.757503 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:50:35.759934 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:50:35.760867 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 01:50:35.762545 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 01:50:35.762735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 01:50:35.764572 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 01:50:35.765601 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 01:50:35.767068 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 01:50:35.768573 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 01:50:35.769993 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 01:50:35.771591 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 01:50:35.773190 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 01:50:35.774790 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 01:50:35.776327 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 01:50:35.777843 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 01:50:35.779299 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 01:50:35.779513 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 01:50:35.781331 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:50:35.782376 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:50:35.783782 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 01:50:35.783968 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:50:35.785436 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 01:50:35.785691 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 01:50:35.787578 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 01:50:35.787818 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 01:50:35.789443 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 01:50:35.789619 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 01:50:35.800264 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 01:50:35.805311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 01:50:35.808199 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 01:50:35.809350 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:50:35.811951 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 01:50:35.814305 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 01:50:35.825367 ignition[1009]: INFO : Ignition 2.19.0
Nov 1 01:50:35.829894 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 01:50:35.833102 ignition[1009]: INFO : Stage: umount
Nov 1 01:50:35.833102 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 01:50:35.833102 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Nov 1 01:50:35.833102 ignition[1009]: INFO : umount: umount passed
Nov 1 01:50:35.833102 ignition[1009]: INFO : Ignition finished successfully
Nov 1 01:50:35.830075 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 01:50:35.833835 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 01:50:35.834004 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 01:50:35.849186 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 01:50:35.852192 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 01:50:35.852392 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 01:50:35.853366 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 01:50:35.853435 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 01:50:35.854714 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 1 01:50:35.854785 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 1 01:50:35.856130 systemd[1]: Stopped target network.target - Network.
Nov 1 01:50:35.857549 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 01:50:35.857658 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 01:50:35.859092 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 01:50:35.860439 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 01:50:35.865149 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:50:35.866302 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 01:50:35.867958 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 01:50:35.869416 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 01:50:35.869498 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 01:50:35.870768 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 01:50:35.870839 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 01:50:35.872096 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 01:50:35.872180 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 01:50:35.873522 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 01:50:35.873605 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 01:50:35.875263 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 01:50:35.877236 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 01:50:35.879295 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 01:50:35.879486 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 01:50:35.881195 systemd-networkd[772]: eth0: DHCPv6 lease lost
Nov 1 01:50:35.884771 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 01:50:35.884980 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 01:50:35.893345 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 01:50:35.893575 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 01:50:35.897914 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 01:50:35.898220 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:50:35.899811 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 01:50:35.899905 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 01:50:35.906226 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 01:50:35.907032 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 01:50:35.907128 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 01:50:35.909501 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 01:50:35.909588 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:50:35.911484 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 01:50:35.911560 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:50:35.912407 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 01:50:35.912476 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:50:35.921339 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:50:35.941430 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 01:50:35.941700 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:50:35.943124 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 01:50:35.943197 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:50:35.944807 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 01:50:35.944869 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:50:35.946583 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 01:50:35.946656 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 01:50:35.948897 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 01:50:35.948970 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 01:50:35.950467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 01:50:35.950542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 01:50:35.958244 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 01:50:35.959067 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 01:50:35.959142 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:50:35.960727 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 01:50:35.960797 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:50:35.965321 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 01:50:35.965398 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:50:35.966226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 01:50:35.966318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:50:35.969045 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 01:50:35.969204 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 01:50:35.973145 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 01:50:35.973298 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 01:50:35.975705 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 01:50:35.982233 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 01:50:35.994939 systemd[1]: Switching root.
Nov 1 01:50:36.031788 systemd-journald[202]: Journal stopped
Nov 1 01:50:37.618072 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Nov 1 01:50:37.618243 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 01:50:37.618292 kernel: SELinux: policy capability open_perms=1
Nov 1 01:50:37.618314 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 01:50:37.618349 kernel: SELinux: policy capability always_check_network=0
Nov 1 01:50:37.618369 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 01:50:37.618400 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 01:50:37.618421 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 01:50:37.618446 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 01:50:37.618471 kernel: audit: type=1403 audit(1761961836.425:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 01:50:37.618502 systemd[1]: Successfully loaded SELinux policy in 57.762ms.
Nov 1 01:50:37.618537 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.102ms.
Nov 1 01:50:37.618567 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 01:50:37.618591 systemd[1]: Detected virtualization kvm.
Nov 1 01:50:37.618624 systemd[1]: Detected architecture x86-64.
Nov 1 01:50:37.618661 systemd[1]: Detected first boot.
Nov 1 01:50:37.618683 systemd[1]: Hostname set to .
Nov 1 01:50:37.618703 systemd[1]: Initializing machine ID from VM UUID.
Nov 1 01:50:37.618730 zram_generator::config[1055]: No configuration found.
Nov 1 01:50:37.618760 systemd[1]: Populated /etc with preset unit settings.
Nov 1 01:50:37.618783 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 01:50:37.618804 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 01:50:37.618836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 01:50:37.618865 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 01:50:37.618887 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 01:50:37.618924 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 01:50:37.618954 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 01:50:37.618976 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 01:50:37.618997 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 01:50:37.621093 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 01:50:37.621135 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 01:50:37.621174 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 01:50:37.621198 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 01:50:37.621220 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 01:50:37.621328 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 01:50:37.621363 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 01:50:37.621385 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 01:50:37.621406 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 01:50:37.621432 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 01:50:37.621454 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 01:50:37.621489 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 01:50:37.621512 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 01:50:37.621548 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 01:50:37.621571 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 01:50:37.621592 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 01:50:37.621613 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 01:50:37.621647 systemd[1]: Reached target swap.target - Swaps.
Nov 1 01:50:37.621670 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 01:50:37.621690 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 01:50:37.621711 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 01:50:37.621739 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 01:50:37.621772 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 01:50:37.621812 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 01:50:37.621834 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 01:50:37.621856 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 01:50:37.621876 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 01:50:37.621897 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:50:37.621918 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 01:50:37.621939 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 01:50:37.621960 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 01:50:37.621988 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 01:50:37.622041 systemd[1]: Reached target machines.target - Containers.
Nov 1 01:50:37.622066 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 01:50:37.622087 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:50:37.622115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 01:50:37.622137 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 01:50:37.622158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:50:37.622179 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 01:50:37.622199 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:50:37.622232 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 01:50:37.622271 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:50:37.622295 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 01:50:37.622316 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 01:50:37.622337 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 01:50:37.622359 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 01:50:37.622379 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 01:50:37.622399 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 01:50:37.622420 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 01:50:37.622453 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 01:50:37.622475 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 01:50:37.622496 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 01:50:37.622517 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 1 01:50:37.622544 systemd[1]: Stopped verity-setup.service.
Nov 1 01:50:37.622603 systemd-journald[1154]: Collecting audit messages is disabled.
Nov 1 01:50:37.622653 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:50:37.622689 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 01:50:37.622712 systemd-journald[1154]: Journal started
Nov 1 01:50:37.622745 systemd-journald[1154]: Runtime Journal (/run/log/journal/d2d397ef342742df97f2a9ff53319ed7) is 4.7M, max 38.0M, 33.2M free.
Nov 1 01:50:37.240610 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 01:50:37.631226 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 01:50:37.261253 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 01:50:37.262086 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 01:50:37.629102 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 01:50:37.630167 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 01:50:37.631026 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 01:50:37.632173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 01:50:37.633571 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 01:50:37.635213 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 01:50:37.637279 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 01:50:37.638742 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 01:50:37.638998 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 01:50:37.640646 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:50:37.640887 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:50:37.643477 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:50:37.643711 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:50:37.645581 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 01:50:37.646968 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 01:50:37.651053 kernel: loop: module loaded
Nov 1 01:50:37.649443 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 01:50:37.661352 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:50:37.663132 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:50:37.675727 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 01:50:37.692547 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 01:50:37.695161 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 01:50:37.695236 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 01:50:37.697800 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 01:50:37.704056 kernel: fuse: init (API version 7.39)
Nov 1 01:50:37.708341 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 01:50:37.719889 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 01:50:37.723339 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:50:37.727343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 01:50:37.733863 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 01:50:37.736136 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:50:37.746442 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 01:50:37.747436 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:50:37.750339 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 01:50:37.754204 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 01:50:37.765446 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 01:50:37.770527 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 01:50:37.772134 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 01:50:37.773526 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 01:50:37.776002 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 01:50:37.798205 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 01:50:37.833490 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 01:50:37.858742 systemd-journald[1154]: Time spent on flushing to /var/log/journal/d2d397ef342742df97f2a9ff53319ed7 is 134.116ms for 1136 entries.
Nov 1 01:50:37.858742 systemd-journald[1154]: System Journal (/var/log/journal/d2d397ef342742df97f2a9ff53319ed7) is 8.0M, max 584.8M, 576.8M free.
Nov 1 01:50:38.067648 systemd-journald[1154]: Received client request to flush runtime journal.
Nov 1 01:50:38.067750 kernel: loop0: detected capacity change from 0 to 140768
Nov 1 01:50:38.067813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 01:50:38.067853 kernel: ACPI: bus type drm_connector registered
Nov 1 01:50:38.067889 kernel: loop1: detected capacity change from 0 to 142488
Nov 1 01:50:37.877522 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 01:50:37.879471 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 01:50:37.915701 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 01:50:38.006605 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 01:50:38.011537 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:50:38.012678 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:50:38.022829 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 01:50:38.024106 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 01:50:38.027771 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 01:50:38.041498 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 01:50:38.074595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 01:50:38.075171 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Nov 1 01:50:38.075197 systemd-tmpfiles[1182]: ACLs are not supported, ignoring.
Nov 1 01:50:38.096846 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 01:50:38.112039 kernel: loop2: detected capacity change from 0 to 224512
Nov 1 01:50:38.138049 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 01:50:38.139986 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 01:50:38.177347 kernel: loop3: detected capacity change from 0 to 8
Nov 1 01:50:38.212058 kernel: loop4: detected capacity change from 0 to 140768
Nov 1 01:50:38.252064 kernel: loop5: detected capacity change from 0 to 142488
Nov 1 01:50:38.256071 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 01:50:38.268790 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 01:50:38.311039 kernel: loop6: detected capacity change from 0 to 224512
Nov 1 01:50:38.310805 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 1 01:50:38.310825 systemd-tmpfiles[1212]: ACLs are not supported, ignoring.
Nov 1 01:50:38.341364 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 01:50:38.355222 kernel: loop7: detected capacity change from 0 to 8
Nov 1 01:50:38.366719 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Nov 1 01:50:38.368295 (sd-merge)[1210]: Merged extensions into '/usr'.
Nov 1 01:50:38.379365 systemd[1]: Reloading requested from client PID 1181 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 01:50:38.379412 systemd[1]: Reloading...
Nov 1 01:50:38.541695 zram_generator::config[1238]: No configuration found.
Nov 1 01:50:38.606059 ldconfig[1176]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 01:50:38.784842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:50:38.852791 systemd[1]: Reloading finished in 472 ms.
Nov 1 01:50:38.890068 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 01:50:38.891542 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 01:50:38.908067 systemd[1]: Starting ensure-sysext.service...
Nov 1 01:50:38.912210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 01:50:38.925123 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 01:50:38.934349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 01:50:38.940213 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Nov 1 01:50:38.940251 systemd[1]: Reloading...
Nov 1 01:50:38.957633 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 01:50:38.959378 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 01:50:38.960964 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 01:50:38.961473 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Nov 1 01:50:38.961589 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Nov 1 01:50:38.969487 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:50:38.969628 systemd-tmpfiles[1297]: Skipping /boot
Nov 1 01:50:38.989690 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 01:50:38.991064 systemd-tmpfiles[1297]: Skipping /boot
Nov 1 01:50:39.019704 systemd-udevd[1299]: Using default interface naming scheme 'v255'.
Nov 1 01:50:39.121149 zram_generator::config[1334]: No configuration found.
Nov 1 01:50:39.318055 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1338)
Nov 1 01:50:39.322798 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 01:50:39.363101 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 1 01:50:39.368057 kernel: ACPI: button: Power Button [PWRF]
Nov 1 01:50:39.431497 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:50:39.470049 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 1 01:50:39.474581 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 1 01:50:39.474956 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 1 01:50:39.505223 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Nov 1 01:50:39.573690 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 01:50:39.576627 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 1 01:50:39.577941 systemd[1]: Reloading finished in 637 ms.
Nov 1 01:50:39.609279 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 01:50:39.617909 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 01:50:39.670800 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:50:39.693155 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:50:39.699633 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 01:50:39.700759 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 01:50:39.754550 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 01:50:39.761075 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 01:50:39.772497 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 01:50:39.784220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 01:50:39.785282 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 01:50:39.792548 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 01:50:39.804494 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 01:50:39.810442 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 01:50:39.821444 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 01:50:39.831527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 01:50:39.836473 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 01:50:39.838268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 01:50:39.843074 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 01:50:39.843386 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 01:50:39.844900 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 01:50:39.845711 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 01:50:39.848531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 01:50:39.849462 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 01:50:39.852396 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 01:50:39.853228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 01:50:39.860841 systemd[1]: Finished ensure-sysext.service.
Nov 1 01:50:39.873490 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 01:50:39.873592 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 01:50:39.885394 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 1 01:50:39.897410 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 01:50:39.906692 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 01:50:39.909737 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 01:50:39.915460 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 01:50:39.928566 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 01:50:39.932130 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 01:50:39.950577 augenrules[1445]: No rules
Nov 1 01:50:39.957323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:50:39.959595 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 01:50:39.965548 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 01:50:39.991653 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 01:50:40.002288 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 01:50:40.021780 lvm[1457]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:50:40.028348 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 01:50:40.062154 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 01:50:40.063366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 01:50:40.071273 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 01:50:40.104645 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 01:50:40.169627 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 01:50:40.170106 systemd-networkd[1428]: lo: Link UP
Nov 1 01:50:40.172049 systemd-networkd[1428]: lo: Gained carrier
Nov 1 01:50:40.176413 systemd-networkd[1428]: Enumeration completed
Nov 1 01:50:40.177188 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:50:40.177299 systemd-networkd[1428]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 01:50:40.180471 systemd-networkd[1428]: eth0: Link UP
Nov 1 01:50:40.180616 systemd-networkd[1428]: eth0: Gained carrier
Nov 1 01:50:40.181787 systemd-networkd[1428]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 01:50:40.208125 systemd-networkd[1428]: eth0: DHCPv4 address 10.230.17.2/30, gateway 10.230.17.1 acquired from 10.230.17.1
Nov 1 01:50:40.215357 systemd-resolved[1429]: Positive Trust Anchors:
Nov 1 01:50:40.215836 systemd-resolved[1429]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 01:50:40.215977 systemd-resolved[1429]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 01:50:40.222775 systemd-resolved[1429]: Using system hostname 'srv-d9muf.gb1.brightbox.com'.
Nov 1 01:50:40.224956 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 01:50:40.226799 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 01:50:40.229247 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 1 01:50:40.230495 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 01:50:40.233214 systemd[1]: Reached target network.target - Network.
Nov 1 01:50:40.234005 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 01:50:40.234909 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 01:50:40.235806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 01:50:40.236670 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 01:50:40.237501 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 01:50:40.238309 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 01:50:40.238361 systemd[1]: Reached target paths.target - Path Units.
Nov 1 01:50:40.239003 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 01:50:40.240050 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 01:50:40.240904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 01:50:40.241707 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 01:50:40.243726 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 01:50:40.246493 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 01:50:40.252301 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 01:50:40.255156 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 01:50:40.256671 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 01:50:40.257524 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 01:50:40.258239 systemd[1]: Reached target basic.target - Basic System.
Nov 1 01:50:40.258949 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:50:40.259000 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 01:50:40.263168 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 01:50:40.279341 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 01:50:40.289672 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 01:50:40.301290 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 01:50:40.303115 systemd-timesyncd[1439]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
Nov 1 01:50:40.303231 systemd-timesyncd[1439]: Initial clock synchronization to Sat 2025-11-01 01:50:40.459764 UTC.
Nov 1 01:50:40.309322 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 01:50:40.310156 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 01:50:40.314767 jq[1480]: false
Nov 1 01:50:40.319336 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 01:50:40.327070 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 01:50:40.329956 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 01:50:40.333941 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 01:50:40.344287 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 01:50:40.346525 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 01:50:40.347404 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 01:50:40.356360 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 01:50:40.360180 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 01:50:40.371452 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 01:50:40.371791 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 01:50:40.397144 extend-filesystems[1481]: Found loop4
Nov 1 01:50:40.395969 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 01:50:40.396315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found loop5
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found loop6
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found loop7
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda1
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda2
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda3
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found usr
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda4
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda6
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda7
Nov 1 01:50:40.407003 extend-filesystems[1481]: Found vda9
Nov 1 01:50:40.407003 extend-filesystems[1481]: Checking size of /dev/vda9
Nov 1 01:50:40.459171 update_engine[1488]: I20251101 01:50:40.442135 1488 main.cc:92] Flatcar Update Engine starting
Nov 1 01:50:40.413153 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 01:50:40.412832 dbus-daemon[1478]: [system] SELinux support is enabled
Nov 1 01:50:40.417814 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 01:50:40.434964 dbus-daemon[1478]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1428 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 1 01:50:40.417860 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 01:50:40.451820 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 01:50:40.466238 jq[1490]: true
Nov 1 01:50:40.418909 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 01:50:40.418938 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 01:50:40.438830 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 01:50:40.439209 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 01:50:40.461657 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 01:50:40.471358 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 1 01:50:40.474986 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 01:50:40.488422 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 01:50:40.491468 update_engine[1488]: I20251101 01:50:40.491384 1488 update_check_scheduler.cc:74] Next update check in 11m0s
Nov 1 01:50:40.495398 extend-filesystems[1481]: Resized partition /dev/vda9
Nov 1 01:50:40.514061 tar[1494]: linux-amd64/LICENSE
Nov 1 01:50:40.514061 tar[1494]: linux-amd64/helm
Nov 1 01:50:40.523961 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Nov 1 01:50:40.524061 extend-filesystems[1517]: resize2fs 1.47.1 (20-May-2024)
Nov 1 01:50:40.554036 jq[1510]: true
Nov 1 01:50:40.622504 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1337)
Nov 1 01:50:40.754994 systemd-logind[1487]: Watching system buttons on /dev/input/event2 (Power Button)
Nov 1 01:50:40.762787 systemd-logind[1487]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 01:50:40.763835 systemd-logind[1487]: New seat seat0.
Nov 1 01:50:40.777542 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 01:50:40.834072 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:50:40.836810 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 01:50:40.854736 systemd[1]: Starting sshkeys.service...
Nov 1 01:50:40.902949 containerd[1506]: time="2025-11-01T01:50:40.902094350Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 1 01:50:40.930164 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 1 01:50:40.941599 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 1 01:50:40.949067 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 1 01:50:40.972961 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Nov 1 01:50:40.973060 containerd[1506]: time="2025-11-01T01:50:40.955107011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.973060 containerd[1506]: time="2025-11-01T01:50:40.957596509Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:50:40.973060 containerd[1506]: time="2025-11-01T01:50:40.957633738Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 1 01:50:40.973060 containerd[1506]: time="2025-11-01T01:50:40.957656589Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 1 01:50:40.975653 containerd[1506]: time="2025-11-01T01:50:40.975408284Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 1 01:50:40.975653 containerd[1506]: time="2025-11-01T01:50:40.975475635Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.975653 containerd[1506]: time="2025-11-01T01:50:40.975606288Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:50:40.975786 extend-filesystems[1517]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Nov 1 01:50:40.975786 extend-filesystems[1517]: old_desc_blocks = 1, new_desc_blocks = 8
Nov 1 01:50:40.975786 extend-filesystems[1517]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Nov 1 01:50:40.991213 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Nov 1 01:50:40.977367 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976085207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976390308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976416607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976438289Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976457138Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976590386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.976981672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.984439794Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.984474822Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.984636250Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.984724560Z" level=info msg="metadata content store policy set" policy=shared
Nov 1 01:50:40.997453 containerd[1506]: time="2025-11-01T01:50:40.995084225Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 1 01:50:40.977667 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.995224340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.995261176Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.995291319Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.995324185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.995575922Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996082376Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996322329Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996351263Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996378059Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996417960Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996454322Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996484651Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996510823Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 1 01:50:40.998832 containerd[1506]: time="2025-11-01T01:50:40.996534849Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 1 01:50:40.997624 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 1 01:50:40.997838 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996557599Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996580614Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996603007Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996658727Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996685602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996706739Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996731117Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996752678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996776459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996798463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996831565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996856160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996883899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002408 containerd[1506]: time="2025-11-01T01:50:40.996905730Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002900 containerd[1506]: time="2025-11-01T01:50:40.996927484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002900 containerd[1506]: time="2025-11-01T01:50:40.996949424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.002900 containerd[1506]: time="2025-11-01T01:50:40.996988644Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 1 01:50:41.004914 containerd[1506]: time="2025-11-01T01:50:41.004588344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.004914 containerd[1506]: time="2025-11-01T01:50:41.004629816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.004914 containerd[1506]: time="2025-11-01T01:50:41.004676148Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 1 01:50:41.004914 containerd[1506]: time="2025-11-01T01:50:41.004872234Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 1 01:50:41.003148 dbus-daemon[1478]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1513 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.005885718Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.005940492Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.005967265Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.005985251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.006036871Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.007137492Z" level=info msg="NRI interface is disabled by configuration."
Nov 1 01:50:41.007793 containerd[1506]: time="2025-11-01T01:50:41.007200075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 1 01:50:41.016525 containerd[1506]: time="2025-11-01T01:50:41.014357115Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 1 01:50:41.016525 containerd[1506]: time="2025-11-01T01:50:41.015378255Z" level=info msg="Connect containerd service"
Nov 1 01:50:41.016525 containerd[1506]: time="2025-11-01T01:50:41.015865716Z" level=info msg="using legacy CRI server"
Nov 1 01:50:41.016525 containerd[1506]: time="2025-11-01T01:50:41.015919313Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 1 01:50:41.020316 containerd[1506]: time="2025-11-01T01:50:41.019298911Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 1 01:50:41.021474 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 1 01:50:41.025721 containerd[1506]: time="2025-11-01T01:50:41.025663738Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 1 01:50:41.028422 containerd[1506]: time="2025-11-01T01:50:41.028354316Z" level=info msg="Start subscribing containerd event"
Nov 1 01:50:41.028488 containerd[1506]: time="2025-11-01T01:50:41.028462141Z" level=info msg="Start recovering state"
Nov 1 01:50:41.028949 containerd[1506]: time="2025-11-01T01:50:41.028609499Z" level=info msg="Start event monitor"
Nov 1 01:50:41.028949 containerd[1506]: time="2025-11-01T01:50:41.028652032Z" level=info msg="Start snapshots syncer"
Nov 1 01:50:41.028949 containerd[1506]: time="2025-11-01T01:50:41.028673926Z" level=info msg="Start cni network conf syncer for default"
Nov 1 01:50:41.028949 containerd[1506]: time="2025-11-01T01:50:41.028687298Z" level=info msg="Start streaming server"
Nov 1 01:50:41.030595 containerd[1506]: time="2025-11-01T01:50:41.029440765Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 1 01:50:41.030595 containerd[1506]: time="2025-11-01T01:50:41.029538585Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 1 01:50:41.030791 containerd[1506]: time="2025-11-01T01:50:41.030733409Z" level=info msg="containerd successfully booted in 0.129890s"
Nov 1 01:50:41.034214 systemd[1]: Started containerd.service - containerd container runtime.
Nov 1 01:50:41.055722 polkitd[1554]: Started polkitd version 121
Nov 1 01:50:41.079812 polkitd[1554]: Loading rules from directory /etc/polkit-1/rules.d
Nov 1 01:50:41.079941 polkitd[1554]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 1 01:50:41.086063 polkitd[1554]: Finished loading, compiling and executing 2 rules
Nov 1 01:50:41.087576 dbus-daemon[1478]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 1 01:50:41.087891 systemd[1]: Started polkit.service - Authorization Manager.
Nov 1 01:50:41.088159 polkitd[1554]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 1 01:50:41.119590 systemd-hostnamed[1513]: Hostname set to (static)
Nov 1 01:50:41.267136 sshd_keygen[1509]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 01:50:41.300742 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 01:50:41.312167 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 01:50:41.337826 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 01:50:41.339213 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 01:50:41.350180 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 01:50:41.377998 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 1 01:50:41.389545 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 1 01:50:41.393265 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 1 01:50:41.394879 systemd[1]: Reached target getty.target - Login Prompts.
Nov 1 01:50:41.562846 tar[1494]: linux-amd64/README.md
Nov 1 01:50:41.583324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 01:50:41.640393 systemd-networkd[1428]: eth0: Gained IPv6LL
Nov 1 01:50:41.645229 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 01:50:41.647725 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 01:50:41.657487 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:50:41.662233 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 01:50:41.705182 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 01:50:41.859922 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 01:50:41.871557 systemd[1]: Started sshd@0-10.230.17.2:22-147.75.109.163:40302.service - OpenSSH per-connection server daemon (147.75.109.163:40302).
Nov 1 01:50:42.755001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:50:42.762632 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:50:42.808114 sshd[1596]: Accepted publickey for core from 147.75.109.163 port 40302 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:50:42.810712 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:50:42.832589 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 1 01:50:42.855677 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 1 01:50:42.867747 systemd-logind[1487]: New session 1 of user core.
Nov 1 01:50:42.891169 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 1 01:50:42.903668 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 1 01:50:42.917723 (systemd)[1608]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 1 01:50:43.071886 systemd[1608]: Queued start job for default target default.target.
Nov 1 01:50:43.077277 systemd[1608]: Created slice app.slice - User Application Slice.
Nov 1 01:50:43.077318 systemd[1608]: Reached target paths.target - Paths.
Nov 1 01:50:43.077342 systemd[1608]: Reached target timers.target - Timers.
Nov 1 01:50:43.081258 systemd[1608]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 1 01:50:43.107971 systemd[1608]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 1 01:50:43.108107 systemd[1608]: Reached target sockets.target - Sockets.
Nov 1 01:50:43.108134 systemd[1608]: Reached target basic.target - Basic System.
Nov 1 01:50:43.108200 systemd[1608]: Reached target default.target - Main User Target.
Nov 1 01:50:43.108267 systemd[1608]: Startup finished in 178ms.
Nov 1 01:50:43.109238 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 1 01:50:43.119583 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 1 01:50:43.152769 systemd-networkd[1428]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8440:24:19ff:fee6:1102/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8440:24:19ff:fee6:1102/64 assigned by NDisc.
Nov 1 01:50:43.152783 systemd-networkd[1428]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Nov 1 01:50:43.462668 kubelet[1604]: E1101 01:50:43.462372 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:50:43.467076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:50:43.467673 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:50:43.468548 systemd[1]: kubelet.service: Consumed 1.134s CPU time.
Nov 1 01:50:43.778493 systemd[1]: Started sshd@1-10.230.17.2:22-147.75.109.163:40312.service - OpenSSH per-connection server daemon (147.75.109.163:40312).
Nov 1 01:50:44.691677 sshd[1627]: Accepted publickey for core from 147.75.109.163 port 40312 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:50:44.694414 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:50:44.701994 systemd-logind[1487]: New session 2 of user core.
Nov 1 01:50:44.713496 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 1 01:50:45.329586 sshd[1627]: pam_unix(sshd:session): session closed for user core
Nov 1 01:50:45.334984 systemd[1]: sshd@1-10.230.17.2:22-147.75.109.163:40312.service: Deactivated successfully.
Nov 1 01:50:45.337366 systemd[1]: session-2.scope: Deactivated successfully.
Nov 1 01:50:45.338633 systemd-logind[1487]: Session 2 logged out. Waiting for processes to exit.
Nov 1 01:50:45.340627 systemd-logind[1487]: Removed session 2.
Nov 1 01:50:45.490368 systemd[1]: Started sshd@2-10.230.17.2:22-147.75.109.163:40316.service - OpenSSH per-connection server daemon (147.75.109.163:40316).
Nov 1 01:50:46.416457 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 40316 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:50:46.420052 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:50:46.434420 systemd-logind[1487]: New session 3 of user core.
Nov 1 01:50:46.444339 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 01:50:46.463245 login[1580]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:50:46.468430 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 1 01:50:46.472515 systemd-logind[1487]: New session 4 of user core.
Nov 1 01:50:46.484367 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 01:50:46.489350 systemd-logind[1487]: New session 5 of user core.
Nov 1 01:50:46.490357 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 01:50:47.048466 sshd[1635]: pam_unix(sshd:session): session closed for user core
Nov 1 01:50:47.052682 systemd-logind[1487]: Session 3 logged out. Waiting for processes to exit.
Nov 1 01:50:47.054169 systemd[1]: sshd@2-10.230.17.2:22-147.75.109.163:40316.service: Deactivated successfully.
Nov 1 01:50:47.056863 systemd[1]: session-3.scope: Deactivated successfully.
Nov 1 01:50:47.059394 systemd-logind[1487]: Removed session 3.
Nov 1 01:50:47.383750 coreos-metadata[1476]: Nov 01 01:50:47.383 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:50:47.411430 coreos-metadata[1476]: Nov 01 01:50:47.411 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Nov 1 01:50:47.418318 coreos-metadata[1476]: Nov 01 01:50:47.418 INFO Fetch failed with 404: resource not found
Nov 1 01:50:47.418318 coreos-metadata[1476]: Nov 01 01:50:47.418 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Nov 1 01:50:47.418977 coreos-metadata[1476]: Nov 01 01:50:47.418 INFO Fetch successful
Nov 1 01:50:47.419110 coreos-metadata[1476]: Nov 01 01:50:47.419 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Nov 1 01:50:47.433234 coreos-metadata[1476]: Nov 01 01:50:47.433 INFO Fetch successful
Nov 1 01:50:47.433234 coreos-metadata[1476]: Nov 01 01:50:47.433 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Nov 1 01:50:47.450808 coreos-metadata[1476]: Nov 01 01:50:47.450 INFO Fetch successful
Nov 1 01:50:47.450808 coreos-metadata[1476]: Nov 01 01:50:47.450 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Nov 1 01:50:47.469431 coreos-metadata[1476]: Nov 01 01:50:47.469 INFO Fetch successful
Nov 1 01:50:47.469431 coreos-metadata[1476]: Nov 01 01:50:47.469 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Nov 1 01:50:47.487676 coreos-metadata[1476]: Nov 01 01:50:47.487 INFO Fetch successful
Nov 1 01:50:47.524682 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 01:50:47.525701 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 1 01:50:48.043371 coreos-metadata[1547]: Nov 01 01:50:48.043 WARN failed to locate config-drive, using the metadata service API instead
Nov 1 01:50:48.066780 coreos-metadata[1547]: Nov 01 01:50:48.066 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Nov 1 01:50:48.098244 coreos-metadata[1547]: Nov 01 01:50:48.098 INFO Fetch successful
Nov 1 01:50:48.098432 coreos-metadata[1547]: Nov 01 01:50:48.098 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Nov 1 01:50:48.127573 coreos-metadata[1547]: Nov 01 01:50:48.127 INFO Fetch successful
Nov 1 01:50:48.129742 unknown[1547]: wrote ssh authorized keys file for user: core
Nov 1 01:50:48.154528 update-ssh-keys[1676]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 01:50:48.156718 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Nov 1 01:50:48.159913 systemd[1]: Finished sshkeys.service.
Nov 1 01:50:48.161553 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 1 01:50:48.162356 systemd[1]: Startup finished in 1.347s (kernel) + 14.662s (initrd) + 11.791s (userspace) = 27.801s.
Nov 1 01:50:53.718184 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 01:50:53.733488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:50:53.906118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:50:53.913960 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:50:54.006833 kubelet[1687]: E1101 01:50:54.006524 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:50:54.011493 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:50:54.011848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:50:57.262449 systemd[1]: Started sshd@3-10.230.17.2:22-147.75.109.163:45394.service - OpenSSH per-connection server daemon (147.75.109.163:45394).
Nov 1 01:50:58.174691 sshd[1696]: Accepted publickey for core from 147.75.109.163 port 45394 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:50:58.176806 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:50:58.183655 systemd-logind[1487]: New session 6 of user core.
Nov 1 01:50:58.192494 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 01:50:58.807280 sshd[1696]: pam_unix(sshd:session): session closed for user core
Nov 1 01:50:58.812741 systemd[1]: sshd@3-10.230.17.2:22-147.75.109.163:45394.service: Deactivated successfully.
Nov 1 01:50:58.815186 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 01:50:58.816176 systemd-logind[1487]: Session 6 logged out. Waiting for processes to exit.
Nov 1 01:50:58.817927 systemd-logind[1487]: Removed session 6.
Nov 1 01:50:58.962888 systemd[1]: Started sshd@4-10.230.17.2:22-147.75.109.163:45400.service - OpenSSH per-connection server daemon (147.75.109.163:45400).
Nov 1 01:50:59.875952 sshd[1703]: Accepted publickey for core from 147.75.109.163 port 45400 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:50:59.878107 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:50:59.884987 systemd-logind[1487]: New session 7 of user core.
Nov 1 01:50:59.898424 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 01:51:00.500782 sshd[1703]: pam_unix(sshd:session): session closed for user core
Nov 1 01:51:00.504808 systemd[1]: sshd@4-10.230.17.2:22-147.75.109.163:45400.service: Deactivated successfully.
Nov 1 01:51:00.507059 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 01:51:00.509271 systemd-logind[1487]: Session 7 logged out. Waiting for processes to exit.
Nov 1 01:51:00.510908 systemd-logind[1487]: Removed session 7.
Nov 1 01:51:00.666518 systemd[1]: Started sshd@5-10.230.17.2:22-147.75.109.163:35332.service - OpenSSH per-connection server daemon (147.75.109.163:35332).
Nov 1 01:51:01.563922 sshd[1710]: Accepted publickey for core from 147.75.109.163 port 35332 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:51:01.566042 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:51:01.573355 systemd-logind[1487]: New session 8 of user core.
Nov 1 01:51:01.577292 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 01:51:02.206389 sshd[1710]: pam_unix(sshd:session): session closed for user core
Nov 1 01:51:02.211833 systemd[1]: sshd@5-10.230.17.2:22-147.75.109.163:35332.service: Deactivated successfully.
Nov 1 01:51:02.214355 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 01:51:02.215339 systemd-logind[1487]: Session 8 logged out. Waiting for processes to exit.
Nov 1 01:51:02.216929 systemd-logind[1487]: Removed session 8.
Nov 1 01:51:02.366530 systemd[1]: Started sshd@6-10.230.17.2:22-147.75.109.163:35338.service - OpenSSH per-connection server daemon (147.75.109.163:35338).
Nov 1 01:51:03.257714 sshd[1717]: Accepted publickey for core from 147.75.109.163 port 35338 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:51:03.259881 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:51:03.267339 systemd-logind[1487]: New session 9 of user core.
Nov 1 01:51:03.275231 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 01:51:03.794537 sudo[1720]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 01:51:03.795065 sudo[1720]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:51:03.814071 sudo[1720]: pam_unix(sudo:session): session closed for user root
Nov 1 01:51:03.960588 sshd[1717]: pam_unix(sshd:session): session closed for user core
Nov 1 01:51:03.966473 systemd[1]: sshd@6-10.230.17.2:22-147.75.109.163:35338.service: Deactivated successfully.
Nov 1 01:51:03.968994 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 01:51:03.969980 systemd-logind[1487]: Session 9 logged out. Waiting for processes to exit.
Nov 1 01:51:03.972211 systemd-logind[1487]: Removed session 9.
Nov 1 01:51:04.113844 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 01:51:04.119263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:04.131322 systemd[1]: Started sshd@7-10.230.17.2:22-147.75.109.163:35354.service - OpenSSH per-connection server daemon (147.75.109.163:35354).
Nov 1 01:51:04.312458 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:04.314829 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:51:04.406213 kubelet[1735]: E1101 01:51:04.405968 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:51:04.408899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:51:04.409164 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:51:05.023845 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 35354 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:51:05.026226 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:51:05.034677 systemd-logind[1487]: New session 10 of user core.
Nov 1 01:51:05.046404 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 1 01:51:05.507145 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 01:51:05.507669 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:51:05.514704 sudo[1744]: pam_unix(sudo:session): session closed for user root
Nov 1 01:51:05.523938 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 01:51:05.524473 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:51:05.550601 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 01:51:05.554322 auditctl[1747]: No rules
Nov 1 01:51:05.554967 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 01:51:05.555343 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 01:51:05.564575 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 01:51:05.603781 augenrules[1765]: No rules
Nov 1 01:51:05.604712 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 01:51:05.605960 sudo[1743]: pam_unix(sudo:session): session closed for user root
Nov 1 01:51:05.751557 sshd[1726]: pam_unix(sshd:session): session closed for user core
Nov 1 01:51:05.756589 systemd-logind[1487]: Session 10 logged out. Waiting for processes to exit.
Nov 1 01:51:05.758169 systemd[1]: sshd@7-10.230.17.2:22-147.75.109.163:35354.service: Deactivated successfully.
Nov 1 01:51:05.760420 systemd[1]: session-10.scope: Deactivated successfully.
Nov 1 01:51:05.761584 systemd-logind[1487]: Removed session 10.
Nov 1 01:51:05.909397 systemd[1]: Started sshd@8-10.230.17.2:22-147.75.109.163:35370.service - OpenSSH per-connection server daemon (147.75.109.163:35370).
Nov 1 01:51:06.821493 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 35370 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:51:06.823690 sshd[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:51:06.831316 systemd-logind[1487]: New session 11 of user core.
Nov 1 01:51:06.841405 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 1 01:51:07.304330 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 01:51:07.305447 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 01:51:07.784717 (dockerd)[1792]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 01:51:07.785397 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 01:51:08.212315 dockerd[1792]: time="2025-11-01T01:51:08.211443232Z" level=info msg="Starting up"
Nov 1 01:51:08.323331 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1105115089-merged.mount: Deactivated successfully.
Nov 1 01:51:08.358341 dockerd[1792]: time="2025-11-01T01:51:08.357895234Z" level=info msg="Loading containers: start."
Nov 1 01:51:08.504126 kernel: Initializing XFRM netlink socket
Nov 1 01:51:08.616666 systemd-networkd[1428]: docker0: Link UP
Nov 1 01:51:08.638219 dockerd[1792]: time="2025-11-01T01:51:08.638095279Z" level=info msg="Loading containers: done."
Nov 1 01:51:08.655616 dockerd[1792]: time="2025-11-01T01:51:08.655470954Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 01:51:08.655819 dockerd[1792]: time="2025-11-01T01:51:08.655621700Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 01:51:08.655819 dockerd[1792]: time="2025-11-01T01:51:08.655802006Z" level=info msg="Daemon has completed initialization"
Nov 1 01:51:08.700328 dockerd[1792]: time="2025-11-01T01:51:08.699167759Z" level=info msg="API listen on /run/docker.sock"
Nov 1 01:51:08.699607 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 01:51:09.893967 containerd[1506]: time="2025-11-01T01:51:09.893141685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Nov 1 01:51:10.888881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount650838185.mount: Deactivated successfully.
Nov 1 01:51:13.187973 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 1 01:51:13.428051 containerd[1506]: time="2025-11-01T01:51:13.426404497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:13.428051 containerd[1506]: time="2025-11-01T01:51:13.427788045Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924"
Nov 1 01:51:13.429180 containerd[1506]: time="2025-11-01T01:51:13.429144713Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:13.433663 containerd[1506]: time="2025-11-01T01:51:13.433626072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:13.435717 containerd[1506]: time="2025-11-01T01:51:13.435653375Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.54238596s"
Nov 1 01:51:13.435807 containerd[1506]: time="2025-11-01T01:51:13.435741122Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Nov 1 01:51:13.441831 containerd[1506]: time="2025-11-01T01:51:13.441597282Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Nov 1 01:51:14.466261 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 01:51:14.477612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:14.670448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:14.682635 (kubelet)[2003]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:51:14.759948 kubelet[2003]: E1101 01:51:14.759608 2003 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:51:14.764803 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:51:14.765309 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:51:16.306990 containerd[1506]: time="2025-11-01T01:51:16.305063452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:16.308314 containerd[1506]: time="2025-11-01T01:51:16.308226442Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035"
Nov 1 01:51:16.309069 containerd[1506]: time="2025-11-01T01:51:16.309007576Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:16.314163 containerd[1506]: time="2025-11-01T01:51:16.313512285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:16.315648 containerd[1506]: time="2025-11-01T01:51:16.315138248Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.873483777s"
Nov 1 01:51:16.315648 containerd[1506]: time="2025-11-01T01:51:16.315214739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Nov 1 01:51:16.316665 containerd[1506]: time="2025-11-01T01:51:16.316598542Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Nov 1 01:51:18.313211 containerd[1506]: time="2025-11-01T01:51:18.313117843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:18.315045 containerd[1506]: time="2025-11-01T01:51:18.314821204Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297"
Nov 1 01:51:18.315993 containerd[1506]: time="2025-11-01T01:51:18.315930812Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:18.320136 containerd[1506]: time="2025-11-01T01:51:18.320086486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:18.322874 containerd[1506]: time="2025-11-01T01:51:18.321795961Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.004287504s"
Nov 1 01:51:18.322874 containerd[1506]: time="2025-11-01T01:51:18.321864565Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Nov 1 01:51:18.323092 containerd[1506]: time="2025-11-01T01:51:18.323064289Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Nov 1 01:51:20.636896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount230715498.mount: Deactivated successfully.
Nov 1 01:51:21.367051 containerd[1506]: time="2025-11-01T01:51:21.366919491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:21.368509 containerd[1506]: time="2025-11-01T01:51:21.368297171Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214"
Nov 1 01:51:21.369126 containerd[1506]: time="2025-11-01T01:51:21.369074077Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:21.372799 containerd[1506]: time="2025-11-01T01:51:21.372712633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:21.374211 containerd[1506]: time="2025-11-01T01:51:21.373747097Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.050636608s"
Nov 1 01:51:21.374211 containerd[1506]: time="2025-11-01T01:51:21.373794475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Nov 1 01:51:21.375224 containerd[1506]: time="2025-11-01T01:51:21.375191746Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Nov 1 01:51:22.140593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530988738.mount: Deactivated successfully.
Nov 1 01:51:23.674728 containerd[1506]: time="2025-11-01T01:51:23.674612671Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:23.677039 containerd[1506]: time="2025-11-01T01:51:23.675651614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Nov 1 01:51:23.677536 containerd[1506]: time="2025-11-01T01:51:23.677500320Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:23.681937 containerd[1506]: time="2025-11-01T01:51:23.681883942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:23.683589 containerd[1506]: time="2025-11-01T01:51:23.683552130Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.308194593s"
Nov 1 01:51:23.683744 containerd[1506]: time="2025-11-01T01:51:23.683714939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Nov 1 01:51:23.687954 containerd[1506]: time="2025-11-01T01:51:23.687819227Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Nov 1 01:51:24.377325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3568629676.mount: Deactivated successfully.
Nov 1 01:51:24.387071 containerd[1506]: time="2025-11-01T01:51:24.386055486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:24.387289 containerd[1506]: time="2025-11-01T01:51:24.387233414Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Nov 1 01:51:24.390047 containerd[1506]: time="2025-11-01T01:51:24.387789978Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:24.391005 containerd[1506]: time="2025-11-01T01:51:24.390957206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:24.392350 containerd[1506]: time="2025-11-01T01:51:24.392303188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 704.238631ms"
Nov 1 01:51:24.392507 containerd[1506]: time="2025-11-01T01:51:24.392353101Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Nov 1 01:51:24.393843 containerd[1506]: time="2025-11-01T01:51:24.393811757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Nov 1 01:51:24.967767 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 1 01:51:24.976270 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:25.165612 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:25.176518 (kubelet)[2090]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 01:51:25.295788 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175815513.mount: Deactivated successfully.
Nov 1 01:51:25.346871 kubelet[2090]: E1101 01:51:25.346672 2090 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 01:51:25.351287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 01:51:25.351624 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 01:51:25.373659 update_engine[1488]: I20251101 01:51:25.373487 1488 update_attempter.cc:509] Updating boot flags...
Nov 1 01:51:25.477279 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2116)
Nov 1 01:51:25.582137 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2118)
Nov 1 01:51:29.593126 containerd[1506]: time="2025-11-01T01:51:29.592940576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:29.594566 containerd[1506]: time="2025-11-01T01:51:29.594485304Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Nov 1 01:51:29.599106 containerd[1506]: time="2025-11-01T01:51:29.599061155Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:29.601315 containerd[1506]: time="2025-11-01T01:51:29.600944497Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 5.206977461s"
Nov 1 01:51:29.601315 containerd[1506]: time="2025-11-01T01:51:29.600997116Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Nov 1 01:51:29.603403 containerd[1506]: time="2025-11-01T01:51:29.602924378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 01:51:33.737099 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:33.756570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:33.798816 systemd[1]: Reloading requested from client PID 2192 ('systemctl') (unit session-11.scope)...
Nov 1 01:51:33.798859 systemd[1]: Reloading...
Nov 1 01:51:33.997072 zram_generator::config[2231]: No configuration found.
Nov 1 01:51:34.138479 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:51:34.254922 systemd[1]: Reloading finished in 455 ms.
Nov 1 01:51:34.331316 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 01:51:34.331477 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 01:51:34.331926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:34.338653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:34.509712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:34.518988 (kubelet)[2298]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 01:51:34.622377 kubelet[2298]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:51:34.622377 kubelet[2298]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 01:51:34.622377 kubelet[2298]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:51:34.638166 kubelet[2298]: I1101 01:51:34.622504 2298 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:51:35.488302 kubelet[2298]: I1101 01:51:35.488241 2298 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 01:51:35.488302 kubelet[2298]: I1101 01:51:35.488294 2298 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:51:35.488737 kubelet[2298]: I1101 01:51:35.488706 2298 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 01:51:35.530550 kubelet[2298]: E1101 01:51:35.530454 2298 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.17.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:51:35.532108 kubelet[2298]: I1101 01:51:35.531838 2298 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 01:51:35.551193 kubelet[2298]: E1101 01:51:35.551092 2298 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 01:51:35.551193 kubelet[2298]: I1101 01:51:35.551168 2298 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 01:51:35.562448 kubelet[2298]: I1101 01:51:35.562349 2298 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 01:51:35.565805 kubelet[2298]: I1101 01:51:35.565718 2298 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 01:51:35.566147 kubelet[2298]: I1101 01:51:35.565799 2298 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-d9muf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 01:51:35.567994 kubelet[2298]: I1101 01:51:35.567921 2298 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 01:51:35.567994 kubelet[2298]: I1101 01:51:35.567987 2298 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 01:51:35.569514 kubelet[2298]: I1101 01:51:35.569463 2298 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:51:35.574189 kubelet[2298]: I1101 01:51:35.573906 2298 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 01:51:35.574189 kubelet[2298]: I1101 01:51:35.573969 2298 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 01:51:35.574189 kubelet[2298]: I1101 01:51:35.574045 2298 kubelet.go:352] "Adding apiserver pod source"
Nov 1 01:51:35.574189 kubelet[2298]: I1101 01:51:35.574073 2298 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 01:51:35.580478 kubelet[2298]: W1101 01:51:35.580408 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-d9muf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused
Nov 1 01:51:35.581144 kubelet[2298]: E1101 01:51:35.581107 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.17.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-d9muf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:51:35.582823 kubelet[2298]: I1101 01:51:35.582650 2298 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 01:51:35.586159 kubelet[2298]: I1101 01:51:35.586133 2298 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 01:51:35.587250 kubelet[2298]: W1101 01:51:35.586377 2298 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Nov 1 01:51:35.589143 kubelet[2298]: I1101 01:51:35.588827 2298 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 01:51:35.589143 kubelet[2298]: I1101 01:51:35.588891 2298 server.go:1287] "Started kubelet"
Nov 1 01:51:35.590295 kubelet[2298]: W1101 01:51:35.590001 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused
Nov 1 01:51:35.590295 kubelet[2298]: E1101 01:51:35.590097 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.17.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:51:35.590295 kubelet[2298]: I1101 01:51:35.590261 2298 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 01:51:35.596525 kubelet[2298]: I1101 01:51:35.595473 2298 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 01:51:35.596525 kubelet[2298]: I1101 01:51:35.595758 2298 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:51:35.596525 kubelet[2298]: I1101 01:51:35.596345 2298 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:51:35.601993 kubelet[2298]: E1101 01:51:35.597634 2298 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.17.2:6443/api/v1/namespaces/default/events\": dial tcp 10.230.17.2:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-d9muf.gb1.brightbox.com.1873bf023e59044d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-d9muf.gb1.brightbox.com,UID:srv-d9muf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-d9muf.gb1.brightbox.com,},FirstTimestamp:2025-11-01 01:51:35.588856909 +0000 UTC m=+1.061535776,LastTimestamp:2025-11-01 01:51:35.588856909 +0000 UTC m=+1.061535776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-d9muf.gb1.brightbox.com,}"
Nov 1 01:51:35.606159 kubelet[2298]: I1101 01:51:35.605493 2298 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:51:35.609028 kubelet[2298]: I1101 01:51:35.608721 2298 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 01:51:35.610466 kubelet[2298]: E1101 01:51:35.609374 2298 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-d9muf.gb1.brightbox.com\" not found"
Nov 1 01:51:35.613650 kubelet[2298]: I1101 01:51:35.613622 2298 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 01:51:35.617934 kubelet[2298]: E1101 01:51:35.617893 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-d9muf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.2:6443: connect: connection refused" interval="200ms"
Nov 1 01:51:35.618096 kubelet[2298]: I1101 01:51:35.615331 2298 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 01:51:35.618850 kubelet[2298]: W1101 01:51:35.618789 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection
refused Nov 1 01:51:35.619004 kubelet[2298]: E1101 01:51:35.618978 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.17.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:35.619803 kubelet[2298]: I1101 01:51:35.619702 2298 factory.go:221] Registration of the systemd container factory successfully Nov 1 01:51:35.620906 kubelet[2298]: I1101 01:51:35.620841 2298 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 01:51:35.621575 kubelet[2298]: I1101 01:51:35.614002 2298 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 01:51:35.625202 kubelet[2298]: E1101 01:51:35.624836 2298 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 01:51:35.626118 kubelet[2298]: I1101 01:51:35.625952 2298 factory.go:221] Registration of the containerd container factory successfully Nov 1 01:51:35.644132 kubelet[2298]: I1101 01:51:35.644081 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 1 01:51:35.650231 kubelet[2298]: I1101 01:51:35.650145 2298 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 01:51:35.650337 kubelet[2298]: I1101 01:51:35.650256 2298 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 01:51:35.650337 kubelet[2298]: I1101 01:51:35.650300 2298 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 1 01:51:35.650337 kubelet[2298]: I1101 01:51:35.650313 2298 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 01:51:35.650497 kubelet[2298]: E1101 01:51:35.650390 2298 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 01:51:35.654540 kubelet[2298]: W1101 01:51:35.653544 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused Nov 1 01:51:35.654540 kubelet[2298]: E1101 01:51:35.653593 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.17.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:35.657117 kubelet[2298]: I1101 01:51:35.656754 2298 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 01:51:35.657117 kubelet[2298]: I1101 01:51:35.656777 2298 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 01:51:35.657117 kubelet[2298]: I1101 01:51:35.656808 2298 state_mem.go:36] "Initialized new in-memory state store" Nov 1 01:51:35.658619 kubelet[2298]: I1101 01:51:35.658597 2298 policy_none.go:49] "None policy: Start" Nov 1 01:51:35.658757 kubelet[2298]: I1101 01:51:35.658737 2298 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 01:51:35.658897 kubelet[2298]: I1101 01:51:35.658878 2298 state_mem.go:35] "Initializing new in-memory state store" Nov 1 01:51:35.672105 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 01:51:35.682949 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Nov 1 01:51:35.689808 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 1 01:51:35.699338 kubelet[2298]: I1101 01:51:35.699299 2298 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 01:51:35.699820 kubelet[2298]: I1101 01:51:35.699614 2298 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 01:51:35.699820 kubelet[2298]: I1101 01:51:35.699647 2298 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 01:51:35.700798 kubelet[2298]: I1101 01:51:35.700129 2298 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 01:51:35.703142 kubelet[2298]: E1101 01:51:35.701353 2298 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 01:51:35.703142 kubelet[2298]: E1101 01:51:35.702155 2298 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-d9muf.gb1.brightbox.com\" not found" Nov 1 01:51:35.766414 systemd[1]: Created slice kubepods-burstable-pod1631a8020048aefb33b3c2b7d3690d00.slice - libcontainer container kubepods-burstable-pod1631a8020048aefb33b3c2b7d3690d00.slice. Nov 1 01:51:35.782466 kubelet[2298]: E1101 01:51:35.782370 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.788717 systemd[1]: Created slice kubepods-burstable-podfea968af995cfe47c881ac93656bca04.slice - libcontainer container kubepods-burstable-podfea968af995cfe47c881ac93656bca04.slice. 
Nov 1 01:51:35.792001 kubelet[2298]: E1101 01:51:35.791966 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.794417 systemd[1]: Created slice kubepods-burstable-podb4bf46a7e2cfefed3806d497de7b8d86.slice - libcontainer container kubepods-burstable-podb4bf46a7e2cfefed3806d497de7b8d86.slice. Nov 1 01:51:35.797540 kubelet[2298]: E1101 01:51:35.797489 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.803056 kubelet[2298]: I1101 01:51:35.803004 2298 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.803703 kubelet[2298]: E1101 01:51:35.803669 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.17.2:6443/api/v1/nodes\": dial tcp 10.230.17.2:6443: connect: connection refused" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.819061 kubelet[2298]: E1101 01:51:35.818980 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-d9muf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.2:6443: connect: connection refused" interval="400ms" Nov 1 01:51:35.919666 kubelet[2298]: I1101 01:51:35.919475 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-ca-certs\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.919666 kubelet[2298]: I1101 01:51:35.919547 2298 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-k8s-certs\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.919666 kubelet[2298]: I1101 01:51:35.919592 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-usr-share-ca-certificates\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.919666 kubelet[2298]: I1101 01:51:35.919628 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-ca-certs\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.919666 kubelet[2298]: I1101 01:51:35.919665 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-flexvolume-dir\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.920188 kubelet[2298]: I1101 01:51:35.919702 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-k8s-certs\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: 
\"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.920188 kubelet[2298]: I1101 01:51:35.919732 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-kubeconfig\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.920188 kubelet[2298]: I1101 01:51:35.919769 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:35.920188 kubelet[2298]: I1101 01:51:35.919797 2298 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fea968af995cfe47c881ac93656bca04-kubeconfig\") pod \"kube-scheduler-srv-d9muf.gb1.brightbox.com\" (UID: \"fea968af995cfe47c881ac93656bca04\") " pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com" Nov 1 01:51:36.007315 kubelet[2298]: I1101 01:51:36.006928 2298 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:36.007634 kubelet[2298]: E1101 01:51:36.007601 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.17.2:6443/api/v1/nodes\": dial tcp 10.230.17.2:6443: connect: connection refused" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:36.084768 containerd[1506]: time="2025-11-01T01:51:36.084572078Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-srv-d9muf.gb1.brightbox.com,Uid:1631a8020048aefb33b3c2b7d3690d00,Namespace:kube-system,Attempt:0,}" Nov 1 01:51:36.101800 containerd[1506]: time="2025-11-01T01:51:36.101655268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-d9muf.gb1.brightbox.com,Uid:fea968af995cfe47c881ac93656bca04,Namespace:kube-system,Attempt:0,}" Nov 1 01:51:36.102230 containerd[1506]: time="2025-11-01T01:51:36.101663731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-d9muf.gb1.brightbox.com,Uid:b4bf46a7e2cfefed3806d497de7b8d86,Namespace:kube-system,Attempt:0,}" Nov 1 01:51:36.219730 kubelet[2298]: E1101 01:51:36.219655 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-d9muf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.2:6443: connect: connection refused" interval="800ms" Nov 1 01:51:36.411903 kubelet[2298]: I1101 01:51:36.411373 2298 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:36.411903 kubelet[2298]: E1101 01:51:36.411784 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.17.2:6443/api/v1/nodes\": dial tcp 10.230.17.2:6443: connect: connection refused" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:36.669095 kubelet[2298]: W1101 01:51:36.668778 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.17.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-d9muf.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused Nov 1 01:51:36.669095 kubelet[2298]: E1101 01:51:36.668877 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.17.2:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-d9muf.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:36.763288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1633940081.mount: Deactivated successfully. Nov 1 01:51:36.774878 containerd[1506]: time="2025-11-01T01:51:36.774616603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:51:36.777897 containerd[1506]: time="2025-11-01T01:51:36.777800733Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:51:36.780447 containerd[1506]: time="2025-11-01T01:51:36.780383741Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:51:36.781470 containerd[1506]: time="2025-11-01T01:51:36.781392667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 1 01:51:36.782192 containerd[1506]: time="2025-11-01T01:51:36.782122501Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:51:36.783274 containerd[1506]: time="2025-11-01T01:51:36.783074169Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 01:51:36.783274 containerd[1506]: time="2025-11-01T01:51:36.783206720Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:51:36.787329 
containerd[1506]: time="2025-11-01T01:51:36.787279209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 01:51:36.790813 containerd[1506]: time="2025-11-01T01:51:36.790759986Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.197334ms" Nov 1 01:51:36.794007 containerd[1506]: time="2025-11-01T01:51:36.793877849Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 692.108653ms" Nov 1 01:51:36.795132 containerd[1506]: time="2025-11-01T01:51:36.795093587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 693.181402ms" Nov 1 01:51:36.939666 kubelet[2298]: W1101 01:51:36.938666 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.17.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused Nov 1 01:51:36.939666 kubelet[2298]: E1101 01:51:36.938746 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.17.2:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:36.939666 kubelet[2298]: W1101 01:51:36.939270 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.17.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused Nov 1 01:51:36.939666 kubelet[2298]: E1101 01:51:36.939339 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.17.2:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:36.992096 containerd[1506]: time="2025-11-01T01:51:36.991933703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:51:36.992314 containerd[1506]: time="2025-11-01T01:51:36.992053323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:51:36.992314 containerd[1506]: time="2025-11-01T01:51:36.992097801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:36.992314 containerd[1506]: time="2025-11-01T01:51:36.992226039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:37.008362 containerd[1506]: time="2025-11-01T01:51:37.007984894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:51:37.008362 containerd[1506]: time="2025-11-01T01:51:37.008094132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:51:37.008362 containerd[1506]: time="2025-11-01T01:51:37.008125106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:37.008362 containerd[1506]: time="2025-11-01T01:51:37.008229532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:37.014868 containerd[1506]: time="2025-11-01T01:51:37.014455292Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:51:37.014868 containerd[1506]: time="2025-11-01T01:51:37.014536057Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:51:37.014868 containerd[1506]: time="2025-11-01T01:51:37.014562166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:37.014868 containerd[1506]: time="2025-11-01T01:51:37.014692784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:51:37.021378 kubelet[2298]: W1101 01:51:37.021328 2298 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.17.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.17.2:6443: connect: connection refused Nov 1 01:51:37.021521 kubelet[2298]: E1101 01:51:37.021398 2298 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.17.2:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError" Nov 1 01:51:37.023025 kubelet[2298]: E1101 01:51:37.022599 2298 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.17.2:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-d9muf.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.17.2:6443: connect: connection refused" interval="1.6s" Nov 1 01:51:37.051044 systemd[1]: Started cri-containerd-c5700020c64813833b73a4d143be65063ab1b6004429736bd03245e095e1d9fd.scope - libcontainer container c5700020c64813833b73a4d143be65063ab1b6004429736bd03245e095e1d9fd. Nov 1 01:51:37.063503 systemd[1]: Started cri-containerd-459a507c058a5baa6f0bcb89a2b8c03377332a3b4b53c60a0dcb20d0c27dbde0.scope - libcontainer container 459a507c058a5baa6f0bcb89a2b8c03377332a3b4b53c60a0dcb20d0c27dbde0. Nov 1 01:51:37.083376 systemd[1]: Started cri-containerd-5aef4231475fab18e15dafc9df61dee1f9e406dac0890ff27c160b4d4f0d6766.scope - libcontainer container 5aef4231475fab18e15dafc9df61dee1f9e406dac0890ff27c160b4d4f0d6766. 
Nov 1 01:51:37.181119 containerd[1506]: time="2025-11-01T01:51:37.180838451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-d9muf.gb1.brightbox.com,Uid:b4bf46a7e2cfefed3806d497de7b8d86,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5700020c64813833b73a4d143be65063ab1b6004429736bd03245e095e1d9fd\"" Nov 1 01:51:37.199059 containerd[1506]: time="2025-11-01T01:51:37.197471083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-d9muf.gb1.brightbox.com,Uid:1631a8020048aefb33b3c2b7d3690d00,Namespace:kube-system,Attempt:0,} returns sandbox id \"459a507c058a5baa6f0bcb89a2b8c03377332a3b4b53c60a0dcb20d0c27dbde0\"" Nov 1 01:51:37.206117 containerd[1506]: time="2025-11-01T01:51:37.205379352Z" level=info msg="CreateContainer within sandbox \"c5700020c64813833b73a4d143be65063ab1b6004429736bd03245e095e1d9fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 01:51:37.207595 containerd[1506]: time="2025-11-01T01:51:37.207463542Z" level=info msg="CreateContainer within sandbox \"459a507c058a5baa6f0bcb89a2b8c03377332a3b4b53c60a0dcb20d0c27dbde0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 01:51:37.228126 containerd[1506]: time="2025-11-01T01:51:37.227994941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-d9muf.gb1.brightbox.com,Uid:fea968af995cfe47c881ac93656bca04,Namespace:kube-system,Attempt:0,} returns sandbox id \"5aef4231475fab18e15dafc9df61dee1f9e406dac0890ff27c160b4d4f0d6766\"" Nov 1 01:51:37.229159 kubelet[2298]: I1101 01:51:37.228090 2298 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com" Nov 1 01:51:37.229159 kubelet[2298]: E1101 01:51:37.229094 2298 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.17.2:6443/api/v1/nodes\": dial tcp 10.230.17.2:6443: connect: connection refused" node="srv-d9muf.gb1.brightbox.com" Nov 1 
01:51:37.234510 containerd[1506]: time="2025-11-01T01:51:37.234463170Z" level=info msg="CreateContainer within sandbox \"5aef4231475fab18e15dafc9df61dee1f9e406dac0890ff27c160b4d4f0d6766\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 01:51:37.235247 containerd[1506]: time="2025-11-01T01:51:37.235164780Z" level=info msg="CreateContainer within sandbox \"c5700020c64813833b73a4d143be65063ab1b6004429736bd03245e095e1d9fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b652f9f63da84e7d9466263a3ed306f6de7f6af6530a6bc60fdfbd794f88cad\"" Nov 1 01:51:37.235852 containerd[1506]: time="2025-11-01T01:51:37.235809565Z" level=info msg="StartContainer for \"5b652f9f63da84e7d9466263a3ed306f6de7f6af6530a6bc60fdfbd794f88cad\"" Nov 1 01:51:37.263787 containerd[1506]: time="2025-11-01T01:51:37.263721416Z" level=info msg="CreateContainer within sandbox \"459a507c058a5baa6f0bcb89a2b8c03377332a3b4b53c60a0dcb20d0c27dbde0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"060190b414b3ccb15fb2f1929125fdbde0cd27d2198c7db81844e7a66af1dda0\"" Nov 1 01:51:37.265319 containerd[1506]: time="2025-11-01T01:51:37.265273643Z" level=info msg="StartContainer for \"060190b414b3ccb15fb2f1929125fdbde0cd27d2198c7db81844e7a66af1dda0\"" Nov 1 01:51:37.274644 containerd[1506]: time="2025-11-01T01:51:37.274440273Z" level=info msg="CreateContainer within sandbox \"5aef4231475fab18e15dafc9df61dee1f9e406dac0890ff27c160b4d4f0d6766\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70ae5b82f880561d7de876c2460bbaa4f9b33cba193073634228018fbd543ea7\"" Nov 1 01:51:37.278720 containerd[1506]: time="2025-11-01T01:51:37.277324800Z" level=info msg="StartContainer for \"70ae5b82f880561d7de876c2460bbaa4f9b33cba193073634228018fbd543ea7\"" Nov 1 01:51:37.277824 systemd[1]: Started cri-containerd-5b652f9f63da84e7d9466263a3ed306f6de7f6af6530a6bc60fdfbd794f88cad.scope - libcontainer container 
5b652f9f63da84e7d9466263a3ed306f6de7f6af6530a6bc60fdfbd794f88cad.
Nov 1 01:51:37.333353 systemd[1]: Started cri-containerd-060190b414b3ccb15fb2f1929125fdbde0cd27d2198c7db81844e7a66af1dda0.scope - libcontainer container 060190b414b3ccb15fb2f1929125fdbde0cd27d2198c7db81844e7a66af1dda0.
Nov 1 01:51:37.357370 systemd[1]: Started cri-containerd-70ae5b82f880561d7de876c2460bbaa4f9b33cba193073634228018fbd543ea7.scope - libcontainer container 70ae5b82f880561d7de876c2460bbaa4f9b33cba193073634228018fbd543ea7.
Nov 1 01:51:37.406847 containerd[1506]: time="2025-11-01T01:51:37.406058532Z" level=info msg="StartContainer for \"5b652f9f63da84e7d9466263a3ed306f6de7f6af6530a6bc60fdfbd794f88cad\" returns successfully"
Nov 1 01:51:37.455279 containerd[1506]: time="2025-11-01T01:51:37.453460505Z" level=info msg="StartContainer for \"060190b414b3ccb15fb2f1929125fdbde0cd27d2198c7db81844e7a66af1dda0\" returns successfully"
Nov 1 01:51:37.498786 containerd[1506]: time="2025-11-01T01:51:37.498698289Z" level=info msg="StartContainer for \"70ae5b82f880561d7de876c2460bbaa4f9b33cba193073634228018fbd543ea7\" returns successfully"
Nov 1 01:51:37.577104 kubelet[2298]: E1101 01:51:37.576556 2298 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.17.2:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.17.2:6443: connect: connection refused" logger="UnhandledError"
Nov 1 01:51:37.672434 kubelet[2298]: E1101 01:51:37.671705 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:37.672434 kubelet[2298]: E1101 01:51:37.671811 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:37.678354 kubelet[2298]: E1101 01:51:37.678309 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:38.682789 kubelet[2298]: E1101 01:51:38.682673 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:38.684062 kubelet[2298]: E1101 01:51:38.683547 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:38.684062 kubelet[2298]: E1101 01:51:38.683883 2298 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:38.833726 kubelet[2298]: I1101 01:51:38.833665 2298 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.257202 kubelet[2298]: E1101 01:51:40.257129 2298 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-d9muf.gb1.brightbox.com\" not found" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.272830 kubelet[2298]: E1101 01:51:40.271618 2298 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-d9muf.gb1.brightbox.com.1873bf023e59044d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-d9muf.gb1.brightbox.com,UID:srv-d9muf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-d9muf.gb1.brightbox.com,},FirstTimestamp:2025-11-01 01:51:35.588856909 +0000 UTC m=+1.061535776,LastTimestamp:2025-11-01 01:51:35.588856909 +0000 UTC m=+1.061535776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-d9muf.gb1.brightbox.com,}"
Nov 1 01:51:40.321188 kubelet[2298]: I1101 01:51:40.319777 2298 kubelet_node_status.go:78] "Successfully registered node" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.321188 kubelet[2298]: E1101 01:51:40.319850 2298 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-d9muf.gb1.brightbox.com\": node \"srv-d9muf.gb1.brightbox.com\" not found"
Nov 1 01:51:40.335554 kubelet[2298]: E1101 01:51:40.335205 2298 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-d9muf.gb1.brightbox.com.1873bf02407db382 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-d9muf.gb1.brightbox.com,UID:srv-d9muf.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-d9muf.gb1.brightbox.com,},FirstTimestamp:2025-11-01 01:51:35.62481549 +0000 UTC m=+1.097494368,LastTimestamp:2025-11-01 01:51:35.62481549 +0000 UTC m=+1.097494368,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-d9muf.gb1.brightbox.com,}"
Nov 1 01:51:40.411256 kubelet[2298]: I1101 01:51:40.410780 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.420858 kubelet[2298]: E1101 01:51:40.420807 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.420858 kubelet[2298]: I1101 01:51:40.420851 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.424851 kubelet[2298]: E1101 01:51:40.424650 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-d9muf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.424851 kubelet[2298]: I1101 01:51:40.424683 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.427626 kubelet[2298]: E1101 01:51:40.427590 2298 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:40.592278 kubelet[2298]: I1101 01:51:40.592071 2298 apiserver.go:52] "Watching apiserver"
Nov 1 01:51:40.622006 kubelet[2298]: I1101 01:51:40.621891 2298 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 1 01:51:42.544212 systemd[1]: Reloading requested from client PID 2574 ('systemctl') (unit session-11.scope)...
Nov 1 01:51:42.544261 systemd[1]: Reloading...
Nov 1 01:51:42.677076 zram_generator::config[2616]: No configuration found.
Nov 1 01:51:42.865809 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 01:51:43.009308 systemd[1]: Reloading finished in 464 ms.
Nov 1 01:51:43.021395 kubelet[2298]: I1101 01:51:43.020859 2298 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.034547 kubelet[2298]: W1101 01:51:43.033877 2298 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:43.073538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:43.084757 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 01:51:43.085147 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:43.085279 systemd[1]: kubelet.service: Consumed 1.641s CPU time, 132.5M memory peak, 0B memory swap peak.
Nov 1 01:51:43.093471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 01:51:43.349283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 01:51:43.367803 (kubelet)[2677]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 01:51:43.474582 kubelet[2677]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:51:43.476065 kubelet[2677]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 01:51:43.476065 kubelet[2677]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 01:51:43.476065 kubelet[2677]: I1101 01:51:43.475428 2677 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 01:51:43.489037 kubelet[2677]: I1101 01:51:43.488966 2677 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Nov 1 01:51:43.491038 kubelet[2677]: I1101 01:51:43.489299 2677 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 01:51:43.491038 kubelet[2677]: I1101 01:51:43.489695 2677 server.go:954] "Client rotation is on, will bootstrap in background"
Nov 1 01:51:43.492091 kubelet[2677]: I1101 01:51:43.492065 2677 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 1 01:51:43.495806 kubelet[2677]: I1101 01:51:43.495750 2677 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 1 01:51:43.508511 kubelet[2677]: E1101 01:51:43.508192 2677 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Nov 1 01:51:43.508511 kubelet[2677]: I1101 01:51:43.508252 2677 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Nov 1 01:51:43.517365 kubelet[2677]: I1101 01:51:43.517314 2677 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 1 01:51:43.517811 kubelet[2677]: I1101 01:51:43.517739 2677 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 1 01:51:43.519977 kubelet[2677]: I1101 01:51:43.517806 2677 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-d9muf.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Nov 1 01:51:43.519977 kubelet[2677]: I1101 01:51:43.519692 2677 topology_manager.go:138] "Creating topology manager with none policy"
Nov 1 01:51:43.519977 kubelet[2677]: I1101 01:51:43.519713 2677 container_manager_linux.go:304] "Creating device plugin manager"
Nov 1 01:51:43.524037 kubelet[2677]: I1101 01:51:43.523204 2677 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:51:43.524037 kubelet[2677]: I1101 01:51:43.523713 2677 kubelet.go:446] "Attempting to sync node with API server"
Nov 1 01:51:43.524037 kubelet[2677]: I1101 01:51:43.523778 2677 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 1 01:51:43.524037 kubelet[2677]: I1101 01:51:43.523827 2677 kubelet.go:352] "Adding apiserver pod source"
Nov 1 01:51:43.524037 kubelet[2677]: I1101 01:51:43.523851 2677 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 1 01:51:43.536434 kubelet[2677]: I1101 01:51:43.534633 2677 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 1 01:51:43.539954 kubelet[2677]: I1101 01:51:43.539923 2677 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 1 01:51:43.554169 kubelet[2677]: I1101 01:51:43.554058 2677 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Nov 1 01:51:43.554513 kubelet[2677]: I1101 01:51:43.554479 2677 server.go:1287] "Started kubelet"
Nov 1 01:51:43.562349 kubelet[2677]: I1101 01:51:43.558758 2677 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 1 01:51:43.562349 kubelet[2677]: I1101 01:51:43.560510 2677 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 1 01:51:43.562349 kubelet[2677]: I1101 01:51:43.560614 2677 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Nov 1 01:51:43.563458 kubelet[2677]: I1101 01:51:43.563373 2677 server.go:479] "Adding debug handlers to kubelet server"
Nov 1 01:51:43.564791 kubelet[2677]: I1101 01:51:43.564708 2677 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 1 01:51:43.566252 kubelet[2677]: I1101 01:51:43.566215 2677 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Nov 1 01:51:43.567942 kubelet[2677]: I1101 01:51:43.567239 2677 volume_manager.go:297] "Starting Kubelet Volume Manager"
Nov 1 01:51:43.571689 kubelet[2677]: I1101 01:51:43.571611 2677 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Nov 1 01:51:43.571941 kubelet[2677]: I1101 01:51:43.571911 2677 reconciler.go:26] "Reconciler: start to sync state"
Nov 1 01:51:43.577041 kubelet[2677]: I1101 01:51:43.576280 2677 factory.go:221] Registration of the systemd container factory successfully
Nov 1 01:51:43.577041 kubelet[2677]: I1101 01:51:43.576473 2677 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 1 01:51:43.588239 kubelet[2677]: I1101 01:51:43.586559 2677 factory.go:221] Registration of the containerd container factory successfully
Nov 1 01:51:43.590387 kubelet[2677]: E1101 01:51:43.590354 2677 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 1 01:51:43.599865 kubelet[2677]: I1101 01:51:43.599153 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 1 01:51:43.603041 kubelet[2677]: I1101 01:51:43.602720 2677 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 1 01:51:43.603041 kubelet[2677]: I1101 01:51:43.602778 2677 status_manager.go:227] "Starting to sync pod status with apiserver"
Nov 1 01:51:43.603041 kubelet[2677]: I1101 01:51:43.602814 2677 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 01:51:43.603041 kubelet[2677]: I1101 01:51:43.602827 2677 kubelet.go:2382] "Starting kubelet main sync loop"
Nov 1 01:51:43.603041 kubelet[2677]: E1101 01:51:43.602904 2677 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672380 2677 cpu_manager.go:221] "Starting CPU manager" policy="none"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672416 2677 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672457 2677 state_mem.go:36] "Initialized new in-memory state store"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672757 2677 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672779 2677 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672828 2677 policy_none.go:49] "None policy: Start"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672863 2677 memory_manager.go:186] "Starting memorymanager" policy="None"
Nov 1 01:51:43.673046 kubelet[2677]: I1101 01:51:43.672899 2677 state_mem.go:35] "Initializing new in-memory state store"
Nov 1 01:51:43.673752 kubelet[2677]: I1101 01:51:43.673728 2677 state_mem.go:75] "Updated machine memory state"
Nov 1 01:51:43.686856 kubelet[2677]: I1101 01:51:43.686799 2677 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 1 01:51:43.687644 kubelet[2677]: I1101 01:51:43.687603 2677 eviction_manager.go:189] "Eviction manager: starting control loop"
Nov 1 01:51:43.687904 kubelet[2677]: I1101 01:51:43.687822 2677 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 1 01:51:43.688978 kubelet[2677]: I1101 01:51:43.688956 2677 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 1 01:51:43.693555 kubelet[2677]: E1101 01:51:43.693522 2677 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Nov 1 01:51:43.710834 kubelet[2677]: I1101 01:51:43.710790 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.712267 kubelet[2677]: I1101 01:51:43.711372 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.713686 kubelet[2677]: I1101 01:51:43.711618 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.727788 kubelet[2677]: W1101 01:51:43.727654 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:43.728746 kubelet[2677]: W1101 01:51:43.728394 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:43.729920 kubelet[2677]: W1101 01:51:43.729725 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:43.729920 kubelet[2677]: E1101 01:51:43.729800 2677 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774238 kubelet[2677]: I1101 01:51:43.773867 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-k8s-certs\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774238 kubelet[2677]: I1101 01:51:43.773958 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-k8s-certs\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774238 kubelet[2677]: I1101 01:51:43.774009 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-kubeconfig\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774238 kubelet[2677]: I1101 01:51:43.774064 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-ca-certs\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774238 kubelet[2677]: I1101 01:51:43.774100 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4bf46a7e2cfefed3806d497de7b8d86-usr-share-ca-certificates\") pod \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" (UID: \"b4bf46a7e2cfefed3806d497de7b8d86\") " pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774767 kubelet[2677]: I1101 01:51:43.774131 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-ca-certs\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774767 kubelet[2677]: I1101 01:51:43.774177 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-flexvolume-dir\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774767 kubelet[2677]: I1101 01:51:43.774210 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1631a8020048aefb33b3c2b7d3690d00-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-d9muf.gb1.brightbox.com\" (UID: \"1631a8020048aefb33b3c2b7d3690d00\") " pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.774767 kubelet[2677]: I1101 01:51:43.774245 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fea968af995cfe47c881ac93656bca04-kubeconfig\") pod \"kube-scheduler-srv-d9muf.gb1.brightbox.com\" (UID: \"fea968af995cfe47c881ac93656bca04\") " pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.834519 kubelet[2677]: I1101 01:51:43.832967 2677 kubelet_node_status.go:75] "Attempting to register node" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.851830 kubelet[2677]: I1101 01:51:43.851711 2677 kubelet_node_status.go:124] "Node was previously registered" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:43.852124 kubelet[2677]: I1101 01:51:43.851869 2677 kubelet_node_status.go:78] "Successfully registered node" node="srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:44.529992 kubelet[2677]: I1101 01:51:44.529893 2677 apiserver.go:52] "Watching apiserver"
Nov 1 01:51:44.572602 kubelet[2677]: I1101 01:51:44.572531 2677 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Nov 1 01:51:44.640381 kubelet[2677]: I1101 01:51:44.640324 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:44.640973 kubelet[2677]: I1101 01:51:44.640950 2677 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:44.660875 kubelet[2677]: W1101 01:51:44.658608 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:44.660875 kubelet[2677]: E1101 01:51:44.658805 2677 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-d9muf.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:44.660875 kubelet[2677]: W1101 01:51:44.659914 2677 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 1 01:51:44.660875 kubelet[2677]: E1101 01:51:44.659955 2677 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-d9muf.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com"
Nov 1 01:51:44.706385 kubelet[2677]: I1101 01:51:44.706268 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-d9muf.gb1.brightbox.com" podStartSLOduration=1.7061791240000002 podStartE2EDuration="1.706179124s" podCreationTimestamp="2025-11-01 01:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:51:44.704213856 +0000 UTC m=+1.324425549" watchObservedRunningTime="2025-11-01 01:51:44.706179124 +0000 UTC m=+1.326390793"
Nov 1 01:51:44.732892 kubelet[2677]: I1101 01:51:44.732779 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-d9muf.gb1.brightbox.com" podStartSLOduration=1.732751592 podStartE2EDuration="1.732751592s" podCreationTimestamp="2025-11-01 01:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:51:44.71769937 +0000 UTC m=+1.337911063" watchObservedRunningTime="2025-11-01 01:51:44.732751592 +0000 UTC m=+1.352963261"
Nov 1 01:51:44.733192 kubelet[2677]: I1101 01:51:44.732982 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-d9muf.gb1.brightbox.com" podStartSLOduration=1.73297382 podStartE2EDuration="1.73297382s" podCreationTimestamp="2025-11-01 01:51:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:51:44.730821853 +0000 UTC m=+1.351033527" watchObservedRunningTime="2025-11-01 01:51:44.73297382 +0000 UTC m=+1.353185494"
Nov 1 01:51:48.443145 kubelet[2677]: I1101 01:51:48.442982 2677 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 1 01:51:48.447953 containerd[1506]: time="2025-11-01T01:51:48.444992017Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 1 01:51:48.448973 kubelet[2677]: I1101 01:51:48.447292 2677 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 1 01:51:49.361410 systemd[1]: Created slice kubepods-besteffort-podc28155b6_1d6c_4882_942b_6855af3490aa.slice - libcontainer container kubepods-besteffort-podc28155b6_1d6c_4882_942b_6855af3490aa.slice.
Nov 1 01:51:49.507538 kubelet[2677]: I1101 01:51:49.507399 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c28155b6-1d6c-4882-942b-6855af3490aa-lib-modules\") pod \"kube-proxy-s84ng\" (UID: \"c28155b6-1d6c-4882-942b-6855af3490aa\") " pod="kube-system/kube-proxy-s84ng"
Nov 1 01:51:49.507538 kubelet[2677]: I1101 01:51:49.507524 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c28155b6-1d6c-4882-942b-6855af3490aa-kube-proxy\") pod \"kube-proxy-s84ng\" (UID: \"c28155b6-1d6c-4882-942b-6855af3490aa\") " pod="kube-system/kube-proxy-s84ng"
Nov 1 01:51:49.508605 kubelet[2677]: I1101 01:51:49.507583 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c28155b6-1d6c-4882-942b-6855af3490aa-xtables-lock\") pod \"kube-proxy-s84ng\" (UID: \"c28155b6-1d6c-4882-942b-6855af3490aa\") " pod="kube-system/kube-proxy-s84ng"
Nov 1 01:51:49.508605 kubelet[2677]: I1101 01:51:49.507616 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2zz2\" (UniqueName: \"kubernetes.io/projected/c28155b6-1d6c-4882-942b-6855af3490aa-kube-api-access-t2zz2\") pod \"kube-proxy-s84ng\" (UID: \"c28155b6-1d6c-4882-942b-6855af3490aa\") " pod="kube-system/kube-proxy-s84ng"
Nov 1 01:51:49.598076 systemd[1]: Created slice kubepods-besteffort-pod8708b51c_9e28_4a5a_b9ff_d98a3f454720.slice - libcontainer container kubepods-besteffort-pod8708b51c_9e28_4a5a_b9ff_d98a3f454720.slice.
Nov 1 01:51:49.674774 containerd[1506]: time="2025-11-01T01:51:49.674554511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s84ng,Uid:c28155b6-1d6c-4882-942b-6855af3490aa,Namespace:kube-system,Attempt:0,}"
Nov 1 01:51:49.709135 kubelet[2677]: I1101 01:51:49.708902 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6d69\" (UniqueName: \"kubernetes.io/projected/8708b51c-9e28-4a5a-b9ff-d98a3f454720-kube-api-access-w6d69\") pod \"tigera-operator-7dcd859c48-z9xnp\" (UID: \"8708b51c-9e28-4a5a-b9ff-d98a3f454720\") " pod="tigera-operator/tigera-operator-7dcd859c48-z9xnp"
Nov 1 01:51:49.709135 kubelet[2677]: I1101 01:51:49.708970 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8708b51c-9e28-4a5a-b9ff-d98a3f454720-var-lib-calico\") pod \"tigera-operator-7dcd859c48-z9xnp\" (UID: \"8708b51c-9e28-4a5a-b9ff-d98a3f454720\") " pod="tigera-operator/tigera-operator-7dcd859c48-z9xnp"
Nov 1 01:51:49.718236 containerd[1506]: time="2025-11-01T01:51:49.717783041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:51:49.718236 containerd[1506]: time="2025-11-01T01:51:49.717977960Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:51:49.718236 containerd[1506]: time="2025-11-01T01:51:49.718005418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:51:49.721491 containerd[1506]: time="2025-11-01T01:51:49.721340030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:51:49.768279 systemd[1]: Started cri-containerd-927323ebe48ec89063ed325fed9ff13860d1ac80c5a3464276527eab9d266a67.scope - libcontainer container 927323ebe48ec89063ed325fed9ff13860d1ac80c5a3464276527eab9d266a67.
Nov 1 01:51:49.816482 containerd[1506]: time="2025-11-01T01:51:49.816243873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s84ng,Uid:c28155b6-1d6c-4882-942b-6855af3490aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"927323ebe48ec89063ed325fed9ff13860d1ac80c5a3464276527eab9d266a67\""
Nov 1 01:51:49.830057 containerd[1506]: time="2025-11-01T01:51:49.829911275Z" level=info msg="CreateContainer within sandbox \"927323ebe48ec89063ed325fed9ff13860d1ac80c5a3464276527eab9d266a67\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 01:51:49.858484 containerd[1506]: time="2025-11-01T01:51:49.858438206Z" level=info msg="CreateContainer within sandbox \"927323ebe48ec89063ed325fed9ff13860d1ac80c5a3464276527eab9d266a67\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f7f6a85061a04a9efe227d777f634623e7a648a68a882a5f6b01a388c01c664a\""
Nov 1 01:51:49.860059 containerd[1506]: time="2025-11-01T01:51:49.859957598Z" level=info msg="StartContainer for \"f7f6a85061a04a9efe227d777f634623e7a648a68a882a5f6b01a388c01c664a\""
Nov 1 01:51:49.897293 systemd[1]: Started cri-containerd-f7f6a85061a04a9efe227d777f634623e7a648a68a882a5f6b01a388c01c664a.scope - libcontainer container f7f6a85061a04a9efe227d777f634623e7a648a68a882a5f6b01a388c01c664a.
Nov 1 01:51:49.903195 containerd[1506]: time="2025-11-01T01:51:49.903134050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z9xnp,Uid:8708b51c-9e28-4a5a-b9ff-d98a3f454720,Namespace:tigera-operator,Attempt:0,}"
Nov 1 01:51:49.953327 containerd[1506]: time="2025-11-01T01:51:49.952173962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 01:51:49.953327 containerd[1506]: time="2025-11-01T01:51:49.952392905Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 01:51:49.953327 containerd[1506]: time="2025-11-01T01:51:49.952418314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:51:49.953327 containerd[1506]: time="2025-11-01T01:51:49.952859455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 01:51:49.972053 containerd[1506]: time="2025-11-01T01:51:49.970960433Z" level=info msg="StartContainer for \"f7f6a85061a04a9efe227d777f634623e7a648a68a882a5f6b01a388c01c664a\" returns successfully"
Nov 1 01:51:49.997250 systemd[1]: Started cri-containerd-e6c8d4d80327b4ac60388a0194064cb4ff2f05d571914490a267ba0edc3f4ee2.scope - libcontainer container e6c8d4d80327b4ac60388a0194064cb4ff2f05d571914490a267ba0edc3f4ee2.
Nov 1 01:51:50.073736 containerd[1506]: time="2025-11-01T01:51:50.073569637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-z9xnp,Uid:8708b51c-9e28-4a5a-b9ff-d98a3f454720,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e6c8d4d80327b4ac60388a0194064cb4ff2f05d571914490a267ba0edc3f4ee2\"" Nov 1 01:51:50.081064 containerd[1506]: time="2025-11-01T01:51:50.079476129Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 01:51:51.739277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886209357.mount: Deactivated successfully. Nov 1 01:51:52.618158 kubelet[2677]: I1101 01:51:52.617416 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s84ng" podStartSLOduration=3.617374781 podStartE2EDuration="3.617374781s" podCreationTimestamp="2025-11-01 01:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:51:50.680328331 +0000 UTC m=+7.300540011" watchObservedRunningTime="2025-11-01 01:51:52.617374781 +0000 UTC m=+9.237586452" Nov 1 01:51:52.832303 containerd[1506]: time="2025-11-01T01:51:52.831065061Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:51:52.834175 containerd[1506]: time="2025-11-01T01:51:52.834102370Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 01:51:52.837126 containerd[1506]: time="2025-11-01T01:51:52.834967686Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:51:52.838134 containerd[1506]: time="2025-11-01T01:51:52.838091493Z" level=info msg="ImageCreate event 
name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:51:52.839548 containerd[1506]: time="2025-11-01T01:51:52.839508584Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.759960183s" Nov 1 01:51:52.839693 containerd[1506]: time="2025-11-01T01:51:52.839664265Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 01:51:52.842857 containerd[1506]: time="2025-11-01T01:51:52.842815638Z" level=info msg="CreateContainer within sandbox \"e6c8d4d80327b4ac60388a0194064cb4ff2f05d571914490a267ba0edc3f4ee2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 01:51:52.863198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924529892.mount: Deactivated successfully. Nov 1 01:51:52.866369 containerd[1506]: time="2025-11-01T01:51:52.866306212Z" level=info msg="CreateContainer within sandbox \"e6c8d4d80327b4ac60388a0194064cb4ff2f05d571914490a267ba0edc3f4ee2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f0869cca83424ba01e69f095fb6b6d330a15a968d8de05020fe704fb4f0c4068\"" Nov 1 01:51:52.868623 containerd[1506]: time="2025-11-01T01:51:52.867517649Z" level=info msg="StartContainer for \"f0869cca83424ba01e69f095fb6b6d330a15a968d8de05020fe704fb4f0c4068\"" Nov 1 01:51:52.925338 systemd[1]: Started cri-containerd-f0869cca83424ba01e69f095fb6b6d330a15a968d8de05020fe704fb4f0c4068.scope - libcontainer container f0869cca83424ba01e69f095fb6b6d330a15a968d8de05020fe704fb4f0c4068. 
Nov 1 01:51:52.965463 containerd[1506]: time="2025-11-01T01:51:52.965409191Z" level=info msg="StartContainer for \"f0869cca83424ba01e69f095fb6b6d330a15a968d8de05020fe704fb4f0c4068\" returns successfully" Nov 1 01:51:56.425310 kubelet[2677]: I1101 01:51:56.425091 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-z9xnp" podStartSLOduration=4.661860439 podStartE2EDuration="7.42500444s" podCreationTimestamp="2025-11-01 01:51:49 +0000 UTC" firstStartedPulling="2025-11-01 01:51:50.077994708 +0000 UTC m=+6.698206375" lastFinishedPulling="2025-11-01 01:51:52.841138714 +0000 UTC m=+9.461350376" observedRunningTime="2025-11-01 01:51:53.685681636 +0000 UTC m=+10.305893323" watchObservedRunningTime="2025-11-01 01:51:56.42500444 +0000 UTC m=+13.045216121" Nov 1 01:52:00.607457 sudo[1776]: pam_unix(sudo:session): session closed for user root Nov 1 01:52:00.760556 sshd[1773]: pam_unix(sshd:session): session closed for user core Nov 1 01:52:00.770201 systemd-logind[1487]: Session 11 logged out. Waiting for processes to exit. Nov 1 01:52:00.771996 systemd[1]: sshd@8-10.230.17.2:22-147.75.109.163:35370.service: Deactivated successfully. Nov 1 01:52:00.777902 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 01:52:00.778756 systemd[1]: session-11.scope: Consumed 6.683s CPU time, 143.0M memory peak, 0B memory swap peak. Nov 1 01:52:00.780741 systemd-logind[1487]: Removed session 11. Nov 1 01:52:07.817888 systemd[1]: Created slice kubepods-besteffort-podf2d12dda_b245_4e51_afe4_fce14b4705a0.slice - libcontainer container kubepods-besteffort-podf2d12dda_b245_4e51_afe4_fce14b4705a0.slice. 
Nov 1 01:52:07.933932 kubelet[2677]: I1101 01:52:07.933681 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f2d12dda-b245-4e51-afe4-fce14b4705a0-typha-certs\") pod \"calico-typha-847f9bd77b-x8frz\" (UID: \"f2d12dda-b245-4e51-afe4-fce14b4705a0\") " pod="calico-system/calico-typha-847f9bd77b-x8frz" Nov 1 01:52:07.935634 kubelet[2677]: I1101 01:52:07.934388 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnwvg\" (UniqueName: \"kubernetes.io/projected/f2d12dda-b245-4e51-afe4-fce14b4705a0-kube-api-access-rnwvg\") pod \"calico-typha-847f9bd77b-x8frz\" (UID: \"f2d12dda-b245-4e51-afe4-fce14b4705a0\") " pod="calico-system/calico-typha-847f9bd77b-x8frz" Nov 1 01:52:07.935634 kubelet[2677]: I1101 01:52:07.935230 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2d12dda-b245-4e51-afe4-fce14b4705a0-tigera-ca-bundle\") pod \"calico-typha-847f9bd77b-x8frz\" (UID: \"f2d12dda-b245-4e51-afe4-fce14b4705a0\") " pod="calico-system/calico-typha-847f9bd77b-x8frz" Nov 1 01:52:07.972078 systemd[1]: Created slice kubepods-besteffort-podf8b8635c_bf68_4d22_8ce9_f7d0f1703a8d.slice - libcontainer container kubepods-besteffort-podf8b8635c_bf68_4d22_8ce9_f7d0f1703a8d.slice. 
Nov 1 01:52:08.036310 kubelet[2677]: I1101 01:52:08.036183 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-node-certs\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.036310 kubelet[2677]: I1101 01:52:08.036252 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-xtables-lock\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.037212 kubelet[2677]: I1101 01:52:08.036353 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-cni-log-dir\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.037212 kubelet[2677]: I1101 01:52:08.036614 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-lib-modules\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.037212 kubelet[2677]: I1101 01:52:08.036675 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-cni-net-dir\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.037212 kubelet[2677]: I1101 01:52:08.036761 2677 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-policysync\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038210 kubelet[2677]: I1101 01:52:08.037510 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdnq\" (UniqueName: \"kubernetes.io/projected/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-kube-api-access-4rdnq\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038210 kubelet[2677]: I1101 01:52:08.037574 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-cni-bin-dir\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038210 kubelet[2677]: I1101 01:52:08.037604 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-var-run-calico\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038210 kubelet[2677]: I1101 01:52:08.037746 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-flexvol-driver-host\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038210 kubelet[2677]: I1101 01:52:08.037804 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-var-lib-calico\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.038601 kubelet[2677]: I1101 01:52:08.037865 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d-tigera-ca-bundle\") pod \"calico-node-j5fgg\" (UID: \"f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d\") " pod="calico-system/calico-node-j5fgg" Nov 1 01:52:08.137603 containerd[1506]: time="2025-11-01T01:52:08.137336272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847f9bd77b-x8frz,Uid:f2d12dda-b245-4e51-afe4-fce14b4705a0,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:08.146968 kubelet[2677]: E1101 01:52:08.146564 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.146968 kubelet[2677]: W1101 01:52:08.146705 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.148479 kubelet[2677]: E1101 01:52:08.147697 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.148479 kubelet[2677]: E1101 01:52:08.147999 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.148479 kubelet[2677]: W1101 01:52:08.148033 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.148479 kubelet[2677]: E1101 01:52:08.148055 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.149603 kubelet[2677]: E1101 01:52:08.149142 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.149603 kubelet[2677]: W1101 01:52:08.149163 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.149603 kubelet[2677]: E1101 01:52:08.149188 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.150739 kubelet[2677]: E1101 01:52:08.150457 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.150739 kubelet[2677]: W1101 01:52:08.150477 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.150739 kubelet[2677]: E1101 01:52:08.150499 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.151752 kubelet[2677]: E1101 01:52:08.151395 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.151752 kubelet[2677]: W1101 01:52:08.151414 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.151752 kubelet[2677]: E1101 01:52:08.151435 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.152313 kubelet[2677]: E1101 01:52:08.152126 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.152313 kubelet[2677]: W1101 01:52:08.152146 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.152313 kubelet[2677]: E1101 01:52:08.152163 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.152986 kubelet[2677]: E1101 01:52:08.152935 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.153313 kubelet[2677]: W1101 01:52:08.153163 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.153313 kubelet[2677]: E1101 01:52:08.153200 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.154320 kubelet[2677]: E1101 01:52:08.154150 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.154320 kubelet[2677]: W1101 01:52:08.154170 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.154320 kubelet[2677]: E1101 01:52:08.154187 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.154697 kubelet[2677]: E1101 01:52:08.154676 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.154950 kubelet[2677]: W1101 01:52:08.154801 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.154950 kubelet[2677]: E1101 01:52:08.154829 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.155479 kubelet[2677]: E1101 01:52:08.155311 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.155479 kubelet[2677]: W1101 01:52:08.155330 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.155479 kubelet[2677]: E1101 01:52:08.155347 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.156143 kubelet[2677]: E1101 01:52:08.155858 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.156143 kubelet[2677]: W1101 01:52:08.155877 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.156143 kubelet[2677]: E1101 01:52:08.155894 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.156519 kubelet[2677]: E1101 01:52:08.156497 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.156650 kubelet[2677]: W1101 01:52:08.156628 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.159141 kubelet[2677]: E1101 01:52:08.159116 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.163279 kubelet[2677]: E1101 01:52:08.163232 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.163279 kubelet[2677]: W1101 01:52:08.163256 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.167408 kubelet[2677]: E1101 01:52:08.165098 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.195968 kubelet[2677]: E1101 01:52:08.195397 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:08.221045 kubelet[2677]: E1101 01:52:08.217643 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.221045 kubelet[2677]: W1101 01:52:08.217679 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.221045 kubelet[2677]: E1101 01:52:08.217711 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.239187 kubelet[2677]: E1101 01:52:08.239138 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.240631 kubelet[2677]: W1101 01:52:08.240189 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.240631 kubelet[2677]: E1101 01:52:08.240241 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.249782 kubelet[2677]: E1101 01:52:08.246558 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.249782 kubelet[2677]: W1101 01:52:08.246587 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.249782 kubelet[2677]: E1101 01:52:08.246621 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.249782 kubelet[2677]: E1101 01:52:08.249120 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.249782 kubelet[2677]: W1101 01:52:08.249138 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.249782 kubelet[2677]: E1101 01:52:08.249156 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.250810 kubelet[2677]: E1101 01:52:08.250551 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.250810 kubelet[2677]: W1101 01:52:08.250573 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.250810 kubelet[2677]: E1101 01:52:08.250626 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.254264 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.256188 kubelet[2677]: W1101 01:52:08.254291 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.254313 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.254993 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.256188 kubelet[2677]: W1101 01:52:08.255027 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.255047 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.255604 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.256188 kubelet[2677]: W1101 01:52:08.255620 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.256188 kubelet[2677]: E1101 01:52:08.255781 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.258141 kubelet[2677]: E1101 01:52:08.258119 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.258380 kubelet[2677]: W1101 01:52:08.258244 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.258380 kubelet[2677]: E1101 01:52:08.258271 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.261182 kubelet[2677]: E1101 01:52:08.261159 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.261325 kubelet[2677]: W1101 01:52:08.261302 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.261448 kubelet[2677]: E1101 01:52:08.261426 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.262035 kubelet[2677]: E1101 01:52:08.261896 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.262035 kubelet[2677]: W1101 01:52:08.261933 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.262035 kubelet[2677]: E1101 01:52:08.261954 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.262700 kubelet[2677]: E1101 01:52:08.262536 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.262700 kubelet[2677]: W1101 01:52:08.262555 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.262700 kubelet[2677]: E1101 01:52:08.262576 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.264055 kubelet[2677]: E1101 01:52:08.263321 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.264055 kubelet[2677]: W1101 01:52:08.263341 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.264055 kubelet[2677]: E1101 01:52:08.263358 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.264704 kubelet[2677]: E1101 01:52:08.264547 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.264704 kubelet[2677]: W1101 01:52:08.264570 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.264704 kubelet[2677]: E1101 01:52:08.264587 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.266440 kubelet[2677]: E1101 01:52:08.266287 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.266440 kubelet[2677]: W1101 01:52:08.266307 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.266440 kubelet[2677]: E1101 01:52:08.266325 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.267057 kubelet[2677]: E1101 01:52:08.266825 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.267057 kubelet[2677]: W1101 01:52:08.266844 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.267057 kubelet[2677]: E1101 01:52:08.266862 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.267883 kubelet[2677]: E1101 01:52:08.267597 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.267883 kubelet[2677]: W1101 01:52:08.267618 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.267883 kubelet[2677]: E1101 01:52:08.267635 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.268964 kubelet[2677]: E1101 01:52:08.268569 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.268964 kubelet[2677]: W1101 01:52:08.268588 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.268964 kubelet[2677]: E1101 01:52:08.268605 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.271244 kubelet[2677]: E1101 01:52:08.271137 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.271244 kubelet[2677]: W1101 01:52:08.271160 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.271244 kubelet[2677]: E1101 01:52:08.271178 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.272725 kubelet[2677]: E1101 01:52:08.272356 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.272725 kubelet[2677]: W1101 01:52:08.272377 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.272725 kubelet[2677]: E1101 01:52:08.272501 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.274049 kubelet[2677]: E1101 01:52:08.273267 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.274049 kubelet[2677]: W1101 01:52:08.273287 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.274049 kubelet[2677]: E1101 01:52:08.273304 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.275558 kubelet[2677]: E1101 01:52:08.274521 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.275832 kubelet[2677]: W1101 01:52:08.275665 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.275832 kubelet[2677]: E1101 01:52:08.275696 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.275832 kubelet[2677]: I1101 01:52:08.275743 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ec724e45-3797-40ba-a9db-970952094e39-varrun\") pod \"csi-node-driver-f685l\" (UID: \"ec724e45-3797-40ba-a9db-970952094e39\") " pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:08.276634 kubelet[2677]: E1101 01:52:08.276468 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.276634 kubelet[2677]: W1101 01:52:08.276491 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.276634 kubelet[2677]: E1101 01:52:08.276518 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.276634 kubelet[2677]: I1101 01:52:08.276551 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vbx2\" (UniqueName: \"kubernetes.io/projected/ec724e45-3797-40ba-a9db-970952094e39-kube-api-access-2vbx2\") pod \"csi-node-driver-f685l\" (UID: \"ec724e45-3797-40ba-a9db-970952094e39\") " pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:08.278217 kubelet[2677]: E1101 01:52:08.277718 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.278217 kubelet[2677]: W1101 01:52:08.277740 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.278217 kubelet[2677]: E1101 01:52:08.277769 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.278217 kubelet[2677]: I1101 01:52:08.277795 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ec724e45-3797-40ba-a9db-970952094e39-socket-dir\") pod \"csi-node-driver-f685l\" (UID: \"ec724e45-3797-40ba-a9db-970952094e39\") " pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:08.279844 kubelet[2677]: E1101 01:52:08.279674 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.279844 kubelet[2677]: W1101 01:52:08.279718 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.279975 kubelet[2677]: E1101 01:52:08.279860 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.279975 kubelet[2677]: I1101 01:52:08.279904 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ec724e45-3797-40ba-a9db-970952094e39-registration-dir\") pod \"csi-node-driver-f685l\" (UID: \"ec724e45-3797-40ba-a9db-970952094e39\") " pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:08.284280 containerd[1506]: time="2025-11-01T01:52:08.282208941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5fgg,Uid:f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:08.285886 kubelet[2677]: E1101 01:52:08.284887 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.285886 kubelet[2677]: W1101 01:52:08.284934 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.285886 kubelet[2677]: E1101 01:52:08.285171 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.287270 kubelet[2677]: E1101 01:52:08.286227 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.287270 kubelet[2677]: W1101 01:52:08.286247 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.287270 kubelet[2677]: E1101 01:52:08.286468 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.287270 kubelet[2677]: E1101 01:52:08.287056 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.287270 kubelet[2677]: W1101 01:52:08.287083 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.289036 kubelet[2677]: E1101 01:52:08.287364 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.289036 kubelet[2677]: E1101 01:52:08.288793 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.289036 kubelet[2677]: W1101 01:52:08.288813 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.289195 containerd[1506]: time="2025-11-01T01:52:08.288600508Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:08.289357 kubelet[2677]: E1101 01:52:08.289269 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.289357 kubelet[2677]: I1101 01:52:08.289323 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ec724e45-3797-40ba-a9db-970952094e39-kubelet-dir\") pod \"csi-node-driver-f685l\" (UID: \"ec724e45-3797-40ba-a9db-970952094e39\") " pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:08.290451 kubelet[2677]: E1101 01:52:08.290239 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.290451 kubelet[2677]: W1101 01:52:08.290446 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.291070 kubelet[2677]: E1101 01:52:08.291026 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.291365 kubelet[2677]: E1101 01:52:08.291337 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.291365 kubelet[2677]: W1101 01:52:08.291359 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.291491 kubelet[2677]: E1101 01:52:08.291377 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.292048 containerd[1506]: time="2025-11-01T01:52:08.291531002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:08.292884 kubelet[2677]: E1101 01:52:08.292654 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.292884 kubelet[2677]: W1101 01:52:08.292670 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.292884 kubelet[2677]: E1101 01:52:08.292696 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.293911 containerd[1506]: time="2025-11-01T01:52:08.293156506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:08.293911 containerd[1506]: time="2025-11-01T01:52:08.293365636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:08.294128 kubelet[2677]: E1101 01:52:08.294106 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.295276 kubelet[2677]: W1101 01:52:08.294215 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.295276 kubelet[2677]: E1101 01:52:08.294276 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.295896 kubelet[2677]: E1101 01:52:08.295677 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.297071 kubelet[2677]: W1101 01:52:08.297042 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.297338 kubelet[2677]: E1101 01:52:08.297183 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.297885 kubelet[2677]: E1101 01:52:08.297606 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.298534 kubelet[2677]: W1101 01:52:08.298174 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.298534 kubelet[2677]: E1101 01:52:08.298204 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.300169 kubelet[2677]: E1101 01:52:08.299185 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.300169 kubelet[2677]: W1101 01:52:08.299868 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.300169 kubelet[2677]: E1101 01:52:08.299892 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.394618 kubelet[2677]: E1101 01:52:08.393840 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.394618 kubelet[2677]: W1101 01:52:08.393879 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.416042 containerd[1506]: time="2025-11-01T01:52:08.411703256Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:08.416042 containerd[1506]: time="2025-11-01T01:52:08.411859521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:08.416042 containerd[1506]: time="2025-11-01T01:52:08.411904741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:08.416454 kubelet[2677]: E1101 01:52:08.415860 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.418710 kubelet[2677]: E1101 01:52:08.417438 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.418710 kubelet[2677]: W1101 01:52:08.417465 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.418710 kubelet[2677]: E1101 01:52:08.417505 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.418983 kubelet[2677]: E1101 01:52:08.418920 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.418983 kubelet[2677]: W1101 01:52:08.418977 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.419390 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.422893 kubelet[2677]: W1101 01:52:08.419411 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.419572 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.419823 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.419908 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.422893 kubelet[2677]: W1101 01:52:08.419935 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.420146 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.420409 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.422893 kubelet[2677]: W1101 01:52:08.420424 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.422893 kubelet[2677]: E1101 01:52:08.420447 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.420893 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.424985 kubelet[2677]: W1101 01:52:08.420908 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.420954 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.421400 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.424985 kubelet[2677]: W1101 01:52:08.421414 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.421451 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.421842 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.424985 kubelet[2677]: W1101 01:52:08.421863 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.421969 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.424985 kubelet[2677]: E1101 01:52:08.422446 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.430528 kubelet[2677]: W1101 01:52:08.422462 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.422615 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.422774 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.430528 kubelet[2677]: W1101 01:52:08.422791 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.422951 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.423218 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.430528 kubelet[2677]: W1101 01:52:08.423232 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.423348 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.430528 kubelet[2677]: E1101 01:52:08.423702 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.430528 kubelet[2677]: W1101 01:52:08.423717 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.423791 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.424189 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.433494 kubelet[2677]: W1101 01:52:08.424203 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.424401 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.424578 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.433494 kubelet[2677]: W1101 01:52:08.424632 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.424727 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.425067 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.433494 kubelet[2677]: W1101 01:52:08.425082 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.433494 kubelet[2677]: E1101 01:52:08.425244 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.437112 kubelet[2677]: E1101 01:52:08.425500 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.437112 kubelet[2677]: W1101 01:52:08.425530 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.437112 kubelet[2677]: E1101 01:52:08.426236 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.437112 kubelet[2677]: E1101 01:52:08.433944 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.437112 kubelet[2677]: W1101 01:52:08.433967 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.437112 kubelet[2677]: E1101 01:52:08.434481 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.437112 kubelet[2677]: E1101 01:52:08.436508 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.437112 kubelet[2677]: W1101 01:52:08.436530 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.437941 kubelet[2677]: E1101 01:52:08.437569 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.438509 systemd[1]: Started cri-containerd-c266ee41a0bc3bd1b3ce6eb66483e89767dcf3042d22b7d85bc2d323b1af0040.scope - libcontainer container c266ee41a0bc3bd1b3ce6eb66483e89767dcf3042d22b7d85bc2d323b1af0040. Nov 1 01:52:08.439672 kubelet[2677]: E1101 01:52:08.438914 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.439672 kubelet[2677]: W1101 01:52:08.439039 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.439672 kubelet[2677]: E1101 01:52:08.439660 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.441095 kubelet[2677]: E1101 01:52:08.440563 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.441095 kubelet[2677]: W1101 01:52:08.440597 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.441095 kubelet[2677]: E1101 01:52:08.440988 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 01:52:08.441388 kubelet[2677]: E1101 01:52:08.441336 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.441388 kubelet[2677]: W1101 01:52:08.441352 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.441493 kubelet[2677]: E1101 01:52:08.441447 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.445702 containerd[1506]: time="2025-11-01T01:52:08.445621280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:08.509290 systemd[1]: Started cri-containerd-90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc.scope - libcontainer container 90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc. Nov 1 01:52:08.529430 kubelet[2677]: E1101 01:52:08.529341 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:08.529430 kubelet[2677]: W1101 01:52:08.529411 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:08.529758 kubelet[2677]: E1101 01:52:08.529485 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:08.628592 containerd[1506]: time="2025-11-01T01:52:08.628485824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847f9bd77b-x8frz,Uid:f2d12dda-b245-4e51-afe4-fce14b4705a0,Namespace:calico-system,Attempt:0,} returns sandbox id \"c266ee41a0bc3bd1b3ce6eb66483e89767dcf3042d22b7d85bc2d323b1af0040\"" Nov 1 01:52:08.679483 containerd[1506]: time="2025-11-01T01:52:08.679299088Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 01:52:08.734048 containerd[1506]: time="2025-11-01T01:52:08.732995354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-j5fgg,Uid:f8b8635c-bf68-4d22-8ce9-f7d0f1703a8d,Namespace:calico-system,Attempt:0,} returns sandbox id \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\"" Nov 1 01:52:09.615498 kubelet[2677]: E1101 01:52:09.615393 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:10.243168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3505829473.mount: Deactivated successfully. 
Nov 1 01:52:11.604745 kubelet[2677]: E1101 01:52:11.604321 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:12.526884 containerd[1506]: time="2025-11-01T01:52:12.526176002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:12.528836 containerd[1506]: time="2025-11-01T01:52:12.528517859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 01:52:12.531045 containerd[1506]: time="2025-11-01T01:52:12.529651630Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:12.533862 containerd[1506]: time="2025-11-01T01:52:12.533804057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:12.534828 containerd[1506]: time="2025-11-01T01:52:12.534781396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.855408855s" Nov 1 01:52:12.534944 containerd[1506]: time="2025-11-01T01:52:12.534831259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 01:52:12.540124 containerd[1506]: time="2025-11-01T01:52:12.537937980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 01:52:12.582384 containerd[1506]: time="2025-11-01T01:52:12.582316506Z" level=info msg="CreateContainer within sandbox \"c266ee41a0bc3bd1b3ce6eb66483e89767dcf3042d22b7d85bc2d323b1af0040\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 01:52:12.607740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216328978.mount: Deactivated successfully. Nov 1 01:52:12.611591 containerd[1506]: time="2025-11-01T01:52:12.609993091Z" level=info msg="CreateContainer within sandbox \"c266ee41a0bc3bd1b3ce6eb66483e89767dcf3042d22b7d85bc2d323b1af0040\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"25661a362052fbadc94871a9cf17bf87dd43ee4fe3a6c193a564e0280c1b6cac\"" Nov 1 01:52:12.615678 containerd[1506]: time="2025-11-01T01:52:12.615419246Z" level=info msg="StartContainer for \"25661a362052fbadc94871a9cf17bf87dd43ee4fe3a6c193a564e0280c1b6cac\"" Nov 1 01:52:12.673419 systemd[1]: Started cri-containerd-25661a362052fbadc94871a9cf17bf87dd43ee4fe3a6c193a564e0280c1b6cac.scope - libcontainer container 25661a362052fbadc94871a9cf17bf87dd43ee4fe3a6c193a564e0280c1b6cac. 
Nov 1 01:52:12.765174 containerd[1506]: time="2025-11-01T01:52:12.764747299Z" level=info msg="StartContainer for \"25661a362052fbadc94871a9cf17bf87dd43ee4fe3a6c193a564e0280c1b6cac\" returns successfully" Nov 1 01:52:13.604556 kubelet[2677]: E1101 01:52:13.604230 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:13.813071 kubelet[2677]: I1101 01:52:13.812465 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847f9bd77b-x8frz" podStartSLOduration=2.9492604 podStartE2EDuration="6.812419262s" podCreationTimestamp="2025-11-01 01:52:07 +0000 UTC" firstStartedPulling="2025-11-01 01:52:08.674353129 +0000 UTC m=+25.294564796" lastFinishedPulling="2025-11-01 01:52:12.537511994 +0000 UTC m=+29.157723658" observedRunningTime="2025-11-01 01:52:13.811645163 +0000 UTC m=+30.431856854" watchObservedRunningTime="2025-11-01 01:52:13.812419262 +0000 UTC m=+30.432630938" Nov 1 01:52:13.828878 kubelet[2677]: E1101 01:52:13.828800 2677 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 01:52:13.829731 kubelet[2677]: W1101 01:52:13.829215 2677 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 01:52:13.829731 kubelet[2677]: E1101 01:52:13.829529 2677 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 01:52:14.324060 containerd[1506]: time="2025-11-01T01:52:14.322738442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:14.325157 containerd[1506]: time="2025-11-01T01:52:14.324590867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 01:52:14.325239 containerd[1506]: time="2025-11-01T01:52:14.325182317Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:14.329268 containerd[1506]: time="2025-11-01T01:52:14.329117375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:14.331586 containerd[1506]: time="2025-11-01T01:52:14.330286614Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.791654362s" Nov 1 01:52:14.331586 containerd[1506]: time="2025-11-01T01:52:14.330361606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 01:52:14.335817 containerd[1506]: time="2025-11-01T01:52:14.335752074Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 01:52:14.364363 containerd[1506]: time="2025-11-01T01:52:14.364197542Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831\"" Nov 1 01:52:14.366058 containerd[1506]: time="2025-11-01T01:52:14.365736415Z" level=info msg="StartContainer for \"ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831\"" Nov 1 01:52:14.474340 systemd[1]: Started cri-containerd-ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831.scope - libcontainer container ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831. Nov 1 01:52:14.534314 containerd[1506]: time="2025-11-01T01:52:14.534239836Z" level=info msg="StartContainer for \"ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831\" returns successfully" Nov 1 01:52:14.560842 systemd[1]: cri-containerd-ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831.scope: Deactivated successfully. Nov 1 01:52:14.640024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831-rootfs.mount: Deactivated successfully. 
Nov 1 01:52:14.684392 containerd[1506]: time="2025-11-01T01:52:14.658351163Z" level=info msg="shim disconnected" id=ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831 namespace=k8s.io Nov 1 01:52:14.684392 containerd[1506]: time="2025-11-01T01:52:14.684120308Z" level=warning msg="cleaning up after shim disconnected" id=ca06abb7536a8e403d637b6feda4d1efd74b5e95ce23942fa425bd75844e0831 namespace=k8s.io Nov 1 01:52:14.684392 containerd[1506]: time="2025-11-01T01:52:14.684153800Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:52:14.770715 containerd[1506]: time="2025-11-01T01:52:14.770005242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 01:52:15.606145 kubelet[2677]: E1101 01:52:15.605932 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:17.604305 kubelet[2677]: E1101 01:52:17.603331 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:19.611116 kubelet[2677]: E1101 01:52:19.610309 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:20.113842 containerd[1506]: time="2025-11-01T01:52:20.113718546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:20.115464 containerd[1506]: time="2025-11-01T01:52:20.115191151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 01:52:20.116667 containerd[1506]: time="2025-11-01T01:52:20.116240598Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:20.126219 containerd[1506]: time="2025-11-01T01:52:20.126175122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:20.128028 containerd[1506]: time="2025-11-01T01:52:20.127959802Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.357874553s" Nov 1 01:52:20.128119 containerd[1506]: time="2025-11-01T01:52:20.128044763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 01:52:20.135209 containerd[1506]: time="2025-11-01T01:52:20.135149652Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 01:52:20.157379 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2199402905.mount: Deactivated successfully. 
Nov 1 01:52:20.159212 containerd[1506]: time="2025-11-01T01:52:20.159165287Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e\"" Nov 1 01:52:20.161076 containerd[1506]: time="2025-11-01T01:52:20.160949945Z" level=info msg="StartContainer for \"b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e\"" Nov 1 01:52:20.227794 systemd[1]: run-containerd-runc-k8s.io-b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e-runc.x6Bort.mount: Deactivated successfully. Nov 1 01:52:20.242263 systemd[1]: Started cri-containerd-b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e.scope - libcontainer container b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e. Nov 1 01:52:20.295778 containerd[1506]: time="2025-11-01T01:52:20.295551852Z" level=info msg="StartContainer for \"b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e\" returns successfully" Nov 1 01:52:21.347542 systemd[1]: cri-containerd-b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e.scope: Deactivated successfully. Nov 1 01:52:21.399510 kubelet[2677]: I1101 01:52:21.397592 2677 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 01:52:21.455934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e-rootfs.mount: Deactivated successfully. 
Nov 1 01:52:21.557658 containerd[1506]: time="2025-11-01T01:52:21.557489829Z" level=info msg="shim disconnected" id=b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e namespace=k8s.io Nov 1 01:52:21.557658 containerd[1506]: time="2025-11-01T01:52:21.557623633Z" level=warning msg="cleaning up after shim disconnected" id=b2f560a6a9be76f9d8646123613943ac6dce2076f3ecca414596cc2c908f071e namespace=k8s.io Nov 1 01:52:21.558639 containerd[1506]: time="2025-11-01T01:52:21.558209392Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 01:52:21.594159 systemd[1]: Created slice kubepods-burstable-poddea2f1d9_48f1_44e9_bd05_794af9e0edad.slice - libcontainer container kubepods-burstable-poddea2f1d9_48f1_44e9_bd05_794af9e0edad.slice. Nov 1 01:52:21.622138 containerd[1506]: time="2025-11-01T01:52:21.621489240Z" level=warning msg="cleanup warnings time=\"2025-11-01T01:52:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 01:52:21.630744 systemd[1]: Created slice kubepods-besteffort-pod0303fd48_19b2_41d1_991c_312dc81409eb.slice - libcontainer container kubepods-besteffort-pod0303fd48_19b2_41d1_991c_312dc81409eb.slice. 
Nov 1 01:52:21.635443 kubelet[2677]: I1101 01:52:21.634390 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxth2\" (UniqueName: \"kubernetes.io/projected/0303fd48-19b2-41d1-991c-312dc81409eb-kube-api-access-dxth2\") pod \"calico-kube-controllers-7dfccfdf99-qcr45\" (UID: \"0303fd48-19b2-41d1-991c-312dc81409eb\") " pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" Nov 1 01:52:21.635443 kubelet[2677]: I1101 01:52:21.634466 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95sz\" (UniqueName: \"kubernetes.io/projected/dea2f1d9-48f1-44e9-bd05-794af9e0edad-kube-api-access-v95sz\") pod \"coredns-668d6bf9bc-zgzhk\" (UID: \"dea2f1d9-48f1-44e9-bd05-794af9e0edad\") " pod="kube-system/coredns-668d6bf9bc-zgzhk" Nov 1 01:52:21.635443 kubelet[2677]: I1101 01:52:21.634516 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dea2f1d9-48f1-44e9-bd05-794af9e0edad-config-volume\") pod \"coredns-668d6bf9bc-zgzhk\" (UID: \"dea2f1d9-48f1-44e9-bd05-794af9e0edad\") " pod="kube-system/coredns-668d6bf9bc-zgzhk" Nov 1 01:52:21.635443 kubelet[2677]: I1101 01:52:21.634555 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0303fd48-19b2-41d1-991c-312dc81409eb-tigera-ca-bundle\") pod \"calico-kube-controllers-7dfccfdf99-qcr45\" (UID: \"0303fd48-19b2-41d1-991c-312dc81409eb\") " pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" Nov 1 01:52:21.647870 systemd[1]: Created slice kubepods-besteffort-podec724e45_3797_40ba_a9db_970952094e39.slice - libcontainer container kubepods-besteffort-podec724e45_3797_40ba_a9db_970952094e39.slice. 
Nov 1 01:52:21.656066 containerd[1506]: time="2025-11-01T01:52:21.656002522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f685l,Uid:ec724e45-3797-40ba-a9db-970952094e39,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:21.662540 systemd[1]: Created slice kubepods-besteffort-pod692e4d02_4b9e_43c3_8a3c_87f80adc9cda.slice - libcontainer container kubepods-besteffort-pod692e4d02_4b9e_43c3_8a3c_87f80adc9cda.slice. Nov 1 01:52:21.714064 systemd[1]: Created slice kubepods-burstable-pod5d3e007c_5fa3_444d_bda8_4fe6a895dd94.slice - libcontainer container kubepods-burstable-pod5d3e007c_5fa3_444d_bda8_4fe6a895dd94.slice. Nov 1 01:52:21.735458 kubelet[2677]: I1101 01:52:21.735302 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsz8f\" (UniqueName: \"kubernetes.io/projected/692e4d02-4b9e-43c3-8a3c-87f80adc9cda-kube-api-access-rsz8f\") pod \"goldmane-666569f655-2z2fz\" (UID: \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\") " pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:21.735458 kubelet[2677]: I1101 01:52:21.735382 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d47l7\" (UniqueName: \"kubernetes.io/projected/961d53cb-00c8-4e88-869d-034281366b6b-kube-api-access-d47l7\") pod \"calico-apiserver-54865fd995-mzf2r\" (UID: \"961d53cb-00c8-4e88-869d-034281366b6b\") " pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" Nov 1 01:52:21.735458 kubelet[2677]: I1101 01:52:21.735418 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwp9j\" (UniqueName: \"kubernetes.io/projected/f54af395-651d-45bc-acec-8a87e82ec93b-kube-api-access-kwp9j\") pod \"calico-apiserver-54865fd995-lqdk2\" (UID: \"f54af395-651d-45bc-acec-8a87e82ec93b\") " pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" Nov 1 01:52:21.735458 kubelet[2677]: I1101 01:52:21.735446 2677 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d3e007c-5fa3-444d-bda8-4fe6a895dd94-config-volume\") pod \"coredns-668d6bf9bc-qvbkg\" (UID: \"5d3e007c-5fa3-444d-bda8-4fe6a895dd94\") " pod="kube-system/coredns-668d6bf9bc-qvbkg" Nov 1 01:52:21.736246 kubelet[2677]: I1101 01:52:21.735477 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/692e4d02-4b9e-43c3-8a3c-87f80adc9cda-config\") pod \"goldmane-666569f655-2z2fz\" (UID: \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\") " pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:21.736246 kubelet[2677]: I1101 01:52:21.735537 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/961d53cb-00c8-4e88-869d-034281366b6b-calico-apiserver-certs\") pod \"calico-apiserver-54865fd995-mzf2r\" (UID: \"961d53cb-00c8-4e88-869d-034281366b6b\") " pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" Nov 1 01:52:21.736246 kubelet[2677]: I1101 01:52:21.735570 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/692e4d02-4b9e-43c3-8a3c-87f80adc9cda-goldmane-key-pair\") pod \"goldmane-666569f655-2z2fz\" (UID: \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\") " pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:21.736246 kubelet[2677]: I1101 01:52:21.735648 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b727ee-f1ce-4561-a94e-37319924f336-whisker-ca-bundle\") pod \"whisker-6c74bc7d7-w87m8\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") " pod="calico-system/whisker-6c74bc7d7-w87m8" Nov 1 01:52:21.736246 kubelet[2677]: 
I1101 01:52:21.735682 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f54af395-651d-45bc-acec-8a87e82ec93b-calico-apiserver-certs\") pod \"calico-apiserver-54865fd995-lqdk2\" (UID: \"f54af395-651d-45bc-acec-8a87e82ec93b\") " pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" Nov 1 01:52:21.736518 kubelet[2677]: I1101 01:52:21.735745 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gtvn\" (UniqueName: \"kubernetes.io/projected/5d3e007c-5fa3-444d-bda8-4fe6a895dd94-kube-api-access-6gtvn\") pod \"coredns-668d6bf9bc-qvbkg\" (UID: \"5d3e007c-5fa3-444d-bda8-4fe6a895dd94\") " pod="kube-system/coredns-668d6bf9bc-qvbkg" Nov 1 01:52:21.736518 kubelet[2677]: I1101 01:52:21.735776 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr22s\" (UniqueName: \"kubernetes.io/projected/31b727ee-f1ce-4561-a94e-37319924f336-kube-api-access-vr22s\") pod \"whisker-6c74bc7d7-w87m8\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") " pod="calico-system/whisker-6c74bc7d7-w87m8" Nov 1 01:52:21.736518 kubelet[2677]: I1101 01:52:21.735803 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/692e4d02-4b9e-43c3-8a3c-87f80adc9cda-goldmane-ca-bundle\") pod \"goldmane-666569f655-2z2fz\" (UID: \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\") " pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:21.736518 kubelet[2677]: I1101 01:52:21.735861 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31b727ee-f1ce-4561-a94e-37319924f336-whisker-backend-key-pair\") pod \"whisker-6c74bc7d7-w87m8\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") 
" pod="calico-system/whisker-6c74bc7d7-w87m8" Nov 1 01:52:21.747637 systemd[1]: Created slice kubepods-besteffort-pod31b727ee_f1ce_4561_a94e_37319924f336.slice - libcontainer container kubepods-besteffort-pod31b727ee_f1ce_4561_a94e_37319924f336.slice. Nov 1 01:52:21.783869 systemd[1]: Created slice kubepods-besteffort-pod961d53cb_00c8_4e88_869d_034281366b6b.slice - libcontainer container kubepods-besteffort-pod961d53cb_00c8_4e88_869d_034281366b6b.slice. Nov 1 01:52:21.807957 systemd[1]: Created slice kubepods-besteffort-podf54af395_651d_45bc_acec_8a87e82ec93b.slice - libcontainer container kubepods-besteffort-podf54af395_651d_45bc_acec_8a87e82ec93b.slice. Nov 1 01:52:21.908274 containerd[1506]: time="2025-11-01T01:52:21.907260793Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 01:52:21.934041 containerd[1506]: time="2025-11-01T01:52:21.930533026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgzhk,Uid:dea2f1d9-48f1-44e9-bd05-794af9e0edad,Namespace:kube-system,Attempt:0,}" Nov 1 01:52:21.958323 containerd[1506]: time="2025-11-01T01:52:21.957223776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfccfdf99-qcr45,Uid:0303fd48-19b2-41d1-991c-312dc81409eb,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:21.987867 containerd[1506]: time="2025-11-01T01:52:21.987788776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2z2fz,Uid:692e4d02-4b9e-43c3-8a3c-87f80adc9cda,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:22.040327 containerd[1506]: time="2025-11-01T01:52:22.040271311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvbkg,Uid:5d3e007c-5fa3-444d-bda8-4fe6a895dd94,Namespace:kube-system,Attempt:0,}" Nov 1 01:52:22.086334 containerd[1506]: time="2025-11-01T01:52:22.086283104Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c74bc7d7-w87m8,Uid:31b727ee-f1ce-4561-a94e-37319924f336,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:22.104629 containerd[1506]: time="2025-11-01T01:52:22.104576961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-mzf2r,Uid:961d53cb-00c8-4e88-869d-034281366b6b,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:52:22.131636 containerd[1506]: time="2025-11-01T01:52:22.131578373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-lqdk2,Uid:f54af395-651d-45bc-acec-8a87e82ec93b,Namespace:calico-apiserver,Attempt:0,}" Nov 1 01:52:22.426688 containerd[1506]: time="2025-11-01T01:52:22.426603740Z" level=error msg="Failed to destroy network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.443317 containerd[1506]: time="2025-11-01T01:52:22.443202008Z" level=error msg="encountered an error cleaning up failed sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.443505 containerd[1506]: time="2025-11-01T01:52:22.443405425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f685l,Uid:ec724e45-3797-40ba-a9db-970952094e39,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 01:52:22.447110 kubelet[2677]: E1101 01:52:22.444667 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.447110 kubelet[2677]: E1101 01:52:22.444820 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:22.447110 kubelet[2677]: E1101 01:52:22.444876 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f685l" Nov 1 01:52:22.449225 kubelet[2677]: E1101 01:52:22.444983 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:22.484336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed-shm.mount: Deactivated successfully. Nov 1 01:52:22.515042 containerd[1506]: time="2025-11-01T01:52:22.510580090Z" level=error msg="Failed to destroy network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.513821 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab-shm.mount: Deactivated successfully. Nov 1 01:52:22.516035 containerd[1506]: time="2025-11-01T01:52:22.515834667Z" level=error msg="encountered an error cleaning up failed sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.516035 containerd[1506]: time="2025-11-01T01:52:22.515914894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c74bc7d7-w87m8,Uid:31b727ee-f1ce-4561-a94e-37319924f336,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 
01:52:22.519071 containerd[1506]: time="2025-11-01T01:52:22.516159791Z" level=error msg="Failed to destroy network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.519472 containerd[1506]: time="2025-11-01T01:52:22.519428401Z" level=error msg="encountered an error cleaning up failed sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.519560 containerd[1506]: time="2025-11-01T01:52:22.519517868Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfccfdf99-qcr45,Uid:0303fd48-19b2-41d1-991c-312dc81409eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.519710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6-shm.mount: Deactivated successfully. 
Nov 1 01:52:22.522268 kubelet[2677]: E1101 01:52:22.520728 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.522268 kubelet[2677]: E1101 01:52:22.520831 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" Nov 1 01:52:22.522268 kubelet[2677]: E1101 01:52:22.520878 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" Nov 1 01:52:22.524256 kubelet[2677]: E1101 01:52:22.520941 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:22.524256 kubelet[2677]: E1101 01:52:22.522910 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.524256 kubelet[2677]: E1101 01:52:22.522999 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c74bc7d7-w87m8" Nov 1 01:52:22.524465 kubelet[2677]: E1101 01:52:22.523087 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c74bc7d7-w87m8" Nov 1 01:52:22.524465 kubelet[2677]: E1101 01:52:22.523254 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"whisker-6c74bc7d7-w87m8_calico-system(31b727ee-f1ce-4561-a94e-37319924f336)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c74bc7d7-w87m8_calico-system(31b727ee-f1ce-4561-a94e-37319924f336)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c74bc7d7-w87m8" podUID="31b727ee-f1ce-4561-a94e-37319924f336" Nov 1 01:52:22.537553 containerd[1506]: time="2025-11-01T01:52:22.537360718Z" level=error msg="Failed to destroy network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.541847 containerd[1506]: time="2025-11-01T01:52:22.539709055Z" level=error msg="encountered an error cleaning up failed sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.541686 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd-shm.mount: Deactivated successfully. 
Nov 1 01:52:22.543145 containerd[1506]: time="2025-11-01T01:52:22.543102805Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2z2fz,Uid:692e4d02-4b9e-43c3-8a3c-87f80adc9cda,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.547075 kubelet[2677]: E1101 01:52:22.546079 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.547185 kubelet[2677]: E1101 01:52:22.547108 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:22.547185 kubelet[2677]: E1101 01:52:22.547150 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-2z2fz" Nov 1 01:52:22.548444 
kubelet[2677]: E1101 01:52:22.547223 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:52:22.554548 containerd[1506]: time="2025-11-01T01:52:22.554222478Z" level=error msg="Failed to destroy network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.555718 containerd[1506]: time="2025-11-01T01:52:22.555459958Z" level=error msg="encountered an error cleaning up failed sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.558959 containerd[1506]: time="2025-11-01T01:52:22.558156298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgzhk,Uid:dea2f1d9-48f1-44e9-bd05-794af9e0edad,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.558586 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c-shm.mount: Deactivated successfully. Nov 1 01:52:22.560048 kubelet[2677]: E1101 01:52:22.559979 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.560219 kubelet[2677]: E1101 01:52:22.560163 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zgzhk" Nov 1 01:52:22.560376 kubelet[2677]: E1101 01:52:22.560214 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-zgzhk" Nov 1 01:52:22.560376 kubelet[2677]: E1101 01:52:22.560292 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-zgzhk_kube-system(dea2f1d9-48f1-44e9-bd05-794af9e0edad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-zgzhk_kube-system(dea2f1d9-48f1-44e9-bd05-794af9e0edad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zgzhk" podUID="dea2f1d9-48f1-44e9-bd05-794af9e0edad" Nov 1 01:52:22.605443 containerd[1506]: time="2025-11-01T01:52:22.605269752Z" level=error msg="Failed to destroy network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.607514 containerd[1506]: time="2025-11-01T01:52:22.607461737Z" level=error msg="encountered an error cleaning up failed sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.607676 containerd[1506]: time="2025-11-01T01:52:22.607624458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvbkg,Uid:5d3e007c-5fa3-444d-bda8-4fe6a895dd94,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 01:52:22.611073 kubelet[2677]: E1101 01:52:22.610981 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.611199 kubelet[2677]: E1101 01:52:22.611121 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qvbkg" Nov 1 01:52:22.611199 kubelet[2677]: E1101 01:52:22.611172 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qvbkg" Nov 1 01:52:22.612642 kubelet[2677]: E1101 01:52:22.611247 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-qvbkg_kube-system(5d3e007c-5fa3-444d-bda8-4fe6a895dd94)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qvbkg_kube-system(5d3e007c-5fa3-444d-bda8-4fe6a895dd94)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qvbkg" podUID="5d3e007c-5fa3-444d-bda8-4fe6a895dd94" Nov 1 01:52:22.642267 containerd[1506]: time="2025-11-01T01:52:22.642178337Z" level=error msg="Failed to destroy network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.642775 containerd[1506]: time="2025-11-01T01:52:22.642727472Z" level=error msg="encountered an error cleaning up failed sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.642872 containerd[1506]: time="2025-11-01T01:52:22.642804614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-lqdk2,Uid:f54af395-651d-45bc-acec-8a87e82ec93b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.643236 kubelet[2677]: E1101 01:52:22.643169 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.643335 kubelet[2677]: E1101 01:52:22.643274 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" Nov 1 01:52:22.643335 kubelet[2677]: E1101 01:52:22.643309 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" Nov 1 01:52:22.643448 kubelet[2677]: E1101 01:52:22.643377 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:52:22.668541 containerd[1506]: time="2025-11-01T01:52:22.667187071Z" 
level=error msg="Failed to destroy network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.668541 containerd[1506]: time="2025-11-01T01:52:22.667729397Z" level=error msg="encountered an error cleaning up failed sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.668541 containerd[1506]: time="2025-11-01T01:52:22.667796586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-mzf2r,Uid:961d53cb-00c8-4e88-869d-034281366b6b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.669328 kubelet[2677]: E1101 01:52:22.668204 2677 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:22.669328 kubelet[2677]: E1101 01:52:22.668302 2677 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" Nov 1 01:52:22.669328 kubelet[2677]: E1101 01:52:22.668337 2677 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" Nov 1 01:52:22.669525 kubelet[2677]: E1101 01:52:22.668403 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:22.903497 kubelet[2677]: I1101 01:52:22.903431 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:22.909525 kubelet[2677]: I1101 01:52:22.907508 2677 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:22.928270 kubelet[2677]: I1101 01:52:22.928212 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:22.934219 kubelet[2677]: I1101 01:52:22.934134 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:22.939310 kubelet[2677]: I1101 01:52:22.938867 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:22.943141 kubelet[2677]: I1101 01:52:22.942755 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:22.950929 containerd[1506]: time="2025-11-01T01:52:22.949785705Z" level=info msg="StopPodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\"" Nov 1 01:52:22.952316 containerd[1506]: time="2025-11-01T01:52:22.950693377Z" level=info msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\"" Nov 1 01:52:22.953955 containerd[1506]: time="2025-11-01T01:52:22.953885695Z" level=info msg="Ensure that sandbox c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6 in task-service has been cleanup successfully" Nov 1 01:52:22.953955 containerd[1506]: time="2025-11-01T01:52:22.954008063Z" level=info msg="Ensure that sandbox 1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad in task-service has been cleanup successfully" Nov 1 01:52:22.956429 kubelet[2677]: I1101 01:52:22.955051 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:22.957476 
containerd[1506]: time="2025-11-01T01:52:22.950839110Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\"" Nov 1 01:52:22.957476 containerd[1506]: time="2025-11-01T01:52:22.957342721Z" level=info msg="Ensure that sandbox 8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab in task-service has been cleanup successfully" Nov 1 01:52:22.962906 containerd[1506]: time="2025-11-01T01:52:22.950870292Z" level=info msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" Nov 1 01:52:22.963360 containerd[1506]: time="2025-11-01T01:52:22.963155284Z" level=info msg="Ensure that sandbox 58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c in task-service has been cleanup successfully" Nov 1 01:52:22.963360 containerd[1506]: time="2025-11-01T01:52:22.951924416Z" level=info msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" Nov 1 01:52:22.964623 containerd[1506]: time="2025-11-01T01:52:22.963489677Z" level=info msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" Nov 1 01:52:22.964623 containerd[1506]: time="2025-11-01T01:52:22.963671742Z" level=info msg="Ensure that sandbox 7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd in task-service has been cleanup successfully" Nov 1 01:52:22.965320 containerd[1506]: time="2025-11-01T01:52:22.964901526Z" level=info msg="Ensure that sandbox c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18 in task-service has been cleanup successfully" Nov 1 01:52:22.966675 containerd[1506]: time="2025-11-01T01:52:22.950784581Z" level=info msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\"" Nov 1 01:52:22.966886 containerd[1506]: time="2025-11-01T01:52:22.966844769Z" level=info msg="Ensure that sandbox 5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289 in task-service has 
been cleanup successfully" Nov 1 01:52:22.977801 kubelet[2677]: I1101 01:52:22.977755 2677 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:22.981466 containerd[1506]: time="2025-11-01T01:52:22.981248408Z" level=info msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" Nov 1 01:52:22.988048 containerd[1506]: time="2025-11-01T01:52:22.986859119Z" level=info msg="Ensure that sandbox 876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed in task-service has been cleanup successfully" Nov 1 01:52:23.088800 containerd[1506]: time="2025-11-01T01:52:23.088720868Z" level=error msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" failed" error="failed to destroy network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.089666 kubelet[2677]: E1101 01:52:23.089404 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:23.107410 kubelet[2677]: E1101 01:52:23.107132 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"} Nov 1 01:52:23.107410 kubelet[2677]: E1101 01:52:23.107300 2677 kuberuntime_manager.go:1146] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"961d53cb-00c8-4e88-869d-034281366b6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.107410 kubelet[2677]: E1101 01:52:23.107342 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"961d53cb-00c8-4e88-869d-034281366b6b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:23.151646 containerd[1506]: time="2025-11-01T01:52:23.151574047Z" level=error msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" failed" error="failed to destroy network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.152706 kubelet[2677]: E1101 01:52:23.152633 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:23.152823 kubelet[2677]: E1101 01:52:23.152727 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"} Nov 1 01:52:23.152823 kubelet[2677]: E1101 01:52:23.152780 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d3e007c-5fa3-444d-bda8-4fe6a895dd94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.153030 kubelet[2677]: E1101 01:52:23.152816 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d3e007c-5fa3-444d-bda8-4fe6a895dd94\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qvbkg" podUID="5d3e007c-5fa3-444d-bda8-4fe6a895dd94" Nov 1 01:52:23.160572 containerd[1506]: time="2025-11-01T01:52:23.159776104Z" level=error msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" failed" error="failed to destroy network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.160720 kubelet[2677]: E1101 01:52:23.160181 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:23.160720 kubelet[2677]: E1101 01:52:23.160243 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd"} Nov 1 01:52:23.160720 kubelet[2677]: E1101 01:52:23.160464 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.160720 kubelet[2677]: E1101 01:52:23.160512 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"692e4d02-4b9e-43c3-8a3c-87f80adc9cda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-2z2fz" 
podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:52:23.164771 containerd[1506]: time="2025-11-01T01:52:23.163986871Z" level=error msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" failed" error="failed to destroy network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.164771 containerd[1506]: time="2025-11-01T01:52:23.164338068Z" level=error msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" failed" error="failed to destroy network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.165180 kubelet[2677]: E1101 01:52:23.164654 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:23.165180 kubelet[2677]: E1101 01:52:23.164703 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"} Nov 1 01:52:23.165180 kubelet[2677]: E1101 01:52:23.164741 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31b727ee-f1ce-4561-a94e-37319924f336\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.165180 kubelet[2677]: E1101 01:52:23.164772 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31b727ee-f1ce-4561-a94e-37319924f336\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c74bc7d7-w87m8" podUID="31b727ee-f1ce-4561-a94e-37319924f336" Nov 1 01:52:23.165770 kubelet[2677]: E1101 01:52:23.164653 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:23.165770 kubelet[2677]: E1101 01:52:23.164811 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed"} Nov 1 01:52:23.165770 kubelet[2677]: E1101 01:52:23.164845 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ec724e45-3797-40ba-a9db-970952094e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed 
to destroy network for sandbox \\\"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.165770 kubelet[2677]: E1101 01:52:23.164874 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ec724e45-3797-40ba-a9db-970952094e39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:23.168593 containerd[1506]: time="2025-11-01T01:52:23.167948412Z" level=error msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" failed" error="failed to destroy network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.168695 kubelet[2677]: E1101 01:52:23.168188 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:23.168695 kubelet[2677]: E1101 
01:52:23.168234 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18"} Nov 1 01:52:23.168695 kubelet[2677]: E1101 01:52:23.168273 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f54af395-651d-45bc-acec-8a87e82ec93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.168695 kubelet[2677]: E1101 01:52:23.168310 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f54af395-651d-45bc-acec-8a87e82ec93b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:52:23.173984 containerd[1506]: time="2025-11-01T01:52:23.173834814Z" level=error msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" failed" error="failed to destroy network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.174523 kubelet[2677]: E1101 01:52:23.174227 2677 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:23.174523 kubelet[2677]: E1101 01:52:23.174291 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c"} Nov 1 01:52:23.174523 kubelet[2677]: E1101 01:52:23.174339 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dea2f1d9-48f1-44e9-bd05-794af9e0edad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.174523 kubelet[2677]: E1101 01:52:23.174381 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dea2f1d9-48f1-44e9-bd05-794af9e0edad\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-zgzhk" podUID="dea2f1d9-48f1-44e9-bd05-794af9e0edad" Nov 1 01:52:23.175431 containerd[1506]: time="2025-11-01T01:52:23.175313131Z" level=error msg="StopPodSandbox for 
\"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" failed" error="failed to destroy network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:23.175585 kubelet[2677]: E1101 01:52:23.175505 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:23.175585 kubelet[2677]: E1101 01:52:23.175551 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"} Nov 1 01:52:23.175757 kubelet[2677]: E1101 01:52:23.175587 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0303fd48-19b2-41d1-991c-312dc81409eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 01:52:23.175757 kubelet[2677]: E1101 01:52:23.175624 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0303fd48-19b2-41d1-991c-312dc81409eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:23.457932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18-shm.mount: Deactivated successfully. Nov 1 01:52:23.458286 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad-shm.mount: Deactivated successfully. Nov 1 01:52:23.458419 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289-shm.mount: Deactivated successfully. Nov 1 01:52:32.477690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291744880.mount: Deactivated successfully. 
Nov 1 01:52:32.584668 containerd[1506]: time="2025-11-01T01:52:32.582607211Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.669436569s" Nov 1 01:52:32.584668 containerd[1506]: time="2025-11-01T01:52:32.582747837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 01:52:32.584668 containerd[1506]: time="2025-11-01T01:52:32.561807219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 01:52:32.630901 containerd[1506]: time="2025-11-01T01:52:32.630813649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:32.705607 containerd[1506]: time="2025-11-01T01:52:32.705479072Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:32.708221 containerd[1506]: time="2025-11-01T01:52:32.707785933Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 01:52:32.749844 containerd[1506]: time="2025-11-01T01:52:32.749203420Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 01:52:32.856190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374200887.mount: Deactivated 
successfully. Nov 1 01:52:32.887684 containerd[1506]: time="2025-11-01T01:52:32.887614290Z" level=info msg="CreateContainer within sandbox \"90849b468dca4af60bd79098917a0866af3e2e4e8dfe09f0f4a1c8a565806abc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410\"" Nov 1 01:52:32.892685 containerd[1506]: time="2025-11-01T01:52:32.892644309Z" level=info msg="StartContainer for \"e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410\"" Nov 1 01:52:33.188396 systemd[1]: Started cri-containerd-e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410.scope - libcontainer container e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410. Nov 1 01:52:33.260243 containerd[1506]: time="2025-11-01T01:52:33.260166952Z" level=info msg="StartContainer for \"e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410\" returns successfully" Nov 1 01:52:33.554038 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 01:52:33.555139 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 1 01:52:33.612912 containerd[1506]: time="2025-11-01T01:52:33.611770819Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\"" Nov 1 01:52:33.726068 containerd[1506]: time="2025-11-01T01:52:33.725819147Z" level=error msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" failed" error="failed to destroy network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 01:52:33.726262 kubelet[2677]: E1101 01:52:33.726139 2677 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:33.727137 kubelet[2677]: E1101 01:52:33.726272 2677 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"} Nov 1 01:52:33.727137 kubelet[2677]: E1101 01:52:33.726349 2677 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"31b727ee-f1ce-4561-a94e-37319924f336\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 
01:52:33.727137 kubelet[2677]: E1101 01:52:33.726404 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31b727ee-f1ce-4561-a94e-37319924f336\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c74bc7d7-w87m8" podUID="31b727ee-f1ce-4561-a94e-37319924f336" Nov 1 01:52:34.150606 containerd[1506]: time="2025-11-01T01:52:34.150285637Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\"" Nov 1 01:52:34.284950 kubelet[2677]: I1101 01:52:34.276854 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-j5fgg" podStartSLOduration=3.401120653 podStartE2EDuration="27.243659359s" podCreationTimestamp="2025-11-01 01:52:07 +0000 UTC" firstStartedPulling="2025-11-01 01:52:08.743181527 +0000 UTC m=+25.363393195" lastFinishedPulling="2025-11-01 01:52:32.585720224 +0000 UTC m=+49.205931901" observedRunningTime="2025-11-01 01:52:34.241878052 +0000 UTC m=+50.862089723" watchObservedRunningTime="2025-11-01 01:52:34.243659359 +0000 UTC m=+50.863871029" Nov 1 01:52:34.316633 systemd[1]: run-containerd-runc-k8s.io-e18d25083b12320ab003ad18bf19342265dec4b11eeef350cbc293fbd5d1e410-runc.f75E3T.mount: Deactivated successfully. 
Nov 1 01:52:34.607167 containerd[1506]: time="2025-11-01T01:52:34.606469031Z" level=info msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" Nov 1 01:52:34.608870 containerd[1506]: time="2025-11-01T01:52:34.607453010Z" level=info msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" Nov 1 01:52:34.610261 containerd[1506]: time="2025-11-01T01:52:34.607505043Z" level=info msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.356 [INFO][3912] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.356 [INFO][3912] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" iface="eth0" netns="/var/run/netns/cni-0d46b3a5-86fd-8343-2fdc-355f3d55c6ca" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.357 [INFO][3912] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" iface="eth0" netns="/var/run/netns/cni-0d46b3a5-86fd-8343-2fdc-355f3d55c6ca" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.359 [INFO][3912] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" iface="eth0" netns="/var/run/netns/cni-0d46b3a5-86fd-8343-2fdc-355f3d55c6ca" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.360 [INFO][3912] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.360 [INFO][3912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.628 [INFO][3934] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.631 [INFO][3934] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.631 [INFO][3934] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.716 [WARNING][3934] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.716 [INFO][3934] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0" Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.727 [INFO][3934] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:34.748176 containerd[1506]: 2025-11-01 01:52:34.732 [INFO][3912] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Nov 1 01:52:34.757531 containerd[1506]: time="2025-11-01T01:52:34.751120181Z" level=info msg="TearDown network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" successfully" Nov 1 01:52:34.757531 containerd[1506]: time="2025-11-01T01:52:34.751169965Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" returns successfully" Nov 1 01:52:34.753724 systemd[1]: run-netns-cni\x2d0d46b3a5\x2d86fd\x2d8343\x2d2fdc\x2d355f3d55c6ca.mount: Deactivated successfully. 
Nov 1 01:52:34.923700 kubelet[2677]: I1101 01:52:34.922756 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b727ee-f1ce-4561-a94e-37319924f336-whisker-ca-bundle\") pod \"31b727ee-f1ce-4561-a94e-37319924f336\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") " Nov 1 01:52:34.923700 kubelet[2677]: I1101 01:52:34.922897 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr22s\" (UniqueName: \"kubernetes.io/projected/31b727ee-f1ce-4561-a94e-37319924f336-kube-api-access-vr22s\") pod \"31b727ee-f1ce-4561-a94e-37319924f336\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") " Nov 1 01:52:34.923700 kubelet[2677]: I1101 01:52:34.922987 2677 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31b727ee-f1ce-4561-a94e-37319924f336-whisker-backend-key-pair\") pod \"31b727ee-f1ce-4561-a94e-37319924f336\" (UID: \"31b727ee-f1ce-4561-a94e-37319924f336\") " Nov 1 01:52:34.967325 kubelet[2677]: I1101 01:52:34.939753 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31b727ee-f1ce-4561-a94e-37319924f336-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "31b727ee-f1ce-4561-a94e-37319924f336" (UID: "31b727ee-f1ce-4561-a94e-37319924f336"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 01:52:34.971127 kubelet[2677]: I1101 01:52:34.970423 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31b727ee-f1ce-4561-a94e-37319924f336-kube-api-access-vr22s" (OuterVolumeSpecName: "kube-api-access-vr22s") pod "31b727ee-f1ce-4561-a94e-37319924f336" (UID: "31b727ee-f1ce-4561-a94e-37319924f336"). InnerVolumeSpecName "kube-api-access-vr22s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 01:52:34.971517 kubelet[2677]: I1101 01:52:34.971444 2677 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31b727ee-f1ce-4561-a94e-37319924f336-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "31b727ee-f1ce-4561-a94e-37319924f336" (UID: "31b727ee-f1ce-4561-a94e-37319924f336"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 01:52:34.973153 systemd[1]: var-lib-kubelet-pods-31b727ee\x2df1ce\x2d4561\x2da94e\x2d37319924f336-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvr22s.mount: Deactivated successfully. Nov 1 01:52:34.980738 systemd[1]: var-lib-kubelet-pods-31b727ee\x2df1ce\x2d4561\x2da94e\x2d37319924f336-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 01:52:35.023777 kubelet[2677]: I1101 01:52:35.023395 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/31b727ee-f1ce-4561-a94e-37319924f336-whisker-backend-key-pair\") on node \"srv-d9muf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:52:35.023777 kubelet[2677]: I1101 01:52:35.023459 2677 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31b727ee-f1ce-4561-a94e-37319924f336-whisker-ca-bundle\") on node \"srv-d9muf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:52:35.023777 kubelet[2677]: I1101 01:52:35.023477 2677 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vr22s\" (UniqueName: \"kubernetes.io/projected/31b727ee-f1ce-4561-a94e-37319924f336-kube-api-access-vr22s\") on node \"srv-d9muf.gb1.brightbox.com\" DevicePath \"\"" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.894 [INFO][3978] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.894 [INFO][3978] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" iface="eth0" netns="/var/run/netns/cni-24ab09e7-b876-ddc9-5b62-f091432fbf5e" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.894 [INFO][3978] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" iface="eth0" netns="/var/run/netns/cni-24ab09e7-b876-ddc9-5b62-f091432fbf5e" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.897 [INFO][3978] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" iface="eth0" netns="/var/run/netns/cni-24ab09e7-b876-ddc9-5b62-f091432fbf5e" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.897 [INFO][3978] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.897 [INFO][3978] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.984 [INFO][4004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.984 [INFO][4004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:34.984 [INFO][4004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:35.007 [WARNING][4004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:35.007 [INFO][4004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:35.012 [INFO][4004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:35.025572 containerd[1506]: 2025-11-01 01:52:35.018 [INFO][3978] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:35.025572 containerd[1506]: time="2025-11-01T01:52:35.025384665Z" level=info msg="TearDown network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" successfully" Nov 1 01:52:35.025572 containerd[1506]: time="2025-11-01T01:52:35.025444618Z" level=info msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" returns successfully" Nov 1 01:52:35.031829 containerd[1506]: time="2025-11-01T01:52:35.030524798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2z2fz,Uid:692e4d02-4b9e-43c3-8a3c-87f80adc9cda,Namespace:calico-system,Attempt:1,}" Nov 1 01:52:35.033831 systemd[1]: run-netns-cni\x2d24ab09e7\x2db876\x2dddc9\x2d5b62\x2df091432fbf5e.mount: Deactivated successfully. Nov 1 01:52:35.166296 systemd[1]: Removed slice kubepods-besteffort-pod31b727ee_f1ce_4561_a94e_37319924f336.slice - libcontainer container kubepods-besteffort-pod31b727ee_f1ce_4561_a94e_37319924f336.slice. Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.994 [INFO][3974] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.996 [INFO][3974] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" iface="eth0" netns="/var/run/netns/cni-7ccc86ee-a5e7-3c4d-7633-20d0b0f4dc74" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.997 [INFO][3974] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" iface="eth0" netns="/var/run/netns/cni-7ccc86ee-a5e7-3c4d-7633-20d0b0f4dc74" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.999 [INFO][3974] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" iface="eth0" netns="/var/run/netns/cni-7ccc86ee-a5e7-3c4d-7633-20d0b0f4dc74" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.999 [INFO][3974] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:34.999 [INFO][3974] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.137 [INFO][4014] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.139 [INFO][4014] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.139 [INFO][4014] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.177 [WARNING][4014] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.177 [INFO][4014] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.185 [INFO][4014] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:35.219039 containerd[1506]: 2025-11-01 01:52:35.186 [INFO][3974] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:35.234401 containerd[1506]: time="2025-11-01T01:52:35.227480590Z" level=info msg="TearDown network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" successfully" Nov 1 01:52:35.234401 containerd[1506]: time="2025-11-01T01:52:35.229255570Z" level=info msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" returns successfully" Nov 1 01:52:35.243391 containerd[1506]: time="2025-11-01T01:52:35.243232093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgzhk,Uid:dea2f1d9-48f1-44e9-bd05-794af9e0edad,Namespace:kube-system,Attempt:1,}" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.049 [INFO][3992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.049 [INFO][3992] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" iface="eth0" netns="/var/run/netns/cni-48247897-4c25-cab3-4e29-032dddc821ec" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.050 [INFO][3992] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" iface="eth0" netns="/var/run/netns/cni-48247897-4c25-cab3-4e29-032dddc821ec" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.051 [INFO][3992] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" iface="eth0" netns="/var/run/netns/cni-48247897-4c25-cab3-4e29-032dddc821ec" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.052 [INFO][3992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.052 [INFO][3992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.176 [INFO][4020] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.177 [INFO][4020] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.186 [INFO][4020] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.270 [WARNING][4020] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.270 [INFO][4020] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.286 [INFO][4020] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:35.307700 containerd[1506]: 2025-11-01 01:52:35.296 [INFO][3992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:35.309695 containerd[1506]: time="2025-11-01T01:52:35.307867270Z" level=info msg="TearDown network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" successfully" Nov 1 01:52:35.309695 containerd[1506]: time="2025-11-01T01:52:35.307902644Z" level=info msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" returns successfully" Nov 1 01:52:35.312715 containerd[1506]: time="2025-11-01T01:52:35.312390804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f685l,Uid:ec724e45-3797-40ba-a9db-970952094e39,Namespace:calico-system,Attempt:1,}" Nov 1 01:52:35.500203 systemd[1]: Created slice kubepods-besteffort-pod9bcbd5c6_4789_405c_8bd4_745ed14fab4a.slice - libcontainer container kubepods-besteffort-pod9bcbd5c6_4789_405c_8bd4_745ed14fab4a.slice. 
Nov 1 01:52:35.611697 containerd[1506]: time="2025-11-01T01:52:35.611288793Z" level=info msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\"" Nov 1 01:52:35.634643 kubelet[2677]: I1101 01:52:35.634573 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9bcbd5c6-4789-405c-8bd4-745ed14fab4a-whisker-backend-key-pair\") pod \"whisker-ffbb49cbc-m9nb9\" (UID: \"9bcbd5c6-4789-405c-8bd4-745ed14fab4a\") " pod="calico-system/whisker-ffbb49cbc-m9nb9" Nov 1 01:52:35.634643 kubelet[2677]: I1101 01:52:35.634653 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjzk6\" (UniqueName: \"kubernetes.io/projected/9bcbd5c6-4789-405c-8bd4-745ed14fab4a-kube-api-access-zjzk6\") pod \"whisker-ffbb49cbc-m9nb9\" (UID: \"9bcbd5c6-4789-405c-8bd4-745ed14fab4a\") " pod="calico-system/whisker-ffbb49cbc-m9nb9" Nov 1 01:52:35.635337 kubelet[2677]: I1101 01:52:35.634701 2677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9bcbd5c6-4789-405c-8bd4-745ed14fab4a-whisker-ca-bundle\") pod \"whisker-ffbb49cbc-m9nb9\" (UID: \"9bcbd5c6-4789-405c-8bd4-745ed14fab4a\") " pod="calico-system/whisker-ffbb49cbc-m9nb9" Nov 1 01:52:35.653291 kubelet[2677]: I1101 01:52:35.653110 2677 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31b727ee-f1ce-4561-a94e-37319924f336" path="/var/lib/kubelet/pods/31b727ee-f1ce-4561-a94e-37319924f336/volumes" Nov 1 01:52:35.760936 systemd[1]: run-netns-cni\x2d7ccc86ee\x2da5e7\x2d3c4d\x2d7633\x2d20d0b0f4dc74.mount: Deactivated successfully. Nov 1 01:52:35.761143 systemd[1]: run-netns-cni\x2d48247897\x2d4c25\x2dcab3\x2d4e29\x2d032dddc821ec.mount: Deactivated successfully. 
Nov 1 01:52:35.830731 containerd[1506]: time="2025-11-01T01:52:35.830460901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ffbb49cbc-m9nb9,Uid:9bcbd5c6-4789-405c-8bd4-745ed14fab4a,Namespace:calico-system,Attempt:0,}" Nov 1 01:52:35.877430 systemd-networkd[1428]: cali5c065fd31a0: Link UP Nov 1 01:52:35.886075 systemd-networkd[1428]: cali5c065fd31a0: Gained carrier Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.276 [INFO][4025] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.340 [INFO][4025] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0 goldmane-666569f655- calico-system 692e4d02-4b9e-43c3-8a3c-87f80adc9cda 925 0 2025-11-01 01:52:05 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com goldmane-666569f655-2z2fz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5c065fd31a0 [] [] }} ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.340 [INFO][4025] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.516 [INFO][4072] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" HandleID="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.517 [INFO][4072] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" HandleID="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122920), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"goldmane-666569f655-2z2fz", "timestamp":"2025-11-01 01:52:35.516375187 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.517 [INFO][4072] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.517 [INFO][4072] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.517 [INFO][4072] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.589 [INFO][4072] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.646 [INFO][4072] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.698 [INFO][4072] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.711 [INFO][4072] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.721 [INFO][4072] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.721 [INFO][4072] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.732 [INFO][4072] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18 Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.744 [INFO][4072] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.795 [INFO][4072] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.129/26] block=192.168.73.128/26 handle="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.795 [INFO][4072] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.129/26] handle="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.799 [INFO][4072] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:35.968041 containerd[1506]: 2025-11-01 01:52:35.799 [INFO][4072] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.129/26] IPv6=[] ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" HandleID="k8s-pod-network.f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 01:52:35.812 [INFO][4025] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"692e4d02-4b9e-43c3-8a3c-87f80adc9cda", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-2z2fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c065fd31a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 01:52:35.813 [INFO][4025] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.129/32] ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 01:52:35.813 [INFO][4025] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c065fd31a0 ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 01:52:35.904 [INFO][4025] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 
01:52:35.905 [INFO][4025] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"692e4d02-4b9e-43c3-8a3c-87f80adc9cda", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18", Pod:"goldmane-666569f655-2z2fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c065fd31a0", MAC:"22:ed:71:c3:c0:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:35.975663 containerd[1506]: 2025-11-01 01:52:35.954 [INFO][4025] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18" Namespace="calico-system" Pod="goldmane-666569f655-2z2fz" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:36.087429 systemd-networkd[1428]: cali156b3ea906a: Link UP Nov 1 01:52:36.095644 systemd-networkd[1428]: cali156b3ea906a: Gained carrier Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.458 [INFO][4049] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.578 [INFO][4049] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0 coredns-668d6bf9bc- kube-system dea2f1d9-48f1-44e9-bd05-794af9e0edad 926 0 2025-11-01 01:51:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com coredns-668d6bf9bc-zgzhk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali156b3ea906a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.579 [INFO][4049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.713 [INFO][4092] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" HandleID="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.713 [INFO][4092] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" HandleID="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125580), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-zgzhk", "timestamp":"2025-11-01 01:52:35.713717026 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.714 [INFO][4092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.800 [INFO][4092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.800 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.824 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.864 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.935 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.949 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.957 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.957 [INFO][4092] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:35.977 [INFO][4092] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18 Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:36.000 [INFO][4092] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:36.025 [INFO][4092] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.130/26] block=192.168.73.128/26 handle="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:36.026 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.130/26] handle="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:36.026 [INFO][4092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:36.150322 containerd[1506]: 2025-11-01 01:52:36.027 [INFO][4092] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.130/26] IPv6=[] ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" HandleID="k8s-pod-network.79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.162641 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dea2f1d9-48f1-44e9-bd05-794af9e0edad", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-zgzhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali156b3ea906a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.162641 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.130/32] ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.162641 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali156b3ea906a ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.162641 containerd[1506]: 
2025-11-01 01:52:36.102 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.162641 containerd[1506]: 2025-11-01 01:52:36.103 [INFO][4049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dea2f1d9-48f1-44e9-bd05-794af9e0edad", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18", Pod:"coredns-668d6bf9bc-zgzhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali156b3ea906a", MAC:"5e:e5:f5:1c:77:8e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.162641 containerd[1506]: 2025-11-01 01:52:36.143 [INFO][4049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18" Namespace="kube-system" Pod="coredns-668d6bf9bc-zgzhk" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:36.286351 systemd-networkd[1428]: cali101b555a642: Link UP Nov 1 01:52:36.289072 systemd-networkd[1428]: cali101b555a642: Gained carrier Nov 1 01:52:36.293657 containerd[1506]: time="2025-11-01T01:52:36.290624711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:36.293657 containerd[1506]: time="2025-11-01T01:52:36.290763073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:36.293657 containerd[1506]: time="2025-11-01T01:52:36.290789111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.293657 containerd[1506]: time="2025-11-01T01:52:36.290970524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:35.505 [INFO][4060] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:35.596 [INFO][4060] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0 csi-node-driver- calico-system ec724e45-3797-40ba-a9db-970952094e39 927 0 2025-11-01 01:52:08 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com csi-node-driver-f685l eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali101b555a642 [] [] }} ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:35.596 [INFO][4060] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:35.717 [INFO][4097] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" HandleID="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.362365 containerd[1506]: 
2025-11-01 01:52:35.719 [INFO][4097] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" HandleID="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001234f0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"csi-node-driver-f685l", "timestamp":"2025-11-01 01:52:35.7178507 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:35.720 [INFO][4097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.034 [INFO][4097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.034 [INFO][4097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.082 [INFO][4097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.129 [INFO][4097] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.172 [INFO][4097] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.189 [INFO][4097] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.206 [INFO][4097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.206 [INFO][4097] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.223 [INFO][4097] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414 Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.236 [INFO][4097] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.254 [INFO][4097] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.131/26] block=192.168.73.128/26 handle="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.255 [INFO][4097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.131/26] handle="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.256 [INFO][4097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:36.362365 containerd[1506]: 2025-11-01 01:52:36.256 [INFO][4097] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.131/26] IPv6=[] ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" HandleID="k8s-pod-network.47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.365217 containerd[1506]: 2025-11-01 01:52:36.264 [INFO][4060] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec724e45-3797-40ba-a9db-970952094e39", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-f685l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali101b555a642", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.365217 containerd[1506]: 2025-11-01 01:52:36.267 [INFO][4060] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.131/32] ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.365217 containerd[1506]: 2025-11-01 01:52:36.268 [INFO][4060] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali101b555a642 ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.365217 containerd[1506]: 2025-11-01 01:52:36.304 [INFO][4060] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 
1 01:52:36.365217 containerd[1506]: 2025-11-01 01:52:36.311 [INFO][4060] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec724e45-3797-40ba-a9db-970952094e39", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414", Pod:"csi-node-driver-f685l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali101b555a642", MAC:"06:25:37:2c:33:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.365217 
containerd[1506]: 2025-11-01 01:52:36.350 [INFO][4060] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414" Namespace="calico-system" Pod="csi-node-driver-f685l" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:36.374065 containerd[1506]: time="2025-11-01T01:52:36.371419124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:36.374065 containerd[1506]: time="2025-11-01T01:52:36.371506307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:36.374065 containerd[1506]: time="2025-11-01T01:52:36.371560345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.374065 containerd[1506]: time="2025-11-01T01:52:36.371807819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.048 [INFO][4112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.054 [INFO][4112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" iface="eth0" netns="/var/run/netns/cni-31760283-a5b8-bb71-08c8-a9ca2ae97a29" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.055 [INFO][4112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" iface="eth0" netns="/var/run/netns/cni-31760283-a5b8-bb71-08c8-a9ca2ae97a29" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" iface="eth0" netns="/var/run/netns/cni-31760283-a5b8-bb71-08c8-a9ca2ae97a29" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.059 [INFO][4112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.328 [INFO][4192] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.328 [INFO][4192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.328 [INFO][4192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.358 [WARNING][4192] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.359 [INFO][4192] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.364 [INFO][4192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:36.378489 containerd[1506]: 2025-11-01 01:52:36.375 [INFO][4112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Nov 1 01:52:36.381205 containerd[1506]: time="2025-11-01T01:52:36.378758731Z" level=info msg="TearDown network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" successfully" Nov 1 01:52:36.381205 containerd[1506]: time="2025-11-01T01:52:36.378818434Z" level=info msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" returns successfully" Nov 1 01:52:36.381205 containerd[1506]: time="2025-11-01T01:52:36.380553507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvbkg,Uid:5d3e007c-5fa3-444d-bda8-4fe6a895dd94,Namespace:kube-system,Attempt:1,}" Nov 1 01:52:36.428069 systemd[1]: Started cri-containerd-f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18.scope - libcontainer container f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18. 
Nov 1 01:52:36.501248 systemd[1]: Started cri-containerd-79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18.scope - libcontainer container 79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18. Nov 1 01:52:36.578960 containerd[1506]: time="2025-11-01T01:52:36.576128407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:36.578960 containerd[1506]: time="2025-11-01T01:52:36.576239149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:36.578960 containerd[1506]: time="2025-11-01T01:52:36.576312723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.578960 containerd[1506]: time="2025-11-01T01:52:36.576436775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:36.702377 systemd[1]: Started cri-containerd-47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414.scope - libcontainer container 47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414. Nov 1 01:52:36.777310 systemd[1]: run-netns-cni\x2d31760283\x2da5b8\x2dbb71\x2d08c8\x2da9ca2ae97a29.mount: Deactivated successfully. 
Nov 1 01:52:36.802427 systemd-networkd[1428]: calice68c0e2e28: Link UP Nov 1 01:52:36.808671 systemd-networkd[1428]: calice68c0e2e28: Gained carrier Nov 1 01:52:36.814413 containerd[1506]: time="2025-11-01T01:52:36.813343643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zgzhk,Uid:dea2f1d9-48f1-44e9-bd05-794af9e0edad,Namespace:kube-system,Attempt:1,} returns sandbox id \"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18\"" Nov 1 01:52:36.839606 containerd[1506]: time="2025-11-01T01:52:36.839103075Z" level=info msg="CreateContainer within sandbox \"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.257 [INFO][4155] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.318 [INFO][4155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0 whisker-ffbb49cbc- calico-system 9bcbd5c6-4789-405c-8bd4-745ed14fab4a 941 0 2025-11-01 01:52:35 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:ffbb49cbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com whisker-ffbb49cbc-m9nb9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calice68c0e2e28 [] [] }} ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.318 [INFO][4155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.528 [INFO][4270] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" HandleID="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.528 [INFO][4270] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" HandleID="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000269600), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"whisker-ffbb49cbc-m9nb9", "timestamp":"2025-11-01 01:52:36.528328827 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.528 [INFO][4270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.528 [INFO][4270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.528 [INFO][4270] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.548 [INFO][4270] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.573 [INFO][4270] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.588 [INFO][4270] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.610 [INFO][4270] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.624 [INFO][4270] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.626 [INFO][4270] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.639 [INFO][4270] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005 Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.666 [INFO][4270] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.688 [INFO][4270] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.132/26] block=192.168.73.128/26 handle="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.688 [INFO][4270] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.132/26] handle="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.689 [INFO][4270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:36.869954 containerd[1506]: 2025-11-01 01:52:36.689 [INFO][4270] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.132/26] IPv6=[] ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" HandleID="k8s-pod-network.2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.716 [INFO][4155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0", GenerateName:"whisker-ffbb49cbc-", Namespace:"calico-system", SelfLink:"", UID:"9bcbd5c6-4789-405c-8bd4-745ed14fab4a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ffbb49cbc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"whisker-ffbb49cbc-m9nb9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calice68c0e2e28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.720 [INFO][4155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.132/32] ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.722 [INFO][4155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice68c0e2e28 ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.808 [INFO][4155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.820 [INFO][4155] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0", GenerateName:"whisker-ffbb49cbc-", Namespace:"calico-system", SelfLink:"", UID:"9bcbd5c6-4789-405c-8bd4-745ed14fab4a", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"ffbb49cbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005", Pod:"whisker-ffbb49cbc-m9nb9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calice68c0e2e28", MAC:"6e:c4:85:8f:af:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:36.873668 containerd[1506]: 2025-11-01 01:52:36.855 [INFO][4155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005" 
Namespace="calico-system" Pod="whisker-ffbb49cbc-m9nb9" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--ffbb49cbc--m9nb9-eth0" Nov 1 01:52:36.923910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount512228598.mount: Deactivated successfully. Nov 1 01:52:36.942118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount352334852.mount: Deactivated successfully. Nov 1 01:52:36.965772 containerd[1506]: time="2025-11-01T01:52:36.965263089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-2z2fz,Uid:692e4d02-4b9e-43c3-8a3c-87f80adc9cda,Namespace:calico-system,Attempt:1,} returns sandbox id \"f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18\"" Nov 1 01:52:36.975316 containerd[1506]: time="2025-11-01T01:52:36.973688883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:52:36.986059 containerd[1506]: time="2025-11-01T01:52:36.985629278Z" level=info msg="CreateContainer within sandbox \"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23cd64c61c15caa6ad487d6455f8ee1f2b24bed58e1b1757def6abafc9ed32b6\"" Nov 1 01:52:36.993640 containerd[1506]: time="2025-11-01T01:52:36.993428549Z" level=info msg="StartContainer for \"23cd64c61c15caa6ad487d6455f8ee1f2b24bed58e1b1757def6abafc9ed32b6\"" Nov 1 01:52:37.033102 containerd[1506]: time="2025-11-01T01:52:37.031865271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f685l,Uid:ec724e45-3797-40ba-a9db-970952094e39,Namespace:calico-system,Attempt:1,} returns sandbox id \"47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414\"" Nov 1 01:52:37.118143 containerd[1506]: time="2025-11-01T01:52:37.117502670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:37.118143 containerd[1506]: time="2025-11-01T01:52:37.117642852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:37.118143 containerd[1506]: time="2025-11-01T01:52:37.117674664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:37.118143 containerd[1506]: time="2025-11-01T01:52:37.117886013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:37.159295 systemd-networkd[1428]: cali156b3ea906a: Gained IPv6LL Nov 1 01:52:37.188454 systemd[1]: Started cri-containerd-23cd64c61c15caa6ad487d6455f8ee1f2b24bed58e1b1757def6abafc9ed32b6.scope - libcontainer container 23cd64c61c15caa6ad487d6455f8ee1f2b24bed58e1b1757def6abafc9ed32b6. Nov 1 01:52:37.240220 systemd[1]: Started cri-containerd-2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005.scope - libcontainer container 2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005. 
Nov 1 01:52:37.339530 containerd[1506]: time="2025-11-01T01:52:37.337788858Z" level=info msg="StartContainer for \"23cd64c61c15caa6ad487d6455f8ee1f2b24bed58e1b1757def6abafc9ed32b6\" returns successfully" Nov 1 01:52:37.404443 systemd-networkd[1428]: cali3c2a971497a: Link UP Nov 1 01:52:37.410641 systemd-networkd[1428]: cali3c2a971497a: Gained carrier Nov 1 01:52:37.440789 containerd[1506]: time="2025-11-01T01:52:37.437354127Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:36.731 [INFO][4303] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:36.853 [INFO][4303] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0 coredns-668d6bf9bc- kube-system 5d3e007c-5fa3-444d-bda8-4fe6a895dd94 949 0 2025-11-01 01:51:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com coredns-668d6bf9bc-qvbkg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3c2a971497a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:36.853 [INFO][4303] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.475644 containerd[1506]: 
2025-11-01 01:52:37.259 [INFO][4379] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" HandleID="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.262 [INFO][4379] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" HandleID="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002714d0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-qvbkg", "timestamp":"2025-11-01 01:52:37.259207437 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.262 [INFO][4379] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.262 [INFO][4379] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.262 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.287 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.308 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.341 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.349 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.355 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.356 [INFO][4379] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.361 [INFO][4379] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.371 [INFO][4379] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.384 [INFO][4379] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.133/26] block=192.168.73.128/26 handle="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.384 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.133/26] handle="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.384 [INFO][4379] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:37.475644 containerd[1506]: 2025-11-01 01:52:37.385 [INFO][4379] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.133/26] IPv6=[] ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" HandleID="k8s-pod-network.9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.478530 containerd[1506]: 2025-11-01 01:52:37.392 [INFO][4303] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d3e007c-5fa3-444d-bda8-4fe6a895dd94", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-qvbkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c2a971497a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:37.478530 containerd[1506]: 2025-11-01 01:52:37.393 [INFO][4303] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.133/32] ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.478530 containerd[1506]: 2025-11-01 01:52:37.393 [INFO][4303] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c2a971497a ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.478530 containerd[1506]: 
2025-11-01 01:52:37.407 [INFO][4303] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.478530 containerd[1506]: 2025-11-01 01:52:37.415 [INFO][4303] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d3e007c-5fa3-444d-bda8-4fe6a895dd94", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b", Pod:"coredns-668d6bf9bc-qvbkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali3c2a971497a", MAC:"26:11:62:a3:87:b4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:37.478530 containerd[1506]: 2025-11-01 01:52:37.465 [INFO][4303] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b" Namespace="kube-system" Pod="coredns-668d6bf9bc-qvbkg" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0" Nov 1 01:52:37.482397 containerd[1506]: time="2025-11-01T01:52:37.438832316Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:52:37.483656 containerd[1506]: time="2025-11-01T01:52:37.438883473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:37.495692 kubelet[2677]: E1101 01:52:37.485906 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:52:37.496426 kubelet[2677]: E1101 01:52:37.495678 
2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:52:37.499787 containerd[1506]: time="2025-11-01T01:52:37.499507611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:52:37.545482 kubelet[2677]: E1101 01:52:37.545219 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsz8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropa
gation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:37.547375 kubelet[2677]: E1101 01:52:37.546734 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:52:37.580002 containerd[1506]: time="2025-11-01T01:52:37.578512784Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:37.580002 containerd[1506]: time="2025-11-01T01:52:37.578654412Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:37.580002 containerd[1506]: time="2025-11-01T01:52:37.578704264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:37.580002 containerd[1506]: time="2025-11-01T01:52:37.578904046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:37.608288 systemd-networkd[1428]: cali101b555a642: Gained IPv6LL Nov 1 01:52:37.612591 containerd[1506]: time="2025-11-01T01:52:37.612272930Z" level=info msg="StopPodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\"" Nov 1 01:52:37.620612 containerd[1506]: time="2025-11-01T01:52:37.620549771Z" level=info msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\"" Nov 1 01:52:37.667258 systemd[1]: Started cri-containerd-9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b.scope - libcontainer container 9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b. 
Nov 1 01:52:37.671301 systemd-networkd[1428]: cali5c065fd31a0: Gained IPv6LL Nov 1 01:52:37.780850 containerd[1506]: time="2025-11-01T01:52:37.778500045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-ffbb49cbc-m9nb9,Uid:9bcbd5c6-4789-405c-8bd4-745ed14fab4a,Namespace:calico-system,Attempt:0,} returns sandbox id \"2e37b5795e924d9a477a1067bde604b79168c51834e5a7df0477c6f63d5a8005\"" Nov 1 01:52:37.903062 containerd[1506]: time="2025-11-01T01:52:37.902071708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:37.904426 containerd[1506]: time="2025-11-01T01:52:37.903885257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:52:37.904426 containerd[1506]: time="2025-11-01T01:52:37.904237819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:52:37.904704 kubelet[2677]: E1101 01:52:37.904633 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:52:37.904788 kubelet[2677]: E1101 01:52:37.904717 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:52:37.905239 kubelet[2677]: E1101 01:52:37.905161 2677 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:37.907588 containerd[1506]: time="2025-11-01T01:52:37.907330339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:52:37.938997 containerd[1506]: time="2025-11-01T01:52:37.938814633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qvbkg,Uid:5d3e007c-5fa3-444d-bda8-4fe6a895dd94,Namespace:kube-system,Attempt:1,} returns sandbox id \"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b\"" Nov 1 01:52:37.965264 containerd[1506]: time="2025-11-01T01:52:37.965207654Z" level=info msg="CreateContainer within sandbox \"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 01:52:38.055249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1065771788.mount: Deactivated successfully. Nov 1 01:52:38.067048 containerd[1506]: time="2025-11-01T01:52:38.066542750Z" level=info msg="CreateContainer within sandbox \"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c40a20306b1b9b5060c58a75d062cac2e54842ddd6b0b48b66174c5be0b549e6\"" Nov 1 01:52:38.076046 containerd[1506]: time="2025-11-01T01:52:38.074481588Z" level=info msg="StartContainer for \"c40a20306b1b9b5060c58a75d062cac2e54842ddd6b0b48b66174c5be0b549e6\"" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.897 [INFO][4538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.899 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" iface="eth0" netns="/var/run/netns/cni-d870b6e7-ec2d-9412-d3d7-a48c273ad0f3" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.900 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" iface="eth0" netns="/var/run/netns/cni-d870b6e7-ec2d-9412-d3d7-a48c273ad0f3" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.900 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" iface="eth0" netns="/var/run/netns/cni-d870b6e7-ec2d-9412-d3d7-a48c273ad0f3" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.901 [INFO][4538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:37.901 [INFO][4538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.035 [INFO][4558] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.035 [INFO][4558] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.036 [INFO][4558] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.062 [WARNING][4558] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.062 [INFO][4558] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.069 [INFO][4558] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:38.090240 containerd[1506]: 2025-11-01 01:52:38.080 [INFO][4538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Nov 1 01:52:38.094388 containerd[1506]: time="2025-11-01T01:52:38.093433497Z" level=info msg="TearDown network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" successfully" Nov 1 01:52:38.094388 containerd[1506]: time="2025-11-01T01:52:38.093773294Z" level=info msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" returns successfully" Nov 1 01:52:38.099840 containerd[1506]: time="2025-11-01T01:52:38.096856132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-mzf2r,Uid:961d53cb-00c8-4e88-869d-034281366b6b,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:52:38.102727 systemd[1]: run-netns-cni\x2dd870b6e7\x2dec2d\x2d9412\x2dd3d7\x2da48c273ad0f3.mount: Deactivated successfully. 
Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.921 [INFO][4532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.922 [INFO][4532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" iface="eth0" netns="/var/run/netns/cni-4f548dc8-5c27-6e16-4b81-aca55e1210f8" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.924 [INFO][4532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" iface="eth0" netns="/var/run/netns/cni-4f548dc8-5c27-6e16-4b81-aca55e1210f8" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.925 [INFO][4532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" iface="eth0" netns="/var/run/netns/cni-4f548dc8-5c27-6e16-4b81-aca55e1210f8" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.925 [INFO][4532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:37.925 [INFO][4532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.116 [INFO][4568] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.124 
[INFO][4568] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.128 [INFO][4568] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.151 [WARNING][4568] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.151 [INFO][4568] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.158 [INFO][4568] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:38.169175 containerd[1506]: 2025-11-01 01:52:38.163 [INFO][4532] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Nov 1 01:52:38.170558 containerd[1506]: time="2025-11-01T01:52:38.170143858Z" level=info msg="TearDown network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" successfully" Nov 1 01:52:38.170558 containerd[1506]: time="2025-11-01T01:52:38.170201867Z" level=info msg="StopPodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" returns successfully" Nov 1 01:52:38.172945 containerd[1506]: time="2025-11-01T01:52:38.172500877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfccfdf99-qcr45,Uid:0303fd48-19b2-41d1-991c-312dc81409eb,Namespace:calico-system,Attempt:1,}" Nov 1 01:52:38.184579 systemd-networkd[1428]: calice68c0e2e28: Gained IPv6LL Nov 1 01:52:38.208346 systemd[1]: Started cri-containerd-c40a20306b1b9b5060c58a75d062cac2e54842ddd6b0b48b66174c5be0b549e6.scope - libcontainer container c40a20306b1b9b5060c58a75d062cac2e54842ddd6b0b48b66174c5be0b549e6. 
Nov 1 01:52:38.274193 containerd[1506]: time="2025-11-01T01:52:38.274137655Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:38.282652 containerd[1506]: time="2025-11-01T01:52:38.282506562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:52:38.288052 kubelet[2677]: E1101 01:52:38.284369 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:52:38.288052 kubelet[2677]: E1101 01:52:38.284433 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:52:38.288052 kubelet[2677]: E1101 01:52:38.284663 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c8f59ef31024bbcaacb44f54e1035cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:38.288435 containerd[1506]: time="2025-11-01T01:52:38.284862094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes 
read=73" Nov 1 01:52:38.294055 containerd[1506]: time="2025-11-01T01:52:38.293356287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:52:38.341691 kubelet[2677]: E1101 01:52:38.338908 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:52:38.346039 containerd[1506]: time="2025-11-01T01:52:38.345422802Z" level=info msg="StartContainer for \"c40a20306b1b9b5060c58a75d062cac2e54842ddd6b0b48b66174c5be0b549e6\" returns successfully" Nov 1 01:52:38.558660 kubelet[2677]: I1101 01:52:38.557512 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zgzhk" podStartSLOduration=49.55746558 podStartE2EDuration="49.55746558s" podCreationTimestamp="2025-11-01 01:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:52:38.460337693 +0000 UTC m=+55.080549377" watchObservedRunningTime="2025-11-01 01:52:38.55746558 +0000 UTC m=+55.177677267" Nov 1 01:52:38.568055 systemd-networkd[1428]: cali3c2a971497a: Gained IPv6LL Nov 1 01:52:38.610053 containerd[1506]: time="2025-11-01T01:52:38.608190984Z" level=info msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" Nov 1 01:52:38.649337 containerd[1506]: time="2025-11-01T01:52:38.649275661Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:38.651035 
containerd[1506]: time="2025-11-01T01:52:38.650975486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:52:38.651282 containerd[1506]: time="2025-11-01T01:52:38.651204046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:52:38.652220 kubelet[2677]: E1101 01:52:38.652098 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:52:38.652559 kubelet[2677]: E1101 01:52:38.652515 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:52:38.652929 kubelet[2677]: E1101 01:52:38.652854 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:38.654806 kubelet[2677]: E1101 01:52:38.654733 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:38.655615 containerd[1506]: time="2025-11-01T01:52:38.655548727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:52:38.772671 systemd[1]: run-netns-cni\x2d4f548dc8\x2d5c27\x2d6e16\x2d4b81\x2daca55e1210f8.mount: Deactivated successfully. 
Nov 1 01:52:38.818907 systemd-networkd[1428]: cali6f7e8456f30: Link UP Nov 1 01:52:38.820911 systemd-networkd[1428]: cali6f7e8456f30: Gained carrier Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.344 [INFO][4584] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.450 [INFO][4584] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0 calico-apiserver-54865fd995- calico-apiserver 961d53cb-00c8-4e88-869d-034281366b6b 973 0 2025-11-01 01:52:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54865fd995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com calico-apiserver-54865fd995-mzf2r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6f7e8456f30 [] [] }} ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.450 [INFO][4584] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.632 [INFO][4639] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" 
HandleID="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.632 [INFO][4639] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" HandleID="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000335270), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-d9muf.gb1.brightbox.com", "pod":"calico-apiserver-54865fd995-mzf2r", "timestamp":"2025-11-01 01:52:38.632763391 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.633 [INFO][4639] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.633 [INFO][4639] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.633 [INFO][4639] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.667 [INFO][4639] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.681 [INFO][4639] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.696 [INFO][4639] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.711 [INFO][4639] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.718 [INFO][4639] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.718 [INFO][4639] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.723 [INFO][4639] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.759 [INFO][4639] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.788 [INFO][4639] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.134/26] block=192.168.73.128/26 handle="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.789 [INFO][4639] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.134/26] handle="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.789 [INFO][4639] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:38.892561 containerd[1506]: 2025-11-01 01:52:38.789 [INFO][4639] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.134/26] IPv6=[] ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" HandleID="k8s-pod-network.07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.807 [INFO][4584] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"961d53cb-00c8-4e88-869d-034281366b6b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-54865fd995-mzf2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e8456f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.807 [INFO][4584] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.134/32] ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.807 [INFO][4584] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f7e8456f30 ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.823 [INFO][4584] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" 
Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.832 [INFO][4584] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"961d53cb-00c8-4e88-869d-034281366b6b", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c", Pod:"calico-apiserver-54865fd995-mzf2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali6f7e8456f30", MAC:"52:4c:87:d2:b5:e7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:38.896545 containerd[1506]: 2025-11-01 01:52:38.872 [INFO][4584] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-mzf2r" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0" Nov 1 01:52:38.967783 containerd[1506]: time="2025-11-01T01:52:38.966869990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:38.967783 containerd[1506]: time="2025-11-01T01:52:38.967372814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:38.967783 containerd[1506]: time="2025-11-01T01:52:38.967572187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:38.974253 containerd[1506]: time="2025-11-01T01:52:38.972963311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:38.984702 systemd-networkd[1428]: caliefc640d8452: Link UP Nov 1 01:52:38.987085 systemd-networkd[1428]: caliefc640d8452: Gained carrier Nov 1 01:52:38.989378 containerd[1506]: time="2025-11-01T01:52:38.988801985Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:39.002768 containerd[1506]: time="2025-11-01T01:52:39.002531370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:52:39.002768 containerd[1506]: time="2025-11-01T01:52:39.002531619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:52:39.005108 kubelet[2677]: E1101 01:52:39.003164 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:52:39.005108 kubelet[2677]: E1101 01:52:39.003235 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:52:39.005108 kubelet[2677]: E1101 01:52:39.003413 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:39.005108 kubelet[2677]: E1101 01:52:39.004907 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.408 [INFO][4613] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.480 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0 calico-kube-controllers-7dfccfdf99- calico-system 0303fd48-19b2-41d1-991c-312dc81409eb 975 0 2025-11-01 01:52:08 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7dfccfdf99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com calico-kube-controllers-7dfccfdf99-qcr45 eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] caliefc640d8452 [] [] }} ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.480 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.678 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" HandleID="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.679 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" HandleID="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-d9muf.gb1.brightbox.com", "pod":"calico-kube-controllers-7dfccfdf99-qcr45", "timestamp":"2025-11-01 01:52:38.678403018 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.679 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.795 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.795 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.857 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.875 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.903 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.907 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.917 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.917 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.922 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874 Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.937 
[INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.959 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.135/26] block=192.168.73.128/26 handle="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.959 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.135/26] handle="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.959 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:39.067235 containerd[1506]: 2025-11-01 01:52:38.960 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.135/26] IPv6=[] ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" HandleID="k8s-pod-network.809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:38.971 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0", GenerateName:"calico-kube-controllers-7dfccfdf99-", 
Namespace:"calico-system", SelfLink:"", UID:"0303fd48-19b2-41d1-991c-312dc81409eb", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfccfdf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-7dfccfdf99-qcr45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc640d8452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:38.972 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.135/32] ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:38.972 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliefc640d8452 ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" 
WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:38.993 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:38.995 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0", GenerateName:"calico-kube-controllers-7dfccfdf99-", Namespace:"calico-system", SelfLink:"", UID:"0303fd48-19b2-41d1-991c-312dc81409eb", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfccfdf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", 
ContainerID:"809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874", Pod:"calico-kube-controllers-7dfccfdf99-qcr45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc640d8452", MAC:"06:a5:f7:4d:95:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:39.076653 containerd[1506]: 2025-11-01 01:52:39.037 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874" Namespace="calico-system" Pod="calico-kube-controllers-7dfccfdf99-qcr45" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0" Nov 1 01:52:39.102757 systemd[1]: Started cri-containerd-07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c.scope - libcontainer container 07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c. Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.839 [INFO][4662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.842 [INFO][4662] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" iface="eth0" netns="/var/run/netns/cni-1e2897e1-745c-cc9a-c30d-ba496595c5ca" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.844 [INFO][4662] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" iface="eth0" netns="/var/run/netns/cni-1e2897e1-745c-cc9a-c30d-ba496595c5ca" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.844 [INFO][4662] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" iface="eth0" netns="/var/run/netns/cni-1e2897e1-745c-cc9a-c30d-ba496595c5ca" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.845 [INFO][4662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:38.845 [INFO][4662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.006 [INFO][4675] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.006 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.006 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.043 [WARNING][4675] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.043 [INFO][4675] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.079 [INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:39.115056 containerd[1506]: 2025-11-01 01:52:39.096 [INFO][4662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:39.115056 containerd[1506]: time="2025-11-01T01:52:39.112668729Z" level=info msg="TearDown network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" successfully" Nov 1 01:52:39.115056 containerd[1506]: time="2025-11-01T01:52:39.113113166Z" level=info msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" returns successfully" Nov 1 01:52:39.122605 systemd[1]: run-netns-cni\x2d1e2897e1\x2d745c\x2dcc9a\x2dc30d\x2dba496595c5ca.mount: Deactivated successfully. Nov 1 01:52:39.128043 containerd[1506]: time="2025-11-01T01:52:39.125151779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-lqdk2,Uid:f54af395-651d-45bc-acec-8a87e82ec93b,Namespace:calico-apiserver,Attempt:1,}" Nov 1 01:52:39.236044 containerd[1506]: time="2025-11-01T01:52:39.230727557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:39.236044 containerd[1506]: time="2025-11-01T01:52:39.230853583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:39.236044 containerd[1506]: time="2025-11-01T01:52:39.230897977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:39.236044 containerd[1506]: time="2025-11-01T01:52:39.231440805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:39.284330 systemd[1]: Started cri-containerd-809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874.scope - libcontainer container 809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874. Nov 1 01:52:39.339903 kubelet[2677]: E1101 01:52:39.338590 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:39.348666 
kubelet[2677]: E1101 01:52:39.348606 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a" Nov 1 01:52:39.402632 kubelet[2677]: I1101 01:52:39.402538 2677 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qvbkg" podStartSLOduration=50.402499335 podStartE2EDuration="50.402499335s" podCreationTimestamp="2025-11-01 01:51:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 01:52:39.365319863 +0000 UTC m=+55.985531532" watchObservedRunningTime="2025-11-01 01:52:39.402499335 +0000 UTC m=+56.022711020" Nov 1 01:52:39.478564 systemd-networkd[1428]: calib540875e445: Link UP Nov 1 01:52:39.481473 systemd-networkd[1428]: calib540875e445: Gained carrier Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.254 [INFO][4737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0 
calico-apiserver-54865fd995- calico-apiserver f54af395-651d-45bc-acec-8a87e82ec93b 1006 0 2025-11-01 01:52:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54865fd995 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-d9muf.gb1.brightbox.com calico-apiserver-54865fd995-lqdk2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib540875e445 [] [] }} ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.254 [INFO][4737] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.342 [INFO][4773] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" HandleID="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.343 [INFO][4773] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" HandleID="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, 
Num6:0, HandleID:(*string)(0xc00004f970), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-d9muf.gb1.brightbox.com", "pod":"calico-apiserver-54865fd995-lqdk2", "timestamp":"2025-11-01 01:52:39.342871204 +0000 UTC"}, Hostname:"srv-d9muf.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.343 [INFO][4773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.343 [INFO][4773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.343 [INFO][4773] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-d9muf.gb1.brightbox.com' Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.363 [INFO][4773] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.385 [INFO][4773] ipam/ipam.go 394: Looking up existing affinities for host host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.399 [INFO][4773] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.410 [INFO][4773] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.414 [INFO][4773] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.414 [INFO][4773] ipam/ipam.go 1219: Attempting 
to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.419 [INFO][4773] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9 Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.430 [INFO][4773] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.452 [INFO][4773] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.73.136/26] block=192.168.73.128/26 handle="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.452 [INFO][4773] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.136/26] handle="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" host="srv-d9muf.gb1.brightbox.com" Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.453 [INFO][4773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 01:52:39.516749 containerd[1506]: 2025-11-01 01:52:39.453 [INFO][4773] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.73.136/26] IPv6=[] ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" HandleID="k8s-pod-network.928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.457 [INFO][4737] cni-plugin/k8s.go 418: Populated endpoint ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54af395-651d-45bc-acec-8a87e82ec93b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-54865fd995-lqdk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib540875e445", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.459 [INFO][4737] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.136/32] ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.460 [INFO][4737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib540875e445 ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.479 [INFO][4737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.480 [INFO][4737] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54af395-651d-45bc-acec-8a87e82ec93b", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9", Pod:"calico-apiserver-54865fd995-lqdk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib540875e445", MAC:"82:4e:9a:ba:2b:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:39.521442 containerd[1506]: 2025-11-01 01:52:39.509 [INFO][4737] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9" Namespace="calico-apiserver" Pod="calico-apiserver-54865fd995-lqdk2" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:39.560811 containerd[1506]: time="2025-11-01T01:52:39.560659310Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 01:52:39.564500 containerd[1506]: time="2025-11-01T01:52:39.564114452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 01:52:39.564500 containerd[1506]: time="2025-11-01T01:52:39.564167015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:39.564500 containerd[1506]: time="2025-11-01T01:52:39.564370800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 01:52:39.651095 kernel: bpftool[4842]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 01:52:39.656266 systemd[1]: Started cri-containerd-928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9.scope - libcontainer container 928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9. 
Nov 1 01:52:39.689618 containerd[1506]: time="2025-11-01T01:52:39.688861068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7dfccfdf99-qcr45,Uid:0303fd48-19b2-41d1-991c-312dc81409eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874\"" Nov 1 01:52:39.695223 containerd[1506]: time="2025-11-01T01:52:39.695172864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:52:39.705705 containerd[1506]: time="2025-11-01T01:52:39.705601443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-mzf2r,Uid:961d53cb-00c8-4e88-869d-034281366b6b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c\"" Nov 1 01:52:39.815797 containerd[1506]: time="2025-11-01T01:52:39.815600436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54865fd995-lqdk2,Uid:f54af395-651d-45bc-acec-8a87e82ec93b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9\"" Nov 1 01:52:39.847386 systemd-networkd[1428]: cali6f7e8456f30: Gained IPv6LL Nov 1 01:52:39.998043 containerd[1506]: time="2025-11-01T01:52:39.997774511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:40.000479 containerd[1506]: time="2025-11-01T01:52:39.999189259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:52:40.000479 containerd[1506]: time="2025-11-01T01:52:39.999304239Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:52:40.000623 kubelet[2677]: E1101 01:52:39.999658 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:52:40.000623 kubelet[2677]: E1101 01:52:39.999743 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:52:40.000623 kubelet[2677]: E1101 01:52:40.000116 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxth2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:40.002275 containerd[1506]: time="2025-11-01T01:52:40.001654871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:52:40.002454 kubelet[2677]: E1101 01:52:40.001843 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:40.126047 
systemd-networkd[1428]: vxlan.calico: Link UP Nov 1 01:52:40.126062 systemd-networkd[1428]: vxlan.calico: Gained carrier Nov 1 01:52:40.315539 containerd[1506]: time="2025-11-01T01:52:40.315405342Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:40.317708 containerd[1506]: time="2025-11-01T01:52:40.317060701Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:40.317708 containerd[1506]: time="2025-11-01T01:52:40.317087059Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:52:40.318516 kubelet[2677]: E1101 01:52:40.317994 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:40.318516 kubelet[2677]: E1101 01:52:40.318105 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:40.318841 kubelet[2677]: E1101 01:52:40.318499 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d47l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:40.320464 containerd[1506]: time="2025-11-01T01:52:40.320081637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:52:40.320746 kubelet[2677]: E1101 01:52:40.320697 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:40.341059 kubelet[2677]: E1101 01:52:40.339933 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:40.347767 kubelet[2677]: E1101 01:52:40.347134 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:40.654997 containerd[1506]: time="2025-11-01T01:52:40.654527817Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:40.656366 containerd[1506]: time="2025-11-01T01:52:40.656227455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:52:40.656366 containerd[1506]: time="2025-11-01T01:52:40.656293531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:40.657110 kubelet[2677]: E1101 01:52:40.656703 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:40.657110 kubelet[2677]: E1101 01:52:40.656780 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:40.658064 kubelet[2677]: E1101 01:52:40.656977 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:40.659437 kubelet[2677]: E1101 01:52:40.659395 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:52:40.935336 systemd-networkd[1428]: caliefc640d8452: Gained IPv6LL Nov 1 01:52:41.127302 systemd-networkd[1428]: calib540875e445: Gained IPv6LL Nov 1 01:52:41.365977 kubelet[2677]: E1101 01:52:41.365837 
2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:41.367059 kubelet[2677]: E1101 01:52:41.366996 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:52:41.367199 kubelet[2677]: E1101 01:52:41.367124 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:41.831300 systemd-networkd[1428]: vxlan.calico: Gained IPv6LL Nov 1 01:52:43.609209 
containerd[1506]: time="2025-11-01T01:52:43.608209332Z" level=info msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.789 [WARNING][4973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dea2f1d9-48f1-44e9-bd05-794af9e0edad", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18", Pod:"coredns-668d6bf9bc-zgzhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali156b3ea906a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.790 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.791 [INFO][4973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" iface="eth0" netns="" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.791 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.791 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.849 [INFO][4982] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.849 [INFO][4982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.849 [INFO][4982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.861 [WARNING][4982] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.861 [INFO][4982] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.864 [INFO][4982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:43.870287 containerd[1506]: 2025-11-01 01:52:43.866 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:43.871247 containerd[1506]: time="2025-11-01T01:52:43.870224983Z" level=info msg="TearDown network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" successfully" Nov 1 01:52:43.871364 containerd[1506]: time="2025-11-01T01:52:43.871135983Z" level=info msg="StopPodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" returns successfully" Nov 1 01:52:43.873345 containerd[1506]: time="2025-11-01T01:52:43.872706114Z" level=info msg="RemovePodSandbox for \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" Nov 1 01:52:43.873345 containerd[1506]: time="2025-11-01T01:52:43.872773403Z" level=info msg="Forcibly stopping sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\"" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.939 [WARNING][4996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dea2f1d9-48f1-44e9-bd05-794af9e0edad", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"79a6aba9cdf07a55054d0c822722d405bfeafe3d04139eea69230b48c4f8dc18", Pod:"coredns-668d6bf9bc-zgzhk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali156b3ea906a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:44.021373 containerd[1506]: 
2025-11-01 01:52:43.940 [INFO][4996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.940 [INFO][4996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" iface="eth0" netns="" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.940 [INFO][4996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.940 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.995 [INFO][5003] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.996 [INFO][5003] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:43.996 [INFO][5003] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:44.009 [WARNING][5003] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:44.009 [INFO][5003] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" HandleID="k8s-pod-network.58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--zgzhk-eth0" Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:44.012 [INFO][5003] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:44.021373 containerd[1506]: 2025-11-01 01:52:44.016 [INFO][4996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c" Nov 1 01:52:44.022954 containerd[1506]: time="2025-11-01T01:52:44.021472833Z" level=info msg="TearDown network for sandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" successfully" Nov 1 01:52:44.031077 containerd[1506]: time="2025-11-01T01:52:44.029516543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:52:44.031077 containerd[1506]: time="2025-11-01T01:52:44.029652846Z" level=info msg="RemovePodSandbox \"58ba0c9fa5a6c055104d00ec9c16dbe57eb250f80b0596d1361785d52fc3151c\" returns successfully" Nov 1 01:52:44.031077 containerd[1506]: time="2025-11-01T01:52:44.030577216Z" level=info msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.122 [WARNING][5017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec724e45-3797-40ba-a9db-970952094e39", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414", Pod:"csi-node-driver-f685l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali101b555a642", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.125 [INFO][5017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.125 [INFO][5017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" iface="eth0" netns="" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.125 [INFO][5017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.125 [INFO][5017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.183 [INFO][5028] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.184 [INFO][5028] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.184 [INFO][5028] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.194 [WARNING][5028] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.195 [INFO][5028] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.197 [INFO][5028] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:44.203908 containerd[1506]: 2025-11-01 01:52:44.200 [INFO][5017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.204829 containerd[1506]: time="2025-11-01T01:52:44.203866125Z" level=info msg="TearDown network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" successfully" Nov 1 01:52:44.204829 containerd[1506]: time="2025-11-01T01:52:44.203968818Z" level=info msg="StopPodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" returns successfully" Nov 1 01:52:44.208822 containerd[1506]: time="2025-11-01T01:52:44.208437807Z" level=info msg="RemovePodSandbox for \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" Nov 1 01:52:44.208822 containerd[1506]: time="2025-11-01T01:52:44.208567047Z" level=info msg="Forcibly stopping sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\"" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.267 [WARNING][5042] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ec724e45-3797-40ba-a9db-970952094e39", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"47e2c477244404e69e6d890f05dc8bc32bd27e7ad20006c54346465ee7036414", Pod:"csi-node-driver-f685l", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali101b555a642", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.267 [INFO][5042] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.268 [INFO][5042] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" iface="eth0" netns="" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.268 [INFO][5042] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.268 [INFO][5042] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.306 [INFO][5049] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.307 [INFO][5049] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.307 [INFO][5049] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.322 [WARNING][5049] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.322 [INFO][5049] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" HandleID="k8s-pod-network.876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Workload="srv--d9muf.gb1.brightbox.com-k8s-csi--node--driver--f685l-eth0" Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.325 [INFO][5049] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:44.329775 containerd[1506]: 2025-11-01 01:52:44.327 [INFO][5042] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed" Nov 1 01:52:44.330769 containerd[1506]: time="2025-11-01T01:52:44.329867748Z" level=info msg="TearDown network for sandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" successfully" Nov 1 01:52:44.333797 containerd[1506]: time="2025-11-01T01:52:44.333750465Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:52:44.333903 containerd[1506]: time="2025-11-01T01:52:44.333831411Z" level=info msg="RemovePodSandbox \"876e605ba99a6deed790e8c1677642996a51f5e8d79d33d216f81ae79d82ffed\" returns successfully"
Nov 1 01:52:44.334676 containerd[1506]: time="2025-11-01T01:52:44.334629738Z" level=info msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\""
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.444 [WARNING][5063] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"961d53cb-00c8-4e88-869d-034281366b6b", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c", Pod:"calico-apiserver-54865fd995-mzf2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e8456f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.445 [INFO][5063] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.445 [INFO][5063] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" iface="eth0" netns=""
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.445 [INFO][5063] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.445 [INFO][5063] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.491 [INFO][5070] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.491 [INFO][5070] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.491 [INFO][5070] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.503 [WARNING][5070] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.503 [INFO][5070] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.505 [INFO][5070] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:44.509285 containerd[1506]: 2025-11-01 01:52:44.507 [INFO][5063] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.510613 containerd[1506]: time="2025-11-01T01:52:44.509344122Z" level=info msg="TearDown network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" successfully"
Nov 1 01:52:44.510613 containerd[1506]: time="2025-11-01T01:52:44.509380518Z" level=info msg="StopPodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" returns successfully"
Nov 1 01:52:44.510613 containerd[1506]: time="2025-11-01T01:52:44.510401097Z" level=info msg="RemovePodSandbox for \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\""
Nov 1 01:52:44.510613 containerd[1506]: time="2025-11-01T01:52:44.510456767Z" level=info msg="Forcibly stopping sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\""
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.571 [WARNING][5084] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"961d53cb-00c8-4e88-869d-034281366b6b", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"07f8dd5db46e505b0581a78983fcb4096ee6117b4a998b08ee5e16aae00deb4c", Pod:"calico-apiserver-54865fd995-mzf2r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6f7e8456f30", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.571 [INFO][5084] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.571 [INFO][5084] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" iface="eth0" netns=""
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.571 [INFO][5084] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.571 [INFO][5084] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.604 [INFO][5092] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.604 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.604 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.614 [WARNING][5092] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.614 [INFO][5092] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" HandleID="k8s-pod-network.1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--mzf2r-eth0"
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.616 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:44.621304 containerd[1506]: 2025-11-01 01:52:44.619 [INFO][5084] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad"
Nov 1 01:52:44.622683 containerd[1506]: time="2025-11-01T01:52:44.621358630Z" level=info msg="TearDown network for sandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" successfully"
Nov 1 01:52:44.625187 containerd[1506]: time="2025-11-01T01:52:44.625135400Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 01:52:44.625298 containerd[1506]: time="2025-11-01T01:52:44.625209351Z" level=info msg="RemovePodSandbox \"1612555e576481a4732b47ee7177ebd38cacf93b0a852e9e717204616c762bad\" returns successfully"
Nov 1 01:52:44.625851 containerd[1506]: time="2025-11-01T01:52:44.625809568Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\""
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.701 [WARNING][5106] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.701 [INFO][5106] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.701 [INFO][5106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" iface="eth0" netns=""
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.701 [INFO][5106] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.701 [INFO][5106] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.744 [INFO][5112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.744 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.744 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.754 [WARNING][5112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.754 [INFO][5112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.756 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:44.761986 containerd[1506]: 2025-11-01 01:52:44.758 [INFO][5106] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.761986 containerd[1506]: time="2025-11-01T01:52:44.760974009Z" level=info msg="TearDown network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" successfully"
Nov 1 01:52:44.761986 containerd[1506]: time="2025-11-01T01:52:44.761078033Z" level=info msg="StopPodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" returns successfully"
Nov 1 01:52:44.762702 containerd[1506]: time="2025-11-01T01:52:44.762366178Z" level=info msg="RemovePodSandbox for \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\""
Nov 1 01:52:44.762702 containerd[1506]: time="2025-11-01T01:52:44.762424446Z" level=info msg="Forcibly stopping sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\""
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.814 [WARNING][5126] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" WorkloadEndpoint="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.814 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.814 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" iface="eth0" netns=""
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.814 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.814 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.848 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.848 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.848 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.857 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.857 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" HandleID="k8s-pod-network.8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab" Workload="srv--d9muf.gb1.brightbox.com-k8s-whisker--6c74bc7d7--w87m8-eth0"
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.859 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:44.864732 containerd[1506]: 2025-11-01 01:52:44.861 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab"
Nov 1 01:52:44.865989 containerd[1506]: time="2025-11-01T01:52:44.864894466Z" level=info msg="TearDown network for sandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" successfully"
Nov 1 01:52:44.870180 containerd[1506]: time="2025-11-01T01:52:44.870131195Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 01:52:44.870286 containerd[1506]: time="2025-11-01T01:52:44.870196304Z" level=info msg="RemovePodSandbox \"8b6bad288b001ccdbd7855843aefd079945c64ea9cbc5d5f832a3ee7952845ab\" returns successfully"
Nov 1 01:52:44.870774 containerd[1506]: time="2025-11-01T01:52:44.870731421Z" level=info msg="StopPodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\""
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.928 [WARNING][5147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0", GenerateName:"calico-kube-controllers-7dfccfdf99-", Namespace:"calico-system", SelfLink:"", UID:"0303fd48-19b2-41d1-991c-312dc81409eb", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfccfdf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874", Pod:"calico-kube-controllers-7dfccfdf99-qcr45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc640d8452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.929 [INFO][5147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.929 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" iface="eth0" netns=""
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.929 [INFO][5147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.929 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.961 [INFO][5154] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.961 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.961 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.981 [WARNING][5154] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.981 [INFO][5154] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.986 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:44.992694 containerd[1506]: 2025-11-01 01:52:44.989 [INFO][5147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:44.996128 containerd[1506]: time="2025-11-01T01:52:44.993122373Z" level=info msg="TearDown network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" successfully"
Nov 1 01:52:44.996128 containerd[1506]: time="2025-11-01T01:52:44.993160597Z" level=info msg="StopPodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" returns successfully"
Nov 1 01:52:44.996128 containerd[1506]: time="2025-11-01T01:52:44.994222602Z" level=info msg="RemovePodSandbox for \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\""
Nov 1 01:52:44.996128 containerd[1506]: time="2025-11-01T01:52:44.994260605Z" level=info msg="Forcibly stopping sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\""
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.095 [WARNING][5168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0", GenerateName:"calico-kube-controllers-7dfccfdf99-", Namespace:"calico-system", SelfLink:"", UID:"0303fd48-19b2-41d1-991c-312dc81409eb", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7dfccfdf99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"809f6f33ab582776ff4c995d90d24dcfe31ffb7b854dd36f0ec32a28ef65e874", Pod:"calico-kube-controllers-7dfccfdf99-qcr45", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliefc640d8452", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.095 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.095 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" iface="eth0" netns=""
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.095 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.095 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.158 [INFO][5175] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.158 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.158 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.170 [WARNING][5175] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.172 [INFO][5175] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" HandleID="k8s-pod-network.c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--kube--controllers--7dfccfdf99--qcr45-eth0"
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.174 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:45.179989 containerd[1506]: 2025-11-01 01:52:45.177 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6"
Nov 1 01:52:45.179989 containerd[1506]: time="2025-11-01T01:52:45.179767722Z" level=info msg="TearDown network for sandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" successfully"
Nov 1 01:52:45.183695 containerd[1506]: time="2025-11-01T01:52:45.183641026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 01:52:45.183798 containerd[1506]: time="2025-11-01T01:52:45.183755268Z" level=info msg="RemovePodSandbox \"c46ec4a79d9ffaca5ec22d018a60057f0a220bee0e208e55fb3d5206c11525e6\" returns successfully"
Nov 1 01:52:45.185066 containerd[1506]: time="2025-11-01T01:52:45.184663306Z" level=info msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\""
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.244 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d3e007c-5fa3-444d-bda8-4fe6a895dd94", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b", Pod:"coredns-668d6bf9bc-qvbkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c2a971497a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.245 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.245 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" iface="eth0" netns=""
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.245 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.245 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.280 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.280 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.280 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.289 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.290 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.292 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:45.296974 containerd[1506]: 2025-11-01 01:52:45.294 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.298169 containerd[1506]: time="2025-11-01T01:52:45.297957244Z" level=info msg="TearDown network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" successfully"
Nov 1 01:52:45.298169 containerd[1506]: time="2025-11-01T01:52:45.298001570Z" level=info msg="StopPodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" returns successfully"
Nov 1 01:52:45.300113 containerd[1506]: time="2025-11-01T01:52:45.299672151Z" level=info msg="RemovePodSandbox for \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\""
Nov 1 01:52:45.300113 containerd[1506]: time="2025-11-01T01:52:45.299722204Z" level=info msg="Forcibly stopping sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\""
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.351 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5d3e007c-5fa3-444d-bda8-4fe6a895dd94", ResourceVersion:"1028", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 51, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"9a939e0b865468e90ae13266dfbc902e708d25f79345defbaa9f925ddd56966b", Pod:"coredns-668d6bf9bc-qvbkg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3c2a971497a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.352 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.352 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" iface="eth0" netns=""
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.352 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.352 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.390 [INFO][5217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.390 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.391 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.404 [WARNING][5217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.404 [INFO][5217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" HandleID="k8s-pod-network.5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289" Workload="srv--d9muf.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qvbkg-eth0"
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.406 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 1 01:52:45.411295 containerd[1506]: 2025-11-01 01:52:45.408 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289"
Nov 1 01:52:45.413485 containerd[1506]: time="2025-11-01T01:52:45.411884710Z" level=info msg="TearDown network for sandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" successfully"
Nov 1 01:52:45.419174 containerd[1506]: time="2025-11-01T01:52:45.418920454Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 1 01:52:45.419174 containerd[1506]: time="2025-11-01T01:52:45.418990868Z" level=info msg="RemovePodSandbox \"5dab467ebf90d915ae2bbc0806174e6be398d7d02ca01b5485dd288d9a593289\" returns successfully" Nov 1 01:52:45.419889 containerd[1506]: time="2025-11-01T01:52:45.419738789Z" level=info msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.472 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"692e4d02-4b9e-43c3-8a3c-87f80adc9cda", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18", Pod:"goldmane-666569f655-2z2fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali5c065fd31a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.473 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.473 [INFO][5232] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" iface="eth0" netns="" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.473 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.473 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.504 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.505 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.505 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.514 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.514 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.516 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:45.521491 containerd[1506]: 2025-11-01 01:52:45.518 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.521491 containerd[1506]: time="2025-11-01T01:52:45.521259140Z" level=info msg="TearDown network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" successfully" Nov 1 01:52:45.521491 containerd[1506]: time="2025-11-01T01:52:45.521296748Z" level=info msg="StopPodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" returns successfully" Nov 1 01:52:45.523655 containerd[1506]: time="2025-11-01T01:52:45.523211433Z" level=info msg="RemovePodSandbox for \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" Nov 1 01:52:45.523655 containerd[1506]: time="2025-11-01T01:52:45.523275059Z" level=info msg="Forcibly stopping sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\"" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.571 [WARNING][5253] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"692e4d02-4b9e-43c3-8a3c-87f80adc9cda", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"f538355915363891449701afc0e7d6d93080c04c035acba0f01fdc83973acd18", Pod:"goldmane-666569f655-2z2fz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5c065fd31a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.571 [INFO][5253] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.571 [INFO][5253] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" iface="eth0" netns="" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.571 [INFO][5253] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.571 [INFO][5253] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.601 [INFO][5260] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.602 [INFO][5260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.602 [INFO][5260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.614 [WARNING][5260] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.614 [INFO][5260] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" HandleID="k8s-pod-network.7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Workload="srv--d9muf.gb1.brightbox.com-k8s-goldmane--666569f655--2z2fz-eth0" Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.616 [INFO][5260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:45.621560 containerd[1506]: 2025-11-01 01:52:45.619 [INFO][5253] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd" Nov 1 01:52:45.624178 containerd[1506]: time="2025-11-01T01:52:45.621617600Z" level=info msg="TearDown network for sandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" successfully" Nov 1 01:52:45.625188 containerd[1506]: time="2025-11-01T01:52:45.625150560Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:52:45.625348 containerd[1506]: time="2025-11-01T01:52:45.625232042Z" level=info msg="RemovePodSandbox \"7ec2b1813076670fd76c1aaa2c290f4fdf79b8983e19302f691b24c2b1e3a5fd\" returns successfully" Nov 1 01:52:45.626159 containerd[1506]: time="2025-11-01T01:52:45.626106230Z" level=info msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.679 [WARNING][5275] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54af395-651d-45bc-acec-8a87e82ec93b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9", Pod:"calico-apiserver-54865fd995-lqdk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib540875e445", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.679 [INFO][5275] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.679 [INFO][5275] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" iface="eth0" netns="" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.679 [INFO][5275] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.679 [INFO][5275] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.723 [INFO][5282] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.723 [INFO][5282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.723 [INFO][5282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.734 [WARNING][5282] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.734 [INFO][5282] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.737 [INFO][5282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:45.742723 containerd[1506]: 2025-11-01 01:52:45.739 [INFO][5275] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.745095 containerd[1506]: time="2025-11-01T01:52:45.742780319Z" level=info msg="TearDown network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" successfully" Nov 1 01:52:45.745095 containerd[1506]: time="2025-11-01T01:52:45.742821843Z" level=info msg="StopPodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" returns successfully" Nov 1 01:52:45.745095 containerd[1506]: time="2025-11-01T01:52:45.744106475Z" level=info msg="RemovePodSandbox for \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" Nov 1 01:52:45.745095 containerd[1506]: time="2025-11-01T01:52:45.744146421Z" level=info msg="Forcibly stopping sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\"" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.806 [WARNING][5303] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0", GenerateName:"calico-apiserver-54865fd995-", Namespace:"calico-apiserver", SelfLink:"", UID:"f54af395-651d-45bc-acec-8a87e82ec93b", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 1, 52, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54865fd995", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-d9muf.gb1.brightbox.com", ContainerID:"928433c04370af7a88cf27906b5a2b497bbca2466eb3964d165082651ec824e9", Pod:"calico-apiserver-54865fd995-lqdk2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib540875e445", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.807 [INFO][5303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.807 [INFO][5303] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" iface="eth0" netns="" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.807 [INFO][5303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.807 [INFO][5303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.842 [INFO][5311] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.842 [INFO][5311] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.842 [INFO][5311] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.851 [WARNING][5311] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.851 [INFO][5311] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" HandleID="k8s-pod-network.c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Workload="srv--d9muf.gb1.brightbox.com-k8s-calico--apiserver--54865fd995--lqdk2-eth0" Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.854 [INFO][5311] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 01:52:45.858772 containerd[1506]: 2025-11-01 01:52:45.856 [INFO][5303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18" Nov 1 01:52:45.861905 containerd[1506]: time="2025-11-01T01:52:45.859789571Z" level=info msg="TearDown network for sandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" successfully" Nov 1 01:52:45.864417 containerd[1506]: time="2025-11-01T01:52:45.864326554Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 01:52:45.864490 containerd[1506]: time="2025-11-01T01:52:45.864453617Z" level=info msg="RemovePodSandbox \"c2be869077ddd724cb44ec130c0442d61d9d6874d755fbc9d29a73c2a06ffd18\" returns successfully" Nov 1 01:52:50.607438 containerd[1506]: time="2025-11-01T01:52:50.607318263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:52:50.947745 containerd[1506]: time="2025-11-01T01:52:50.947451008Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:50.949522 containerd[1506]: time="2025-11-01T01:52:50.949066597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:50.949522 containerd[1506]: time="2025-11-01T01:52:50.949085853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:52:50.950461 kubelet[2677]: E1101 01:52:50.949852 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:52:50.950461 kubelet[2677]: E1101 01:52:50.950003 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:52:50.950461 kubelet[2677]: E1101 01:52:50.950362 2677 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsz8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:50.952095 kubelet[2677]: E1101 01:52:50.952008 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:52:52.607746 containerd[1506]: time="2025-11-01T01:52:52.607227861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:52:52.916786 containerd[1506]: time="2025-11-01T01:52:52.916363564Z" level=info msg="trying next host - response was 
http.StatusNotFound" host=ghcr.io Nov 1 01:52:52.917950 containerd[1506]: time="2025-11-01T01:52:52.917751907Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:52:52.917950 containerd[1506]: time="2025-11-01T01:52:52.917865180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:52:52.918275 kubelet[2677]: E1101 01:52:52.918230 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:52:52.918722 kubelet[2677]: E1101 01:52:52.918299 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:52:52.918722 kubelet[2677]: E1101 01:52:52.918488 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:52.921521 containerd[1506]: time="2025-11-01T01:52:52.921479932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:52:53.231848 containerd[1506]: time="2025-11-01T01:52:53.231722896Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:53.233113 containerd[1506]: time="2025-11-01T01:52:53.233039202Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:52:53.233233 containerd[1506]: time="2025-11-01T01:52:53.233168222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:52:53.233469 kubelet[2677]: E1101 01:52:53.233387 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:52:53.233576 kubelet[2677]: E1101 01:52:53.233470 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:52:53.233692 kubelet[2677]: E1101 
01:52:53.233632 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:53.235374 kubelet[2677]: E1101 01:52:53.235263 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:52:53.609114 containerd[1506]: time="2025-11-01T01:52:53.608509715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:52:53.929291 containerd[1506]: time="2025-11-01T01:52:53.929086757Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:53.930641 containerd[1506]: time="2025-11-01T01:52:53.930542762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:52:53.930788 
containerd[1506]: time="2025-11-01T01:52:53.930688094Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:52:53.931208 kubelet[2677]: E1101 01:52:53.931120 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:52:53.932569 kubelet[2677]: E1101 01:52:53.931242 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:52:53.932569 kubelet[2677]: E1101 01:52:53.931684 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c8f59ef31024bbcaacb44f54e1035cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:53.932786 containerd[1506]: time="2025-11-01T01:52:53.931669491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:52:54.254332 
containerd[1506]: time="2025-11-01T01:52:54.254208019Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:54.255651 containerd[1506]: time="2025-11-01T01:52:54.255534336Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:52:54.255651 containerd[1506]: time="2025-11-01T01:52:54.255572358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:54.255919 kubelet[2677]: E1101 01:52:54.255841 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:54.256050 kubelet[2677]: E1101 01:52:54.255937 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:54.256357 kubelet[2677]: E1101 01:52:54.256282 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d47l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:54.257664 containerd[1506]: time="2025-11-01T01:52:54.257342510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:52:54.257975 kubelet[2677]: E1101 01:52:54.257711 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:52:54.584049 containerd[1506]: time="2025-11-01T01:52:54.583576071Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:54.586235 containerd[1506]: time="2025-11-01T01:52:54.586083208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:52:54.586235 containerd[1506]: time="2025-11-01T01:52:54.586157657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:52:54.587158 kubelet[2677]: E1101 01:52:54.586681 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:52:54.587158 kubelet[2677]: E1101 01:52:54.586772 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:52:54.587158 kubelet[2677]: E1101 01:52:54.587070 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capa
bilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:54.589234 kubelet[2677]: E1101 01:52:54.589156 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a" Nov 1 01:52:54.606170 containerd[1506]: time="2025-11-01T01:52:54.605998521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:52:54.913478 containerd[1506]: 
time="2025-11-01T01:52:54.913229481Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:54.914979 containerd[1506]: time="2025-11-01T01:52:54.914820371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:52:54.914979 containerd[1506]: time="2025-11-01T01:52:54.914887237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:52:54.915291 kubelet[2677]: E1101 01:52:54.915221 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:52:54.915409 kubelet[2677]: E1101 01:52:54.915355 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:52:54.916070 kubelet[2677]: E1101 01:52:54.915613 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxth2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:54.917008 kubelet[2677]: E1101 01:52:54.916970 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:52:56.607138 containerd[1506]: time="2025-11-01T01:52:56.606639565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:52:56.917536 containerd[1506]: 
time="2025-11-01T01:52:56.917334235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:52:56.919060 containerd[1506]: time="2025-11-01T01:52:56.918961609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:52:56.919242 containerd[1506]: time="2025-11-01T01:52:56.919157149Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:52:56.919777 kubelet[2677]: E1101 01:52:56.919437 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:56.919777 kubelet[2677]: E1101 01:52:56.919517 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:52:56.920684 kubelet[2677]: E1101 01:52:56.919723 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:52:56.922113 kubelet[2677]: E1101 01:52:56.921551 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:53:03.609559 kubelet[2677]: E1101 01:53:03.609250 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:53:05.611483 kubelet[2677]: E1101 01:53:05.610350 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:53:06.606351 kubelet[2677]: E1101 01:53:06.605296 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:53:06.607296 kubelet[2677]: E1101 01:53:06.607195 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a" Nov 1 01:53:07.614352 kubelet[2677]: E1101 01:53:07.614230 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:53:09.611898 kubelet[2677]: E1101 01:53:09.611375 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:53:16.384411 systemd[1]: Started sshd@9-10.230.17.2:22-147.75.109.163:52942.service - OpenSSH per-connection server daemon (147.75.109.163:52942). Nov 1 01:53:17.371372 sshd[5366]: Accepted publickey for core from 147.75.109.163 port 52942 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:53:17.375976 sshd[5366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:53:17.390270 systemd-logind[1487]: New session 12 of user core. 
Nov 1 01:53:17.401890 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 01:53:17.614442 containerd[1506]: time="2025-11-01T01:53:17.614361738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:53:17.943424 containerd[1506]: time="2025-11-01T01:53:17.943356799Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:17.945222 containerd[1506]: time="2025-11-01T01:53:17.945153256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:53:17.946476 kubelet[2677]: E1101 01:53:17.945892 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:53:17.946476 kubelet[2677]: E1101 01:53:17.946070 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:53:17.946476 kubelet[2677]: E1101 01:53:17.946414 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxth2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:17.952157 kubelet[2677]: E1101 01:53:17.948105 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:53:17.952283 containerd[1506]: time="2025-11-01T01:53:17.945360912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes 
read=85" Nov 1 01:53:18.617536 containerd[1506]: time="2025-11-01T01:53:18.617199130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:53:18.755501 sshd[5366]: pam_unix(sshd:session): session closed for user core Nov 1 01:53:18.765141 systemd[1]: sshd@9-10.230.17.2:22-147.75.109.163:52942.service: Deactivated successfully. Nov 1 01:53:18.770731 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 01:53:18.774288 systemd-logind[1487]: Session 12 logged out. Waiting for processes to exit. Nov 1 01:53:18.778524 systemd-logind[1487]: Removed session 12. Nov 1 01:53:18.944893 containerd[1506]: time="2025-11-01T01:53:18.944255268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:18.947107 containerd[1506]: time="2025-11-01T01:53:18.946302489Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:53:18.947107 containerd[1506]: time="2025-11-01T01:53:18.946380741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:53:18.947522 kubelet[2677]: E1101 01:53:18.947451 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:53:18.948401 kubelet[2677]: E1101 01:53:18.947547 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:53:18.948401 kubelet[2677]: E1101 01:53:18.947755 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsz8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:18.950707 kubelet[2677]: E1101 01:53:18.949260 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 
01:53:19.608124 containerd[1506]: time="2025-11-01T01:53:19.607152132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:53:19.953832 containerd[1506]: time="2025-11-01T01:53:19.953095840Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:19.955495 containerd[1506]: time="2025-11-01T01:53:19.955426176Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:53:19.955670 containerd[1506]: time="2025-11-01T01:53:19.955451254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:53:19.955961 kubelet[2677]: E1101 01:53:19.955877 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:53:19.956466 kubelet[2677]: E1101 01:53:19.955984 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:53:19.956466 kubelet[2677]: E1101 01:53:19.956330 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d47l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:19.958109 kubelet[2677]: E1101 01:53:19.958041 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b" Nov 1 01:53:21.611742 containerd[1506]: time="2025-11-01T01:53:21.611653839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:53:21.934996 containerd[1506]: time="2025-11-01T01:53:21.934378940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:21.937276 containerd[1506]: time="2025-11-01T01:53:21.937193542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:53:21.937475 containerd[1506]: time="2025-11-01T01:53:21.937353056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:53:21.938379 kubelet[2677]: E1101 01:53:21.938279 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:53:21.939516 kubelet[2677]: E1101 01:53:21.938437 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 01:53:21.941488 containerd[1506]: time="2025-11-01T01:53:21.939667049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 01:53:21.941647 kubelet[2677]: E1101 01:53:21.941102 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c8f59ef31024bbcaacb44f54e1035cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*1
0001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:22.270134 containerd[1506]: time="2025-11-01T01:53:22.270035408Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:22.272036 containerd[1506]: time="2025-11-01T01:53:22.271883936Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:53:22.272036 containerd[1506]: time="2025-11-01T01:53:22.271942644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:53:22.272341 kubelet[2677]: E1101 01:53:22.272276 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:53:22.272442 kubelet[2677]: E1101 01:53:22.272367 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:53:22.272830 kubelet[2677]: E1101 01:53:22.272759 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},Ter
minationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:22.273584 containerd[1506]: time="2025-11-01T01:53:22.273545834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 01:53:22.601191 containerd[1506]: time="2025-11-01T01:53:22.600482852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:22.602335 containerd[1506]: time="2025-11-01T01:53:22.602040053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 01:53:22.602335 containerd[1506]: time="2025-11-01T01:53:22.602215994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 01:53:22.602776 kubelet[2677]: E1101 01:53:22.602627 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:53:22.606134 kubelet[2677]: E1101 01:53:22.602796 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 01:53:22.606134 kubelet[2677]: E1101 01:53:22.603585 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},A
ppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 01:53:22.606134 kubelet[2677]: E1101 01:53:22.605372 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a" Nov 1 01:53:22.606445 containerd[1506]: time="2025-11-01T01:53:22.604110802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:53:22.921589 containerd[1506]: time="2025-11-01T01:53:22.920684284Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:53:22.927147 containerd[1506]: time="2025-11-01T01:53:22.924192442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:53:22.927147 containerd[1506]: time="2025-11-01T01:53:22.924226643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:53:22.927147 containerd[1506]: time="2025-11-01T01:53:22.926789102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:53:22.927537 kubelet[2677]: E1101 01:53:22.926153 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:53:22.927537 kubelet[2677]: E1101 01:53:22.926242 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:53:22.927537 kubelet[2677]: E1101 01:53:22.926649 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 01:53:22.927967 kubelet[2677]: E1101 01:53:22.927903 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39"
Nov 1 01:53:23.243296 containerd[1506]: time="2025-11-01T01:53:23.241494378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 01:53:23.246078 containerd[1506]: time="2025-11-01T01:53:23.245867352Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 01:53:23.246078 containerd[1506]: time="2025-11-01T01:53:23.245988566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 01:53:23.246661 kubelet[2677]: E1101 01:53:23.246333 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:53:23.249262 kubelet[2677]: E1101 01:53:23.248058 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:53:23.249262 kubelet[2677]: E1101 01:53:23.248281 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 01:53:23.249614 kubelet[2677]: E1101 01:53:23.249554 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b"
Nov 1 01:53:23.921538 systemd[1]: Started sshd@10-10.230.17.2:22-147.75.109.163:60852.service - OpenSSH per-connection server daemon (147.75.109.163:60852).
Nov 1 01:53:24.849750 sshd[5395]: Accepted publickey for core from 147.75.109.163 port 60852 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:24.852574 sshd[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:24.869140 systemd-logind[1487]: New session 13 of user core.
Nov 1 01:53:24.876256 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 1 01:53:25.783669 sshd[5395]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:25.791889 systemd[1]: sshd@10-10.230.17.2:22-147.75.109.163:60852.service: Deactivated successfully.
Nov 1 01:53:25.797264 systemd[1]: session-13.scope: Deactivated successfully.
Nov 1 01:53:25.799616 systemd-logind[1487]: Session 13 logged out. Waiting for processes to exit.
Nov 1 01:53:25.802756 systemd-logind[1487]: Removed session 13.
Nov 1 01:53:30.607567 kubelet[2677]: E1101 01:53:30.607354 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda"
Nov 1 01:53:30.950306 systemd[1]: Started sshd@11-10.230.17.2:22-147.75.109.163:35076.service - OpenSSH per-connection server daemon (147.75.109.163:35076).
Nov 1 01:53:31.866562 sshd[5409]: Accepted publickey for core from 147.75.109.163 port 35076 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:31.871262 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:31.889113 systemd-logind[1487]: New session 14 of user core.
Nov 1 01:53:31.893066 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 1 01:53:32.610045 kubelet[2677]: E1101 01:53:32.609819 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb"
Nov 1 01:53:32.642702 sshd[5409]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:32.649660 systemd-logind[1487]: Session 14 logged out. Waiting for processes to exit.
Nov 1 01:53:32.650325 systemd[1]: sshd@11-10.230.17.2:22-147.75.109.163:35076.service: Deactivated successfully.
Nov 1 01:53:32.655500 systemd[1]: session-14.scope: Deactivated successfully.
Nov 1 01:53:32.667686 systemd-logind[1487]: Removed session 14.
Nov 1 01:53:32.811432 systemd[1]: Started sshd@12-10.230.17.2:22-147.75.109.163:35078.service - OpenSSH per-connection server daemon (147.75.109.163:35078).
Nov 1 01:53:33.733098 sshd[5423]: Accepted publickey for core from 147.75.109.163 port 35078 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:33.736571 sshd[5423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:33.744333 systemd-logind[1487]: New session 15 of user core.
Nov 1 01:53:33.750783 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 1 01:53:34.608196 sshd[5423]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:34.611038 kubelet[2677]: E1101 01:53:34.608727 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b"
Nov 1 01:53:34.618381 kubelet[2677]: E1101 01:53:34.613446 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a"
Nov 1 01:53:34.620786 systemd[1]: sshd@12-10.230.17.2:22-147.75.109.163:35078.service: Deactivated successfully.
Nov 1 01:53:34.633206 systemd[1]: session-15.scope: Deactivated successfully.
Nov 1 01:53:34.638002 systemd-logind[1487]: Session 15 logged out. Waiting for processes to exit.
Nov 1 01:53:34.640754 systemd-logind[1487]: Removed session 15.
Nov 1 01:53:34.771591 systemd[1]: Started sshd@13-10.230.17.2:22-147.75.109.163:35094.service - OpenSSH per-connection server daemon (147.75.109.163:35094).
Nov 1 01:53:35.721865 sshd[5434]: Accepted publickey for core from 147.75.109.163 port 35094 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:35.724216 sshd[5434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:35.736302 systemd-logind[1487]: New session 16 of user core.
Nov 1 01:53:35.747476 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 1 01:53:36.844899 sshd[5434]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:36.853725 systemd[1]: sshd@13-10.230.17.2:22-147.75.109.163:35094.service: Deactivated successfully.
Nov 1 01:53:36.858392 systemd[1]: session-16.scope: Deactivated successfully.
Nov 1 01:53:36.859800 systemd-logind[1487]: Session 16 logged out. Waiting for processes to exit.
Nov 1 01:53:36.862208 systemd-logind[1487]: Removed session 16.
Nov 1 01:53:37.611806 kubelet[2677]: E1101 01:53:37.611611 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b"
Nov 1 01:53:37.614861 kubelet[2677]: E1101 01:53:37.612832 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39"
Nov 1 01:53:42.009469 systemd[1]: Started sshd@14-10.230.17.2:22-147.75.109.163:60584.service - OpenSSH per-connection server daemon (147.75.109.163:60584).
Nov 1 01:53:42.939161 sshd[5467]: Accepted publickey for core from 147.75.109.163 port 60584 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:42.941223 sshd[5467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:42.954508 systemd-logind[1487]: New session 17 of user core.
Nov 1 01:53:42.961532 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 01:53:43.722902 sshd[5467]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:43.733401 systemd[1]: sshd@14-10.230.17.2:22-147.75.109.163:60584.service: Deactivated successfully.
Nov 1 01:53:43.740051 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 01:53:43.743137 systemd-logind[1487]: Session 17 logged out. Waiting for processes to exit.
Nov 1 01:53:43.745216 systemd-logind[1487]: Removed session 17.
Nov 1 01:53:45.608140 kubelet[2677]: E1101 01:53:45.607480 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb"
Nov 1 01:53:45.608140 kubelet[2677]: E1101 01:53:45.607657 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda"
Nov 1 01:53:46.607058 kubelet[2677]: E1101 01:53:46.606575 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b"
Nov 1 01:53:46.609346 kubelet[2677]: E1101 01:53:46.609220 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a"
Nov 1 01:53:48.608136 kubelet[2677]: E1101 01:53:48.607989 2677 pod_workers.go:1301] "Error syncing pod, skipping"
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b"
Nov 1 01:53:48.889598 systemd[1]: Started sshd@15-10.230.17.2:22-147.75.109.163:60600.service - OpenSSH per-connection server daemon (147.75.109.163:60600).
Nov 1 01:53:49.612278 kubelet[2677]: E1101 01:53:49.612210 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39"
Nov 1 01:53:49.807637 sshd[5488]: Accepted publickey for core from 147.75.109.163 port 60600 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:49.809096 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by
core(uid=0)
Nov 1 01:53:49.827259 systemd-logind[1487]: New session 18 of user core.
Nov 1 01:53:49.836408 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 01:53:50.574372 sshd[5488]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:50.582602 systemd[1]: sshd@15-10.230.17.2:22-147.75.109.163:60600.service: Deactivated successfully.
Nov 1 01:53:50.588382 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 01:53:50.591705 systemd-logind[1487]: Session 18 logged out. Waiting for processes to exit.
Nov 1 01:53:50.594173 systemd-logind[1487]: Removed session 18.
Nov 1 01:53:55.742588 systemd[1]: Started sshd@16-10.230.17.2:22-147.75.109.163:42368.service - OpenSSH per-connection server daemon (147.75.109.163:42368).
Nov 1 01:53:56.605757 kubelet[2677]: E1101 01:53:56.605570 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda"
Nov 1 01:53:56.658064 sshd[5505]: Accepted publickey for core from 147.75.109.163 port 42368 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:56.659754 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:56.671883 systemd-logind[1487]: New session 19 of user core.
Nov 1 01:53:56.690747 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 01:53:57.465241 sshd[5505]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:57.482554 systemd[1]: sshd@16-10.230.17.2:22-147.75.109.163:42368.service: Deactivated successfully.
Nov 1 01:53:57.490453 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 01:53:57.499383 systemd-logind[1487]: Session 19 logged out. Waiting for processes to exit.
Nov 1 01:53:57.505151 systemd-logind[1487]: Removed session 19.
Nov 1 01:53:57.611880 kubelet[2677]: E1101 01:53:57.611737 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb"
Nov 1 01:53:57.630784 systemd[1]: Started sshd@17-10.230.17.2:22-147.75.109.163:42376.service - OpenSSH per-connection server daemon (147.75.109.163:42376).
Nov 1 01:53:58.580500 sshd[5517]: Accepted publickey for core from 147.75.109.163 port 42376 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:53:58.585687 sshd[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:53:58.599113 systemd-logind[1487]: New session 20 of user core.
Nov 1 01:53:58.607412 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 01:53:59.614846 kubelet[2677]: E1101 01:53:59.613849 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b"
Nov 1 01:53:59.729714 sshd[5517]: pam_unix(sshd:session): session closed for user core
Nov 1 01:53:59.751527 systemd[1]: sshd@17-10.230.17.2:22-147.75.109.163:42376.service: Deactivated successfully.
Nov 1 01:53:59.758694 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 01:53:59.762289 systemd-logind[1487]: Session 20 logged out. Waiting for processes to exit.
Nov 1 01:53:59.766351 systemd-logind[1487]: Removed session 20.
Nov 1 01:53:59.887396 systemd[1]: Started sshd@18-10.230.17.2:22-147.75.109.163:42382.service - OpenSSH per-connection server daemon (147.75.109.163:42382).
Nov 1 01:54:00.607455 kubelet[2677]: E1101 01:54:00.607301 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a"
Nov 1 01:54:00.816116 sshd[5528]: Accepted publickey for core from 147.75.109.163 port 42382 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:54:00.819193 sshd[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:54:00.832282 systemd-logind[1487]: New session 21 of user core.
Nov 1 01:54:00.842642 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 01:54:01.611646 containerd[1506]: time="2025-11-01T01:54:01.611467428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 1 01:54:01.991859 containerd[1506]: time="2025-11-01T01:54:01.991794818Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 01:54:01.993266 containerd[1506]: time="2025-11-01T01:54:01.993199618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 1 01:54:01.993416 containerd[1506]: time="2025-11-01T01:54:01.993354281Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 1 01:54:01.993872 kubelet[2677]: E1101 01:54:01.993791 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:54:01.994505 kubelet[2677]: E1101 01:54:01.993902 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 1 01:54:01.995916 kubelet[2677]: E1101 01:54:01.995346 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d47l7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-54865fd995-mzf2r_calico-apiserver(961d53cb-00c8-4e88-869d-034281366b6b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 1 01:54:01.997261 kubelet[2677]: E1101 01:54:01.997075 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b"
Nov 1 01:54:02.488381 sshd[5528]: pam_unix(sshd:session): session closed for user core
Nov 1 01:54:02.494656 systemd[1]: sshd@18-10.230.17.2:22-147.75.109.163:42382.service: Deactivated successfully.
Nov 1 01:54:02.499508 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 01:54:02.503997 systemd-logind[1487]: Session 21 logged out. Waiting for processes to exit.
Nov 1 01:54:02.507277 systemd-logind[1487]: Removed session 21.
Nov 1 01:54:02.654234 systemd[1]: Started sshd@19-10.230.17.2:22-147.75.109.163:51544.service - OpenSSH per-connection server daemon (147.75.109.163:51544).
Nov 1 01:54:03.609313 containerd[1506]: time="2025-11-01T01:54:03.607991434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 1 01:54:03.629139 sshd[5554]: Accepted publickey for core from 147.75.109.163 port 51544 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:54:03.628865 sshd[5554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:54:03.645675 systemd-logind[1487]: New session 22 of user core.
Nov 1 01:54:03.655445 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 01:54:03.945263 containerd[1506]: time="2025-11-01T01:54:03.944453059Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:03.946734 containerd[1506]: time="2025-11-01T01:54:03.946026769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 01:54:03.946837 containerd[1506]: time="2025-11-01T01:54:03.946038911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 01:54:03.947478 kubelet[2677]: E1101 01:54:03.947404 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:54:03.948621 kubelet[2677]: E1101 01:54:03.947509 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 01:54:03.948621 kubelet[2677]: E1101 01:54:03.947728 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 01:54:03.953343 containerd[1506]: time="2025-11-01T01:54:03.953288580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 01:54:04.277785 containerd[1506]: time="2025-11-01T01:54:04.277715619Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:04.278840 containerd[1506]: time="2025-11-01T01:54:04.278775287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 01:54:04.278955 containerd[1506]: time="2025-11-01T01:54:04.278905098Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 01:54:04.280169 kubelet[2677]: E1101 01:54:04.280100 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:54:04.280662 kubelet[2677]: E1101 01:54:04.280367 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 01:54:04.280856 kubelet[2677]: E1101 
01:54:04.280637 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2vbx2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-f685l_calico-system(ec724e45-3797-40ba-a9db-970952094e39): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 01:54:04.282134 kubelet[2677]: E1101 01:54:04.282067 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39" Nov 1 01:54:04.851355 sshd[5554]: pam_unix(sshd:session): session closed for user core Nov 1 01:54:04.859341 systemd[1]: sshd@19-10.230.17.2:22-147.75.109.163:51544.service: Deactivated successfully. Nov 1 01:54:04.864639 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 01:54:04.867216 systemd-logind[1487]: Session 22 logged out. Waiting for processes to exit. Nov 1 01:54:04.870356 systemd-logind[1487]: Removed session 22. Nov 1 01:54:05.010473 systemd[1]: Started sshd@20-10.230.17.2:22-147.75.109.163:51556.service - OpenSSH per-connection server daemon (147.75.109.163:51556). 
Nov 1 01:54:05.940551 sshd[5566]: Accepted publickey for core from 147.75.109.163 port 51556 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:54:05.944445 sshd[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:54:05.956811 systemd-logind[1487]: New session 23 of user core. Nov 1 01:54:05.962577 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 01:54:06.769963 sshd[5566]: pam_unix(sshd:session): session closed for user core Nov 1 01:54:06.781299 systemd[1]: sshd@20-10.230.17.2:22-147.75.109.163:51556.service: Deactivated successfully. Nov 1 01:54:06.784487 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 01:54:06.787596 systemd-logind[1487]: Session 23 logged out. Waiting for processes to exit. Nov 1 01:54:06.790248 systemd-logind[1487]: Removed session 23. Nov 1 01:54:10.607275 containerd[1506]: time="2025-11-01T01:54:10.607106516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 01:54:10.947466 containerd[1506]: time="2025-11-01T01:54:10.947063632Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:10.950155 containerd[1506]: time="2025-11-01T01:54:10.949920771Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 01:54:10.950155 containerd[1506]: time="2025-11-01T01:54:10.950000839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 01:54:10.952228 kubelet[2677]: E1101 01:54:10.950598 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:54:10.952228 kubelet[2677]: E1101 01:54:10.950798 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 01:54:10.952228 kubelet[2677]: E1101 01:54:10.951665 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwp9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-54865fd995-lqdk2_calico-apiserver(f54af395-651d-45bc-acec-8a87e82ec93b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 01:54:10.953331 kubelet[2677]: E1101 01:54:10.952806 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b" Nov 1 01:54:10.953654 containerd[1506]: time="2025-11-01T01:54:10.953610163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 01:54:11.271485 containerd[1506]: 
time="2025-11-01T01:54:11.270047786Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:11.271485 containerd[1506]: time="2025-11-01T01:54:11.271427614Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 01:54:11.271761 containerd[1506]: time="2025-11-01T01:54:11.271537222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 01:54:11.272088 kubelet[2677]: E1101 01:54:11.272001 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:54:11.272628 kubelet[2677]: E1101 01:54:11.272102 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 01:54:11.272628 kubelet[2677]: E1101 01:54:11.272495 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rsz8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-2z2fz_calico-system(692e4d02-4b9e-43c3-8a3c-87f80adc9cda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 01:54:11.273540 containerd[1506]: time="2025-11-01T01:54:11.273405680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 01:54:11.273943 kubelet[2677]: E1101 01:54:11.273856 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda" Nov 1 01:54:11.579991 containerd[1506]: time="2025-11-01T01:54:11.579790491Z" level=info msg="trying next host - 
response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:11.582382 containerd[1506]: time="2025-11-01T01:54:11.582296828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 01:54:11.582484 containerd[1506]: time="2025-11-01T01:54:11.582408264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 01:54:11.583038 kubelet[2677]: E1101 01:54:11.582755 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:54:11.583038 kubelet[2677]: E1101 01:54:11.582841 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 01:54:11.583438 kubelet[2677]: E1101 01:54:11.583329 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dxth2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7dfccfdf99-qcr45_calico-system(0303fd48-19b2-41d1-991c-312dc81409eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 01:54:11.584603 kubelet[2677]: E1101 01:54:11.584546 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb" Nov 1 01:54:11.939692 systemd[1]: Started sshd@21-10.230.17.2:22-147.75.109.163:51632.service - OpenSSH per-connection server daemon (147.75.109.163:51632). 
Nov 1 01:54:12.882088 sshd[5615]: Accepted publickey for core from 147.75.109.163 port 51632 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A Nov 1 01:54:12.885332 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 01:54:12.900476 systemd-logind[1487]: New session 24 of user core. Nov 1 01:54:12.907554 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 01:54:13.705536 sshd[5615]: pam_unix(sshd:session): session closed for user core Nov 1 01:54:13.712567 systemd[1]: sshd@21-10.230.17.2:22-147.75.109.163:51632.service: Deactivated successfully. Nov 1 01:54:13.717329 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 01:54:13.718656 systemd-logind[1487]: Session 24 logged out. Waiting for processes to exit. Nov 1 01:54:13.722403 systemd-logind[1487]: Removed session 24. Nov 1 01:54:14.606065 containerd[1506]: time="2025-11-01T01:54:14.605771372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 01:54:14.921217 containerd[1506]: time="2025-11-01T01:54:14.921005673Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 01:54:14.922677 containerd[1506]: time="2025-11-01T01:54:14.922632013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 01:54:14.922846 containerd[1506]: time="2025-11-01T01:54:14.922656179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 01:54:14.924056 kubelet[2677]: E1101 01:54:14.923110 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 01:54:14.924056 kubelet[2677]: E1101 01:54:14.923208 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 1 01:54:14.924056 kubelet[2677]: E1101 01:54:14.923373 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4c8f59ef31024bbcaacb44f54e1035cb,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 1 01:54:14.926984 containerd[1506]: time="2025-11-01T01:54:14.926847466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 1 01:54:15.269532 containerd[1506]: time="2025-11-01T01:54:15.269460039Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 01:54:15.271967 containerd[1506]: time="2025-11-01T01:54:15.271280606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 1 01:54:15.271967 containerd[1506]: time="2025-11-01T01:54:15.271392925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 1 01:54:15.272189 kubelet[2677]: E1101 01:54:15.271683 2677 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 01:54:15.272189 kubelet[2677]: E1101 01:54:15.271748 2677 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 1 01:54:15.274311 kubelet[2677]: E1101 01:54:15.272935 2677 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zjzk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-ffbb49cbc-m9nb9_calico-system(9bcbd5c6-4789-405c-8bd4-745ed14fab4a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 1 01:54:15.274311 kubelet[2677]: E1101 01:54:15.274233 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a"
Nov 1 01:54:15.608203 kubelet[2677]: E1101 01:54:15.607248 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b"
Nov 1 01:54:15.609003 kubelet[2677]: E1101 01:54:15.608922 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39"
Nov 1 01:54:18.868412 systemd[1]: Started sshd@22-10.230.17.2:22-147.75.109.163:51646.service - OpenSSH per-connection server daemon (147.75.109.163:51646).
Nov 1 01:54:19.834329 sshd[5637]: Accepted publickey for core from 147.75.109.163 port 51646 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:54:19.837063 sshd[5637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:54:19.851346 systemd-logind[1487]: New session 25 of user core.
Nov 1 01:54:19.858399 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 01:54:20.653547 sshd[5637]: pam_unix(sshd:session): session closed for user core
Nov 1 01:54:20.659850 systemd[1]: sshd@22-10.230.17.2:22-147.75.109.163:51646.service: Deactivated successfully.
Nov 1 01:54:20.666682 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 01:54:20.669186 systemd-logind[1487]: Session 25 logged out. Waiting for processes to exit.
Nov 1 01:54:20.670944 systemd-logind[1487]: Removed session 25.
Nov 1 01:54:24.606361 kubelet[2677]: E1101 01:54:24.606240 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-lqdk2" podUID="f54af395-651d-45bc-acec-8a87e82ec93b"
Nov 1 01:54:25.608311 kubelet[2677]: E1101 01:54:25.607808 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-2z2fz" podUID="692e4d02-4b9e-43c3-8a3c-87f80adc9cda"
Nov 1 01:54:25.608311 kubelet[2677]: E1101 01:54:25.607949 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7dfccfdf99-qcr45" podUID="0303fd48-19b2-41d1-991c-312dc81409eb"
Nov 1 01:54:25.820467 systemd[1]: Started sshd@23-10.230.17.2:22-147.75.109.163:43694.service - OpenSSH per-connection server daemon (147.75.109.163:43694).
Nov 1 01:54:26.735747 sshd[5652]: Accepted publickey for core from 147.75.109.163 port 43694 ssh2: RSA SHA256:wsKwS9St2o/aOqVTG3xb6exC9ZpBVPv1COf4/SxmH0A
Nov 1 01:54:26.738505 sshd[5652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 01:54:26.750228 systemd-logind[1487]: New session 26 of user core.
Nov 1 01:54:26.763461 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 01:54:27.552873 sshd[5652]: pam_unix(sshd:session): session closed for user core
Nov 1 01:54:27.559762 systemd[1]: sshd@23-10.230.17.2:22-147.75.109.163:43694.service: Deactivated successfully.
Nov 1 01:54:27.566267 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 01:54:27.568334 systemd-logind[1487]: Session 26 logged out. Waiting for processes to exit.
Nov 1 01:54:27.572217 systemd-logind[1487]: Removed session 26.
Nov 1 01:54:27.613300 kubelet[2677]: E1101 01:54:27.612584 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54865fd995-mzf2r" podUID="961d53cb-00c8-4e88-869d-034281366b6b"
Nov 1 01:54:28.608346 kubelet[2677]: E1101 01:54:28.608222 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f685l" podUID="ec724e45-3797-40ba-a9db-970952094e39"
Nov 1 01:54:29.611463 kubelet[2677]: E1101 01:54:29.611374 2677 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-ffbb49cbc-m9nb9" podUID="9bcbd5c6-4789-405c-8bd4-745ed14fab4a"