Oct 31 02:10:55.059242 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025
Oct 31 02:10:55.059279 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 02:10:55.059293 kernel: BIOS-provided physical RAM map:
Oct 31 02:10:55.059310 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 31 02:10:55.059320 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 31 02:10:55.059331 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 31 02:10:55.059343 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Oct 31 02:10:55.059354 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Oct 31 02:10:55.059365 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 31 02:10:55.059376 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 31 02:10:55.059387 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 31 02:10:55.059398 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 31 02:10:55.059423 kernel: NX (Execute Disable) protection: active
Oct 31 02:10:55.059436 kernel: APIC: Static calls initialized
Oct 31 02:10:55.059449 kernel: SMBIOS 2.8 present.
Oct 31 02:10:55.059466 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Oct 31 02:10:55.059479 kernel: Hypervisor detected: KVM
Oct 31 02:10:55.059496 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 02:10:55.059508 kernel: kvm-clock: using sched offset of 5674658992 cycles
Oct 31 02:10:55.059521 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 02:10:55.059533 kernel: tsc: Detected 2499.998 MHz processor
Oct 31 02:10:55.059545 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 02:10:55.059557 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 02:10:55.059569 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Oct 31 02:10:55.059581 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 31 02:10:55.059593 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 02:10:55.059610 kernel: Using GB pages for direct mapping
Oct 31 02:10:55.059622 kernel: ACPI: Early table checksum verification disabled
Oct 31 02:10:55.059633 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct 31 02:10:55.059645 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059657 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059669 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059681 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Oct 31 02:10:55.059692 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059704 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059721 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059733 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 02:10:55.059745 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Oct 31 02:10:55.059756 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Oct 31 02:10:55.059768 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Oct 31 02:10:55.059787 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Oct 31 02:10:55.059799 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Oct 31 02:10:55.059816 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Oct 31 02:10:55.059829 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Oct 31 02:10:55.059841 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 31 02:10:55.059868 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 31 02:10:55.059882 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Oct 31 02:10:55.059894 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Oct 31 02:10:55.059906 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Oct 31 02:10:55.059919 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Oct 31 02:10:55.059937 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Oct 31 02:10:55.059949 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Oct 31 02:10:55.059962 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Oct 31 02:10:55.059974 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Oct 31 02:10:55.059986 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Oct 31 02:10:55.059999 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Oct 31 02:10:55.060011 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Oct 31 02:10:55.060024 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Oct 31 02:10:55.060042 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Oct 31 02:10:55.060061 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Oct 31 02:10:55.060073 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 31 02:10:55.060086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Oct 31 02:10:55.060098 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Oct 31 02:10:55.060111 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Oct 31 02:10:55.060124 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Oct 31 02:10:55.060136 kernel: Zone ranges:
Oct 31 02:10:55.060149 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 02:10:55.060174 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Oct 31 02:10:55.060194 kernel: Normal empty
Oct 31 02:10:55.060207 kernel: Movable zone start for each node
Oct 31 02:10:55.060219 kernel: Early memory node ranges
Oct 31 02:10:55.060231 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Oct 31 02:10:55.060244 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Oct 31 02:10:55.060256 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Oct 31 02:10:55.060268 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 02:10:55.060280 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 31 02:10:55.060299 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Oct 31 02:10:55.060312 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 02:10:55.060331 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 02:10:55.060343 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 02:10:55.060356 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 02:10:55.060368 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 02:10:55.060381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 02:10:55.060393 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 02:10:55.060405 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 02:10:55.060418 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 02:10:55.060430 kernel: TSC deadline timer available
Oct 31 02:10:55.060448 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Oct 31 02:10:55.060461 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 31 02:10:55.060473 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 31 02:10:55.060486 kernel: Booting paravirtualized kernel on KVM
Oct 31 02:10:55.060510 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 02:10:55.060523 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Oct 31 02:10:55.060552 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144
Oct 31 02:10:55.060567 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152
Oct 31 02:10:55.060579 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Oct 31 02:10:55.060597 kernel: kvm-guest: PV spinlocks enabled
Oct 31 02:10:55.060610 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 02:10:55.060624 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885
Oct 31 02:10:55.060637 kernel: random: crng init done
Oct 31 02:10:55.060649 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 02:10:55.060662 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 31 02:10:55.060674 kernel: Fallback order for Node 0: 0
Oct 31 02:10:55.060687 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Oct 31 02:10:55.060704 kernel: Policy zone: DMA32
Oct 31 02:10:55.060723 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 02:10:55.060736 kernel: software IO TLB: area num 16.
Oct 31 02:10:55.060749 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 194824K reserved, 0K cma-reserved)
Oct 31 02:10:55.060762 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Oct 31 02:10:55.060775 kernel: Kernel/User page tables isolation: enabled
Oct 31 02:10:55.060787 kernel: ftrace: allocating 37980 entries in 149 pages
Oct 31 02:10:55.060800 kernel: ftrace: allocated 149 pages with 4 groups
Oct 31 02:10:55.060812 kernel: Dynamic Preempt: voluntary
Oct 31 02:10:55.060831 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 02:10:55.060844 kernel: rcu: RCU event tracing is enabled.
Oct 31 02:10:55.060883 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Oct 31 02:10:55.060901 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 02:10:55.060924 kernel: Rude variant of Tasks RCU enabled.
Oct 31 02:10:55.060962 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 02:10:55.060980 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 02:10:55.060994 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Oct 31 02:10:55.061007 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Oct 31 02:10:55.061020 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 31 02:10:55.061033 kernel: Console: colour VGA+ 80x25
Oct 31 02:10:55.061050 kernel: printk: console [tty0] enabled
Oct 31 02:10:55.061064 kernel: printk: console [ttyS0] enabled
Oct 31 02:10:55.061077 kernel: ACPI: Core revision 20230628
Oct 31 02:10:55.061090 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 02:10:55.061103 kernel: x2apic enabled
Oct 31 02:10:55.061116 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 31 02:10:55.061134 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Oct 31 02:10:55.061154 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Oct 31 02:10:55.062323 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 02:10:55.062338 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Oct 31 02:10:55.062352 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Oct 31 02:10:55.062365 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 02:10:55.062378 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 02:10:55.062391 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 02:10:55.062404 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 31 02:10:55.062425 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 02:10:55.062439 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 31 02:10:55.062452 kernel: MDS: Mitigation: Clear CPU buffers
Oct 31 02:10:55.062465 kernel: MMIO Stale Data: Unknown: No mitigations
Oct 31 02:10:55.062478 kernel: SRBDS: Unknown: Dependent on hypervisor status
Oct 31 02:10:55.062490 kernel: active return thunk: its_return_thunk
Oct 31 02:10:55.062503 kernel: ITS: Mitigation: Aligned branch/return thunks
Oct 31 02:10:55.062517 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 02:10:55.062530 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 02:10:55.062543 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 02:10:55.062556 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 02:10:55.062574 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 31 02:10:55.062588 kernel: Freeing SMP alternatives memory: 32K
Oct 31 02:10:55.062608 kernel: pid_max: default: 32768 minimum: 301
Oct 31 02:10:55.062623 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Oct 31 02:10:55.062636 kernel: landlock: Up and running.
Oct 31 02:10:55.062649 kernel: SELinux: Initializing.
Oct 31 02:10:55.062662 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 02:10:55.062675 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Oct 31 02:10:55.062688 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Oct 31 02:10:55.062701 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Oct 31 02:10:55.062714 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Oct 31 02:10:55.062733 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Oct 31 02:10:55.062747 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Oct 31 02:10:55.062760 kernel: signal: max sigframe size: 1776
Oct 31 02:10:55.062773 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 02:10:55.062787 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 02:10:55.062800 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 31 02:10:55.062813 kernel: smp: Bringing up secondary CPUs ...
Oct 31 02:10:55.062826 kernel: smpboot: x86: Booting SMP configuration:
Oct 31 02:10:55.062839 kernel: .... node #0, CPUs: #1
Oct 31 02:10:55.062866 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Oct 31 02:10:55.062881 kernel: smp: Brought up 1 node, 2 CPUs
Oct 31 02:10:55.062894 kernel: smpboot: Max logical packages: 16
Oct 31 02:10:55.062908 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Oct 31 02:10:55.062921 kernel: devtmpfs: initialized
Oct 31 02:10:55.062934 kernel: x86/mm: Memory block size: 128MB
Oct 31 02:10:55.062947 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 02:10:55.062960 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Oct 31 02:10:55.062974 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 02:10:55.062993 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 02:10:55.063006 kernel: audit: initializing netlink subsys (disabled)
Oct 31 02:10:55.063020 kernel: audit: type=2000 audit(1761876653.066:1): state=initialized audit_enabled=0 res=1
Oct 31 02:10:55.063032 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 02:10:55.063046 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 02:10:55.063059 kernel: cpuidle: using governor menu
Oct 31 02:10:55.063072 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 02:10:55.063085 kernel: dca service started, version 1.12.1
Oct 31 02:10:55.063098 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Oct 31 02:10:55.063117 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 31 02:10:55.063130 kernel: PCI: Using configuration type 1 for base access
Oct 31 02:10:55.063143 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 02:10:55.063157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 02:10:55.063190 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 31 02:10:55.063203 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 02:10:55.063217 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 31 02:10:55.063230 kernel: ACPI: Added _OSI(Module Device)
Oct 31 02:10:55.063243 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 02:10:55.063263 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 02:10:55.063276 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 02:10:55.063289 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 31 02:10:55.063302 kernel: ACPI: Interpreter enabled
Oct 31 02:10:55.063315 kernel: ACPI: PM: (supports S0 S5)
Oct 31 02:10:55.063328 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 02:10:55.063341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 02:10:55.063354 kernel: PCI: Using E820 reservations for host bridge windows
Oct 31 02:10:55.063367 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 02:10:55.063386 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 02:10:55.063703 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 02:10:55.063912 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 31 02:10:55.065898 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 31 02:10:55.065922 kernel: PCI host bridge to bus 0000:00
Oct 31 02:10:55.066225 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 02:10:55.066397 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 02:10:55.066604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 02:10:55.066766 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct 31 02:10:55.066941 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 31 02:10:55.067105 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Oct 31 02:10:55.067315 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 02:10:55.067535 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Oct 31 02:10:55.067755 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Oct 31 02:10:55.067974 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Oct 31 02:10:55.069919 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Oct 31 02:10:55.070137 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Oct 31 02:10:55.070354 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 02:10:55.070582 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.070782 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Oct 31 02:10:55.071016 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.071321 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Oct 31 02:10:55.071542 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.071728 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Oct 31 02:10:55.071954 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.072151 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Oct 31 02:10:55.072435 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.072655 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Oct 31 02:10:55.072892 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.073105 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Oct 31 02:10:55.073444 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.073648 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Oct 31 02:10:55.073887 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Oct 31 02:10:55.074067 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Oct 31 02:10:55.074392 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Oct 31 02:10:55.074598 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Oct 31 02:10:55.074780 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Oct 31 02:10:55.074971 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Oct 31 02:10:55.075149 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Oct 31 02:10:55.075461 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Oct 31 02:10:55.075641 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Oct 31 02:10:55.075873 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Oct 31 02:10:55.076058 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Oct 31 02:10:55.076293 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Oct 31 02:10:55.076485 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 02:10:55.076687 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Oct 31 02:10:55.076910 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Oct 31 02:10:55.077089 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Oct 31 02:10:55.077348 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Oct 31 02:10:55.077526 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Oct 31 02:10:55.077730 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Oct 31 02:10:55.077957 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Oct 31 02:10:55.078149 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 31 02:10:55.078357 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 31 02:10:55.078537 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 02:10:55.078770 kernel: pci_bus 0000:02: extended config space not accessible
Oct 31 02:10:55.078998 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Oct 31 02:10:55.079215 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Oct 31 02:10:55.079412 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 31 02:10:55.079594 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 31 02:10:55.079805 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Oct 31 02:10:55.080013 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Oct 31 02:10:55.080312 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 31 02:10:55.080512 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 31 02:10:55.080701 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 02:10:55.080921 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Oct 31 02:10:55.081119 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Oct 31 02:10:55.081316 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 31 02:10:55.081493 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 31 02:10:55.081668 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 02:10:55.081843 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 31 02:10:55.082043 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 31 02:10:55.082267 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 02:10:55.082471 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 31 02:10:55.082648 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 31 02:10:55.082864 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 02:10:55.083066 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 31 02:10:55.083259 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 31 02:10:55.083434 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 02:10:55.083625 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 31 02:10:55.083803 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 31 02:10:55.084019 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 02:10:55.084328 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 31 02:10:55.084553 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 31 02:10:55.084740 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 02:10:55.084761 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 02:10:55.084776 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 02:10:55.084789 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 02:10:55.084803 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 02:10:55.084825 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 02:10:55.084839 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 02:10:55.084852 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 02:10:55.084956 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 02:10:55.084981 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 02:10:55.084995 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 02:10:55.085009 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 02:10:55.085022 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 02:10:55.085035 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 02:10:55.085057 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 02:10:55.085071 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 02:10:55.085084 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 02:10:55.085098 kernel: iommu: Default domain type: Translated
Oct 31 02:10:55.085112 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 02:10:55.085126 kernel: PCI: Using ACPI for IRQ routing
Oct 31 02:10:55.085140 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 02:10:55.085153 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 31 02:10:55.085188 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Oct 31 02:10:55.085385 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 02:10:55.085593 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 02:10:55.085789 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 02:10:55.085810 kernel: vgaarb: loaded
Oct 31 02:10:55.085824 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 02:10:55.085837 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 02:10:55.085851 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 02:10:55.085927 kernel: pnp: PnP ACPI init
Oct 31 02:10:55.086139 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 31 02:10:55.086277 kernel: pnp: PnP ACPI: found 5 devices
Oct 31 02:10:55.086296 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 02:10:55.086310 kernel: NET: Registered PF_INET protocol family
Oct 31 02:10:55.086324 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 02:10:55.086338 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Oct 31 02:10:55.086351 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 02:10:55.086364 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 31 02:10:55.086378 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Oct 31 02:10:55.086400 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Oct 31 02:10:55.086413 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 02:10:55.086427 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Oct 31 02:10:55.086441 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 02:10:55.086454 kernel: NET: Registered PF_XDP protocol family
Oct 31 02:10:55.086638 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Oct 31 02:10:55.086824 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Oct 31 02:10:55.087040 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Oct 31 02:10:55.087281 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Oct 31 02:10:55.087484 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Oct 31 02:10:55.087709 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Oct 31 02:10:55.087899 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Oct 31 02:10:55.088075 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Oct 31 02:10:55.088280 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Oct 31 02:10:55.088465 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Oct 31 02:10:55.088640 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Oct 31 02:10:55.088828 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Oct 31 02:10:55.089016 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Oct 31 02:10:55.089208 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Oct 31 02:10:55.089387 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Oct 31 02:10:55.089564 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Oct 31 02:10:55.089749 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 31 02:10:55.089980 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Oct 31 02:10:55.090178 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 31 02:10:55.090362 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Oct 31 02:10:55.090548 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Oct 31 02:10:55.090727 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 02:10:55.090918 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 31 02:10:55.091098 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Oct 31 02:10:55.091291 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Oct 31 02:10:55.091479 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 02:10:55.091684 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 31 02:10:55.091893 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Oct 31 02:10:55.092078 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Oct 31 02:10:55.092285 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 02:10:55.092467 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 31 02:10:55.092656 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Oct 31 02:10:55.092838 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Oct 31 02:10:55.093096 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 02:10:55.093310 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 31 02:10:55.093508 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Oct 31 02:10:55.093693 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Oct 31 02:10:55.093887 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 02:10:55.094085 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 31 02:10:55.094379 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Oct 31 02:10:55.094577 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Oct 31 02:10:55.094763 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 02:10:55.094970 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 31 02:10:55.095152 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Oct 31 02:10:55.095390 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Oct 31 02:10:55.095616 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 02:10:55.095795 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 31 02:10:55.095985 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Oct 31 02:10:55.096183 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Oct 31 02:10:55.096379 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 02:10:55.096551 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 02:10:55.096712 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 02:10:55.096885 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 02:10:55.097046 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct 31 02:10:55.097240 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 31 02:10:55.097412 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Oct 31 02:10:55.097632 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Oct 31 02:10:55.097805 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Oct 31 02:10:55.097999 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Oct 31 02:10:55.098202 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 31 02:10:55.098418 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Oct 31 02:10:55.098634 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 31 02:10:55.098807 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Oct 31 02:10:55.099078 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Oct 31 02:10:55.099303 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 31 02:10:55.099505 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Oct 31 02:10:55.099694 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Oct 31 02:10:55.099888 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 31 02:10:55.100058 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Oct 31 02:10:55.100274 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Oct 31 02:10:55.100477 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 31 02:10:55.100646 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Oct 31 02:10:55.100849 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Oct 31 02:10:55.101071 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 31 02:10:55.101353 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Oct 31 02:10:55.101534 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Oct 31 02:10:55.101700 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 31 02:10:55.101876 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Oct 31 02:10:55.102067 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Oct 31 02:10:55.102302 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 31 02:10:55.102473 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 31 02:10:55.102504 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 02:10:55.102519 kernel: PCI: CLS 0 bytes, default 64
Oct 31 02:10:55.102534 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 31 02:10:55.102548 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Oct 31 02:10:55.102563 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 02:10:55.102577 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Oct 31 02:10:55.102591 kernel: Initialise system trusted keyrings Oct 31 02:10:55.102606 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 02:10:55.102626 kernel: Key type asymmetric registered Oct 31 02:10:55.102640 kernel: Asymmetric key parser 'x509' registered Oct 31 02:10:55.102654 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 02:10:55.102668 kernel: io scheduler mq-deadline registered Oct 31 02:10:55.102682 kernel: io scheduler kyber registered Oct 31 02:10:55.102697 kernel: io scheduler bfq registered Oct 31 02:10:55.102886 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 31 02:10:55.103069 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 31 02:10:55.103282 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.103472 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 31 02:10:55.103699 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 31 02:10:55.103901 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.104082 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 31 02:10:55.104315 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 31 02:10:55.104507 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.104697 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 31 02:10:55.104889 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Oct 31 02:10:55.105067 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.105272 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 31 02:10:55.105481 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 31 02:10:55.105678 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.105895 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 31 02:10:55.106075 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 31 02:10:55.106269 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.106451 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 31 02:10:55.106628 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 31 02:10:55.106809 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.107011 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 31 02:10:55.107255 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 31 02:10:55.107436 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 02:10:55.107458 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 02:10:55.107473 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 02:10:55.107488 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 02:10:55.107510 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 02:10:55.107525 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 02:10:55.107540 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Oct 31 02:10:55.107554 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 02:10:55.107568 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 02:10:55.107583 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 02:10:55.107770 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 31 02:10:55.107951 kernel: rtc_cmos 00:03: registered as rtc0 Oct 31 02:10:55.108128 kernel: rtc_cmos 00:03: setting system clock to 2025-10-31T02:10:54 UTC (1761876654) Oct 31 02:10:55.108366 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 31 02:10:55.108388 kernel: intel_pstate: CPU model not supported Oct 31 02:10:55.108402 kernel: NET: Registered PF_INET6 protocol family Oct 31 02:10:55.108416 kernel: Segment Routing with IPv6 Oct 31 02:10:55.108430 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 02:10:55.108444 kernel: NET: Registered PF_PACKET protocol family Oct 31 02:10:55.108459 kernel: Key type dns_resolver registered Oct 31 02:10:55.108474 kernel: IPI shorthand broadcast: enabled Oct 31 02:10:55.108496 kernel: sched_clock: Marking stable (1526003864, 227022740)->(2011303633, -258277029) Oct 31 02:10:55.108511 kernel: registered taskstats version 1 Oct 31 02:10:55.108525 kernel: Loading compiled-in X.509 certificates Oct 31 02:10:55.108539 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 02:10:55.108553 kernel: Key type .fscrypt registered Oct 31 02:10:55.108567 kernel: Key type fscrypt-provisioning registered Oct 31 02:10:55.108580 kernel: ima: No TPM chip found, activating TPM-bypass! 
Oct 31 02:10:55.108595 kernel: ima: Allocated hash algorithm: sha1 Oct 31 02:10:55.108609 kernel: ima: No architecture policies found Oct 31 02:10:55.108628 kernel: clk: Disabling unused clocks Oct 31 02:10:55.108642 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 02:10:55.108656 kernel: Write protecting the kernel read-only data: 36864k Oct 31 02:10:55.108670 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 02:10:55.108684 kernel: Run /init as init process Oct 31 02:10:55.108698 kernel: with arguments: Oct 31 02:10:55.108713 kernel: /init Oct 31 02:10:55.108726 kernel: with environment: Oct 31 02:10:55.108740 kernel: HOME=/ Oct 31 02:10:55.108753 kernel: TERM=linux Oct 31 02:10:55.108776 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 02:10:55.108798 systemd[1]: Detected virtualization kvm. Oct 31 02:10:55.108814 systemd[1]: Detected architecture x86-64. Oct 31 02:10:55.108828 systemd[1]: Running in initrd. Oct 31 02:10:55.108843 systemd[1]: No hostname configured, using default hostname. Oct 31 02:10:55.108868 systemd[1]: Hostname set to . Oct 31 02:10:55.108885 systemd[1]: Initializing machine ID from VM UUID. Oct 31 02:10:55.108906 systemd[1]: Queued start job for default target initrd.target. Oct 31 02:10:55.108931 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 02:10:55.108947 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 02:10:55.108963 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Oct 31 02:10:55.108985 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 02:10:55.109001 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 02:10:55.109016 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 02:10:55.109040 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 02:10:55.109055 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 02:10:55.109070 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 02:10:55.109086 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 02:10:55.109101 systemd[1]: Reached target paths.target - Path Units. Oct 31 02:10:55.109116 systemd[1]: Reached target slices.target - Slice Units. Oct 31 02:10:55.109131 systemd[1]: Reached target swap.target - Swaps. Oct 31 02:10:55.109145 systemd[1]: Reached target timers.target - Timer Units. Oct 31 02:10:55.109180 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 02:10:55.109196 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 02:10:55.109211 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 02:10:55.109226 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 02:10:55.109240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 02:10:55.109255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 02:10:55.109270 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 02:10:55.109285 systemd[1]: Reached target sockets.target - Socket Units. 
Oct 31 02:10:55.109306 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 02:10:55.109322 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 02:10:55.109337 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 02:10:55.109352 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 02:10:55.109366 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 02:10:55.109381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 02:10:55.109396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 02:10:55.109462 systemd-journald[202]: Collecting audit messages is disabled. Oct 31 02:10:55.109503 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 02:10:55.109519 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 02:10:55.109534 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 02:10:55.109555 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 02:10:55.109571 systemd-journald[202]: Journal started Oct 31 02:10:55.109599 systemd-journald[202]: Runtime Journal (/run/log/journal/b28a58b41aa1417791fd3315bc386fa0) is 4.7M, max 38.0M, 33.2M free. Oct 31 02:10:55.089205 systemd-modules-load[203]: Inserted module 'overlay' Oct 31 02:10:55.175366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 02:10:55.175402 kernel: Bridge firewalling registered Oct 31 02:10:55.175423 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 02:10:55.140215 systemd-modules-load[203]: Inserted module 'br_netfilter' Oct 31 02:10:55.178343 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Oct 31 02:10:55.180220 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 02:10:55.185141 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 02:10:55.194402 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 02:10:55.203241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 02:10:55.210404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 02:10:55.223391 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 02:10:55.227285 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 02:10:55.235614 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 02:10:55.237904 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 02:10:55.248383 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 02:10:55.251114 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 02:10:55.257371 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 02:10:55.267188 dracut-cmdline[236]: dracut-dracut-053 Oct 31 02:10:55.271696 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 02:10:55.308205 systemd-resolved[240]: Positive Trust Anchors: Oct 31 02:10:55.308230 systemd-resolved[240]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 02:10:55.308276 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 02:10:55.313866 systemd-resolved[240]: Defaulting to hostname 'linux'. Oct 31 02:10:55.319275 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 02:10:55.320799 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 02:10:55.377210 kernel: SCSI subsystem initialized Oct 31 02:10:55.388213 kernel: Loading iSCSI transport class v2.0-870. Oct 31 02:10:55.402224 kernel: iscsi: registered transport (tcp) Oct 31 02:10:55.430316 kernel: iscsi: registered transport (qla4xxx) Oct 31 02:10:55.430423 kernel: QLogic iSCSI HBA Driver Oct 31 02:10:55.492326 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 02:10:55.499429 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 02:10:55.532829 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 31 02:10:55.532942 kernel: device-mapper: uevent: version 1.0.3 Oct 31 02:10:55.534224 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 31 02:10:55.583212 kernel: raid6: sse2x4 gen() 7907 MB/s Oct 31 02:10:55.601206 kernel: raid6: sse2x2 gen() 5563 MB/s Oct 31 02:10:55.619770 kernel: raid6: sse2x1 gen() 5545 MB/s Oct 31 02:10:55.619866 kernel: raid6: using algorithm sse2x4 gen() 7907 MB/s Oct 31 02:10:55.638872 kernel: raid6: .... xor() 5061 MB/s, rmw enabled Oct 31 02:10:55.638935 kernel: raid6: using ssse3x2 recovery algorithm Oct 31 02:10:55.665210 kernel: xor: automatically using best checksumming function avx Oct 31 02:10:55.863578 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 02:10:55.878610 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 02:10:55.885384 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 02:10:55.907764 systemd-udevd[422]: Using default interface naming scheme 'v255'. Oct 31 02:10:55.916128 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 02:10:55.924383 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 02:10:55.946783 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Oct 31 02:10:55.989148 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 02:10:56.001417 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 02:10:56.126321 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 02:10:56.136402 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 02:10:56.169302 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 02:10:56.173491 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Oct 31 02:10:56.175247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 02:10:56.176941 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 02:10:56.183379 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 02:10:56.215249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 02:10:56.249187 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Oct 31 02:10:56.267481 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 31 02:10:56.274196 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 02:10:56.301192 kernel: AVX version of gcm_enc/dec engaged. Oct 31 02:10:56.301844 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 02:10:56.306949 kernel: AES CTR mode by8 optimization enabled Oct 31 02:10:56.302332 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 02:10:56.304049 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 02:10:56.304778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 02:10:56.308063 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 02:10:56.312714 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 02:10:56.321181 kernel: libata version 3.00 loaded. Oct 31 02:10:56.323513 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 02:10:56.344420 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 02:10:56.344454 kernel: GPT:17805311 != 125829119 Oct 31 02:10:56.344473 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 02:10:56.344490 kernel: GPT:17805311 != 125829119 Oct 31 02:10:56.344508 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 31 02:10:56.344525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 02:10:56.344543 kernel: ACPI: bus type USB registered Oct 31 02:10:56.344561 kernel: usbcore: registered new interface driver usbfs Oct 31 02:10:56.351857 kernel: usbcore: registered new interface driver hub Oct 31 02:10:56.351899 kernel: usbcore: registered new device driver usb Oct 31 02:10:56.370191 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 02:10:56.372228 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 02:10:56.377231 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 02:10:56.377495 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 02:10:56.398197 kernel: scsi host0: ahci Oct 31 02:10:56.406078 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 02:10:56.535980 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Oct 31 02:10:56.536018 kernel: scsi host1: ahci Oct 31 02:10:56.536338 kernel: scsi host2: ahci Oct 31 02:10:56.536559 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 02:10:56.536799 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 31 02:10:56.537047 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 31 02:10:56.537306 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 02:10:56.537530 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 31 02:10:56.537745 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 31 02:10:56.537991 kernel: hub 1-0:1.0: USB hub found Oct 31 02:10:56.538263 kernel: hub 1-0:1.0: 4 ports detected Oct 31 02:10:56.538497 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Oct 31 02:10:56.538798 kernel: hub 2-0:1.0: USB hub found Oct 31 02:10:56.539067 kernel: hub 2-0:1.0: 4 ports detected Oct 31 02:10:56.539337 kernel: scsi host3: ahci Oct 31 02:10:56.539565 kernel: scsi host4: ahci Oct 31 02:10:56.539794 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (467) Oct 31 02:10:56.539830 kernel: scsi host5: ahci Oct 31 02:10:56.540064 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Oct 31 02:10:56.540088 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Oct 31 02:10:56.540107 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Oct 31 02:10:56.540126 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Oct 31 02:10:56.540145 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Oct 31 02:10:56.540188 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Oct 31 02:10:56.541836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 02:10:56.554919 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 02:10:56.567501 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 02:10:56.573439 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 31 02:10:56.574294 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 02:10:56.583383 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 02:10:56.585332 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 02:10:56.597967 disk-uuid[565]: Primary Header is updated. 
Oct 31 02:10:56.597967 disk-uuid[565]: Secondary Entries is updated. Oct 31 02:10:56.597967 disk-uuid[565]: Secondary Header is updated. Oct 31 02:10:56.607182 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 02:10:56.615196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 02:10:56.622779 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 02:10:56.620986 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 02:10:56.677613 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 31 02:10:56.764125 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.764222 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.765966 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.769552 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.769592 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.772434 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 02:10:56.831189 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 31 02:10:56.846811 kernel: usbcore: registered new interface driver usbhid Oct 31 02:10:56.846879 kernel: usbhid: USB HID core driver Oct 31 02:10:56.859470 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 31 02:10:56.859528 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 31 02:10:57.617950 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 02:10:57.619219 disk-uuid[566]: The operation has completed successfully. Oct 31 02:10:57.678620 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 02:10:57.678815 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 02:10:57.696400 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Oct 31 02:10:57.711385 sh[588]: Success Oct 31 02:10:57.730203 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 31 02:10:57.801895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 31 02:10:57.813284 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 31 02:10:57.815241 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 31 02:10:57.845831 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 Oct 31 02:10:57.845906 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 02:10:57.849582 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 31 02:10:57.849626 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 02:10:57.851291 kernel: BTRFS info (device dm-0): using free space tree Oct 31 02:10:57.861959 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 31 02:10:57.863509 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 02:10:57.868349 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 02:10:57.871736 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 02:10:57.887742 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 02:10:57.887819 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 02:10:57.889560 kernel: BTRFS info (device vda6): using free space tree Oct 31 02:10:57.895194 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 02:10:57.909320 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 02:10:57.908938 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Oct 31 02:10:57.918078 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 02:10:57.923372 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 31 02:10:58.168363 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 02:10:58.252907 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 02:10:58.273041 ignition[668]: Ignition 2.19.0 Oct 31 02:10:58.277380 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 02:10:58.273068 ignition[668]: Stage: fetch-offline Oct 31 02:10:58.273156 ignition[668]: no configs at "/usr/lib/ignition/base.d" Oct 31 02:10:58.273488 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:10:58.273734 ignition[668]: parsed url from cmdline: "" Oct 31 02:10:58.273742 ignition[668]: no config URL provided Oct 31 02:10:58.273753 ignition[668]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 02:10:58.273785 ignition[668]: no config at "/usr/lib/ignition/user.ign" Oct 31 02:10:58.273795 ignition[668]: failed to fetch config: resource requires networking Oct 31 02:10:58.274118 ignition[668]: Ignition finished successfully Oct 31 02:10:58.287900 systemd-networkd[773]: lo: Link UP Oct 31 02:10:58.287917 systemd-networkd[773]: lo: Gained carrier Oct 31 02:10:58.290578 systemd-networkd[773]: Enumeration completed Oct 31 02:10:58.290791 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 02:10:58.291984 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 02:10:58.291999 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Oct 31 02:10:58.293404 systemd-networkd[773]: eth0: Link UP Oct 31 02:10:58.293412 systemd-networkd[773]: eth0: Gained carrier Oct 31 02:10:58.293436 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 02:10:58.293561 systemd[1]: Reached target network.target - Network. Oct 31 02:10:58.301519 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 31 02:10:58.326640 ignition[777]: Ignition 2.19.0 Oct 31 02:10:58.326663 ignition[777]: Stage: fetch Oct 31 02:10:58.327055 ignition[777]: no configs at "/usr/lib/ignition/base.d" Oct 31 02:10:58.327079 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:10:58.327272 ignition[777]: parsed url from cmdline: "" Oct 31 02:10:58.327280 ignition[777]: no config URL provided Oct 31 02:10:58.327291 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 02:10:58.327308 ignition[777]: no config at "/usr/lib/ignition/user.ign" Oct 31 02:10:58.327553 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 31 02:10:58.327841 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 31 02:10:58.327881 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Oct 31 02:10:58.328034 ignition[777]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Oct 31 02:10:58.364288 systemd-networkd[773]: eth0: DHCPv4 address 10.230.61.6/30, gateway 10.230.61.5 acquired from 10.230.61.5 Oct 31 02:10:58.529052 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Oct 31 02:10:58.559486 ignition[777]: GET result: OK Oct 31 02:10:58.560255 ignition[777]: parsing config with SHA512: 178fbddd66bde36423c8da1913b0a7ae823c66e7365d59ac367fb5491e8d85c91bc34ba2f2d8677bdd8a85f83c3753ea025e4d37d0c02f0e5436be3d8e228959 Oct 31 02:10:58.567599 unknown[777]: fetched base config from "system" Oct 31 02:10:58.567624 unknown[777]: fetched base config from "system" Oct 31 02:10:58.567635 unknown[777]: fetched user config from "openstack" Oct 31 02:10:58.571461 ignition[777]: fetch: fetch complete Oct 31 02:10:58.571473 ignition[777]: fetch: fetch passed Oct 31 02:10:58.576999 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 31 02:10:58.571566 ignition[777]: Ignition finished successfully Oct 31 02:10:58.589440 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 02:10:58.617442 ignition[784]: Ignition 2.19.0 Oct 31 02:10:58.618637 ignition[784]: Stage: kargs Oct 31 02:10:58.618920 ignition[784]: no configs at "/usr/lib/ignition/base.d" Oct 31 02:10:58.618942 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:10:58.621712 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 02:10:58.620141 ignition[784]: kargs: kargs passed Oct 31 02:10:58.620236 ignition[784]: Ignition finished successfully Oct 31 02:10:58.630377 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 31 02:10:58.650335 ignition[791]: Ignition 2.19.0 Oct 31 02:10:58.650364 ignition[791]: Stage: disks Oct 31 02:10:58.650617 ignition[791]: no configs at "/usr/lib/ignition/base.d" Oct 31 02:10:58.650640 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:10:58.654238 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 02:10:58.651983 ignition[791]: disks: disks passed Oct 31 02:10:58.655896 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 02:10:58.652059 ignition[791]: Ignition finished successfully Oct 31 02:10:58.656925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 02:10:58.658407 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 02:10:58.659894 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 02:10:58.661344 systemd[1]: Reached target basic.target - Basic System. Oct 31 02:10:58.683441 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 02:10:58.706035 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 31 02:10:58.709659 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 02:10:58.716309 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 02:10:58.843202 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none. Oct 31 02:10:58.843675 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 02:10:58.845001 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 02:10:58.853300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 02:10:58.856303 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Oct 31 02:10:58.857974 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 02:10:58.860358 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 31 02:10:58.862262 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 02:10:58.863708 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 02:10:58.871187 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807) Oct 31 02:10:58.877533 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 02:10:58.877574 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 02:10:58.877595 kernel: BTRFS info (device vda6): using free space tree Oct 31 02:10:58.879870 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 02:10:58.888441 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 02:10:58.898192 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 02:10:58.902798 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 02:10:58.999712 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 02:10:59.008425 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Oct 31 02:10:59.022756 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 02:10:59.032990 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 02:10:59.148475 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 02:10:59.155316 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 02:10:59.158296 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Oct 31 02:10:59.171229 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 02:10:59.174607 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 02:10:59.207872 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 02:10:59.212283 ignition[924]: INFO : Ignition 2.19.0 Oct 31 02:10:59.212283 ignition[924]: INFO : Stage: mount Oct 31 02:10:59.213952 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 02:10:59.213952 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:10:59.213952 ignition[924]: INFO : mount: mount passed Oct 31 02:10:59.217388 ignition[924]: INFO : Ignition finished successfully Oct 31 02:10:59.216588 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 02:10:59.873417 systemd-networkd[773]: eth0: Gained IPv6LL Oct 31 02:11:01.383275 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8f41:24:19ff:fee6:3d06/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8f41:24:19ff:fee6:3d06/64 assigned by NDisc. Oct 31 02:11:01.383294 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 31 02:11:06.087679 coreos-metadata[809]: Oct 31 02:11:06.087 WARN failed to locate config-drive, using the metadata service API instead Oct 31 02:11:06.115006 coreos-metadata[809]: Oct 31 02:11:06.114 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 31 02:11:06.135399 coreos-metadata[809]: Oct 31 02:11:06.135 INFO Fetch successful Oct 31 02:11:06.136456 coreos-metadata[809]: Oct 31 02:11:06.135 INFO wrote hostname srv-xg3om.gb1.brightbox.com to /sysroot/etc/hostname Oct 31 02:11:06.139670 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
Oct 31 02:11:06.139952 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 31 02:11:06.149336 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 02:11:06.178621 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 02:11:06.193186 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Oct 31 02:11:06.193292 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 02:11:06.194437 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 02:11:06.196427 kernel: BTRFS info (device vda6): using free space tree Oct 31 02:11:06.202190 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 02:11:06.205754 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 31 02:11:06.245900 ignition[959]: INFO : Ignition 2.19.0 Oct 31 02:11:06.245900 ignition[959]: INFO : Stage: files Oct 31 02:11:06.247933 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 02:11:06.247933 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:11:06.249776 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Oct 31 02:11:06.250793 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 02:11:06.250793 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 02:11:06.254043 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 02:11:06.255254 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 02:11:06.256418 unknown[959]: wrote ssh authorized keys file for user: core Oct 31 02:11:06.257453 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 02:11:06.258473 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 02:11:06.258473 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 31 02:11:06.511713 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 02:11:07.096190 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 02:11:07.112933 ignition[959]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 02:11:07.112933 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Oct 31 02:11:07.495718 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 02:11:10.515861 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Oct 31 02:11:10.515861 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for 
"prepare-helm.service" Oct 31 02:11:10.533608 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 02:11:10.533608 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 02:11:10.533608 ignition[959]: INFO : files: files passed Oct 31 02:11:10.533608 ignition[959]: INFO : Ignition finished successfully Oct 31 02:11:10.539415 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 02:11:10.550598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 02:11:10.562516 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 02:11:10.582513 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 02:11:10.583685 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 02:11:10.597275 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 02:11:10.599300 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 02:11:10.600670 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 02:11:10.601598 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 02:11:10.603512 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 02:11:10.614573 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 02:11:10.652535 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 02:11:10.653257 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 02:11:10.655111 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Oct 31 02:11:10.656260 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 02:11:10.657953 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 02:11:10.667431 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 02:11:10.688574 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 02:11:10.698500 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 02:11:10.715030 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 02:11:10.717285 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 02:11:10.718227 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 02:11:10.718981 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 02:11:10.719200 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 02:11:10.721510 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 02:11:10.722538 systemd[1]: Stopped target basic.target - Basic System. Oct 31 02:11:10.723942 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 02:11:10.725614 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 02:11:10.727154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 02:11:10.728741 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 02:11:10.730178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 02:11:10.731920 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 02:11:10.733534 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 02:11:10.735061 systemd[1]: Stopped target swap.target - Swaps. 
Oct 31 02:11:10.736577 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 02:11:10.736852 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 02:11:10.738745 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 02:11:10.739740 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 02:11:10.741111 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 02:11:10.741390 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 02:11:10.742803 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 02:11:10.743046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 02:11:10.745152 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 02:11:10.745412 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 02:11:10.747344 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 02:11:10.747606 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 02:11:10.756586 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 02:11:10.757419 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 02:11:10.758312 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 02:11:10.769491 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 02:11:10.771242 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 31 02:11:10.771555 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 02:11:10.775474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 02:11:10.775705 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 02:11:10.794495 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Oct 31 02:11:10.797583 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 02:11:10.799437 ignition[1011]: INFO : Ignition 2.19.0 Oct 31 02:11:10.799437 ignition[1011]: INFO : Stage: umount Oct 31 02:11:10.799437 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 02:11:10.799437 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 02:11:10.806324 ignition[1011]: INFO : umount: umount passed Oct 31 02:11:10.806324 ignition[1011]: INFO : Ignition finished successfully Oct 31 02:11:10.800899 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 02:11:10.801036 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 02:11:10.808099 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 02:11:10.808227 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 02:11:10.809906 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 02:11:10.809996 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 02:11:10.811465 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 31 02:11:10.811585 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 31 02:11:10.813555 systemd[1]: Stopped target network.target - Network. Oct 31 02:11:10.814198 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 02:11:10.814315 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 02:11:10.815122 systemd[1]: Stopped target paths.target - Path Units. Oct 31 02:11:10.818270 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 02:11:10.818394 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 02:11:10.819263 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 02:11:10.819965 systemd[1]: Stopped target sockets.target - Socket Units. 
Oct 31 02:11:10.820711 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 02:11:10.820784 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 02:11:10.823288 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 02:11:10.823356 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 02:11:10.824832 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 02:11:10.824922 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 02:11:10.826386 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 02:11:10.826542 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 02:11:10.827818 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 02:11:10.830998 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 02:11:10.836476 systemd-networkd[773]: eth0: DHCPv6 lease lost Oct 31 02:11:10.838394 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 02:11:10.842342 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 02:11:10.842663 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 02:11:10.844802 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 02:11:10.844981 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 31 02:11:10.849637 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 02:11:10.850063 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 02:11:10.855669 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 02:11:10.856219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 02:11:10.858428 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 02:11:10.858705 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Oct 31 02:11:10.866417 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 02:11:10.868402 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 02:11:10.868496 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 02:11:10.871125 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 02:11:10.871243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 02:11:10.872055 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 02:11:10.872191 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 31 02:11:10.874258 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 02:11:10.874341 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 02:11:10.876607 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 02:11:10.892560 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 02:11:10.892775 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 31 02:11:10.895046 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 02:11:10.895480 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 02:11:10.898129 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 02:11:10.898258 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 02:11:10.900039 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 02:11:10.900099 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 02:11:10.901666 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 02:11:10.901743 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 02:11:10.904045 systemd[1]: dracut-cmdline.service: Deactivated successfully. 
Oct 31 02:11:10.904124 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 02:11:10.905658 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 02:11:10.905737 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 02:11:10.928283 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 02:11:10.929274 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 02:11:10.929386 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 02:11:10.930423 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 02:11:10.930588 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 02:11:10.945336 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 02:11:10.945543 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 02:11:10.948076 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 02:11:10.961562 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 02:11:10.973385 systemd[1]: Switching root. Oct 31 02:11:11.012512 systemd-journald[202]: Journal stopped Oct 31 02:11:12.648586 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). 
Oct 31 02:11:12.648726 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 02:11:12.648779 kernel: SELinux: policy capability open_perms=1 Oct 31 02:11:12.648803 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 02:11:12.648822 kernel: SELinux: policy capability always_check_network=0 Oct 31 02:11:12.648841 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 02:11:12.648861 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 02:11:12.648880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 02:11:12.648898 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 02:11:12.648918 kernel: audit: type=1403 audit(1761876671.286:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 02:11:12.648962 systemd[1]: Successfully loaded SELinux policy in 59.014ms. Oct 31 02:11:12.649001 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.609ms. Oct 31 02:11:12.649025 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 02:11:12.649047 systemd[1]: Detected virtualization kvm. Oct 31 02:11:12.649068 systemd[1]: Detected architecture x86-64. Oct 31 02:11:12.649089 systemd[1]: Detected first boot. Oct 31 02:11:12.649110 systemd[1]: Hostname set to . Oct 31 02:11:12.649130 systemd[1]: Initializing machine ID from VM UUID. Oct 31 02:11:12.649182 zram_generator::config[1054]: No configuration found. Oct 31 02:11:12.649209 systemd[1]: Populated /etc with preset unit settings. Oct 31 02:11:12.649231 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 02:11:12.649252 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Oct 31 02:11:12.649273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 02:11:12.649295 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 02:11:12.649328 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 02:11:12.649351 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 02:11:12.649390 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 02:11:12.649425 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 02:11:12.649448 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 31 02:11:12.649471 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 02:11:12.649493 systemd[1]: Created slice user.slice - User and Session Slice. Oct 31 02:11:12.649513 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 02:11:12.649534 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 02:11:12.649555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 02:11:12.649576 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 02:11:12.649613 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 02:11:12.649639 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 02:11:12.649659 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 31 02:11:12.649680 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 02:11:12.649701 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Oct 31 02:11:12.649740 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 02:11:12.649779 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 02:11:12.649804 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 02:11:12.649825 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 02:11:12.649846 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 02:11:12.649867 systemd[1]: Reached target slices.target - Slice Units. Oct 31 02:11:12.649888 systemd[1]: Reached target swap.target - Swaps. Oct 31 02:11:12.649910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 02:11:12.649930 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 02:11:12.649951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 02:11:12.649972 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 02:11:12.650020 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 02:11:12.650073 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 02:11:12.650098 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 02:11:12.650119 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 02:11:12.650141 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 02:11:12.652328 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 02:11:12.652362 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 02:11:12.652403 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 02:11:12.652426 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 31 02:11:12.652449 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 02:11:12.652471 systemd[1]: Reached target machines.target - Containers.
Oct 31 02:11:12.652492 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 02:11:12.652513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 02:11:12.652554 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 02:11:12.652580 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 02:11:12.652601 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 02:11:12.652622 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 02:11:12.652643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 02:11:12.652670 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 02:11:12.652692 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 02:11:12.652713 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 02:11:12.652733 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 02:11:12.652772 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 02:11:12.652796 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 02:11:12.652817 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 02:11:12.652838 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 02:11:12.652859 kernel: fuse: init (API version 7.39)
Oct 31 02:11:12.652879 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 02:11:12.652900 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 02:11:12.652923 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 02:11:12.652944 kernel: loop: module loaded
Oct 31 02:11:12.652982 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 02:11:12.653007 systemd[1]: verity-setup.service: Deactivated successfully.
Oct 31 02:11:12.653028 systemd[1]: Stopped verity-setup.service.
Oct 31 02:11:12.653050 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:12.653071 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 02:11:12.653093 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 02:11:12.653125 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 02:11:12.653150 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 02:11:12.658238 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 02:11:12.658272 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 02:11:12.658318 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 02:11:12.658344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 02:11:12.658365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 02:11:12.658405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 02:11:12.658430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 02:11:12.658452 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 02:11:12.658474 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 02:11:12.658520 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 02:11:12.658544 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 02:11:12.658582 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 02:11:12.658616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 02:11:12.658640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 02:11:12.658678 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 02:11:12.658718 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 02:11:12.658743 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 02:11:12.658807 systemd-journald[1143]: Collecting audit messages is disabled.
Oct 31 02:11:12.658875 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 02:11:12.658902 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 02:11:12.658925 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 02:11:12.658947 systemd-journald[1143]: Journal started
Oct 31 02:11:12.658980 systemd-journald[1143]: Runtime Journal (/run/log/journal/b28a58b41aa1417791fd3315bc386fa0) is 4.7M, max 38.0M, 33.2M free.
Oct 31 02:11:12.153363 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 02:11:12.670415 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 02:11:12.670462 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Oct 31 02:11:12.177172 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 02:11:12.178024 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 02:11:12.684181 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 02:11:12.693237 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 02:11:12.697183 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 02:11:12.722530 kernel: ACPI: bus type drm_connector registered
Oct 31 02:11:12.729963 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 02:11:12.730061 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 02:11:12.741186 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 02:11:12.741285 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 02:11:12.751202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 02:11:12.762291 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 02:11:12.774231 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 02:11:12.773659 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 02:11:12.774845 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 02:11:12.775081 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 02:11:12.776074 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 02:11:12.777100 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 02:11:12.778384 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 02:11:12.847594 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 02:11:12.852457 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 02:11:12.871344 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 02:11:12.873594 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 02:11:12.892198 kernel: loop0: detected capacity change from 0 to 140768
Oct 31 02:11:12.882490 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Oct 31 02:11:12.945481 systemd-journald[1143]: Time spent on flushing to /var/log/journal/b28a58b41aa1417791fd3315bc386fa0 is 52.150ms for 1143 entries.
Oct 31 02:11:12.945481 systemd-journald[1143]: System Journal (/var/log/journal/b28a58b41aa1417791fd3315bc386fa0) is 8.0M, max 584.8M, 576.8M free.
Oct 31 02:11:13.049590 systemd-journald[1143]: Received client request to flush runtime journal.
Oct 31 02:11:13.049723 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 31 02:11:12.975967 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 02:11:12.979225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 02:11:12.985644 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Oct 31 02:11:13.046472 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 02:11:13.053685 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 02:11:13.055103 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 02:11:13.059597 kernel: loop1: detected capacity change from 0 to 8
Oct 31 02:11:13.088401 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 02:11:13.105808 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Oct 31 02:11:13.128601 kernel: loop2: detected capacity change from 0 to 142488
Oct 31 02:11:13.127761 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Oct 31 02:11:13.142298 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Oct 31 02:11:13.142329 systemd-tmpfiles[1204]: ACLs are not supported, ignoring.
Oct 31 02:11:13.154686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 02:11:13.216269 kernel: loop3: detected capacity change from 0 to 229808
Oct 31 02:11:13.300187 kernel: loop4: detected capacity change from 0 to 140768
Oct 31 02:11:13.347193 kernel: loop5: detected capacity change from 0 to 8
Oct 31 02:11:13.381862 kernel: loop6: detected capacity change from 0 to 142488
Oct 31 02:11:13.465190 kernel: loop7: detected capacity change from 0 to 229808
Oct 31 02:11:13.488710 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Oct 31 02:11:13.525733 (sd-merge)[1214]: Merged extensions into '/usr'.
Oct 31 02:11:13.535038 systemd[1]: Reloading requested from client PID 1169 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 02:11:13.535069 systemd[1]: Reloading...
Oct 31 02:11:13.638113 zram_generator::config[1240]: No configuration found.
Oct 31 02:11:14.149829 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 02:11:14.186964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 02:11:14.255757 systemd[1]: Reloading finished in 719 ms.
Oct 31 02:11:14.293682 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 02:11:14.301107 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 02:11:14.312545 systemd[1]: Starting ensure-sysext.service...
Oct 31 02:11:14.315115 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 02:11:14.336309 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Oct 31 02:11:14.336334 systemd[1]: Reloading...
Oct 31 02:11:14.363679 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 02:11:14.364988 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 02:11:14.366912 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 02:11:14.367403 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Oct 31 02:11:14.367537 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Oct 31 02:11:14.374906 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 02:11:14.374932 systemd-tmpfiles[1297]: Skipping /boot
Oct 31 02:11:14.398010 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 02:11:14.398032 systemd-tmpfiles[1297]: Skipping /boot
Oct 31 02:11:14.434265 zram_generator::config[1323]: No configuration found.
Oct 31 02:11:14.617260 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 02:11:14.691758 systemd[1]: Reloading finished in 354 ms.
Oct 31 02:11:14.719015 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 02:11:14.728253 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 02:11:14.756514 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 02:11:14.761422 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 02:11:14.765397 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 02:11:14.772008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 02:11:14.781416 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 02:11:14.790376 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 02:11:14.801809 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.802106 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 02:11:14.806296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 02:11:14.809568 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 02:11:14.818411 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 02:11:14.819463 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 02:11:14.827547 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 02:11:14.830341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.834804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.835151 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 02:11:14.835541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 02:11:14.835757 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.842677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.843001 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 02:11:14.848450 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 02:11:14.851446 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 02:11:14.851535 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 02:11:14.852263 systemd[1]: Finished ensure-sysext.service.
Oct 31 02:11:14.853378 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 02:11:14.866499 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 02:11:14.900032 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 02:11:14.902264 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 02:11:14.909145 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 02:11:14.911266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 02:11:14.913256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 02:11:14.915093 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 02:11:14.929418 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 02:11:14.930788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 02:11:14.931070 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 02:11:14.934680 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 02:11:14.934931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 02:11:14.936918 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Oct 31 02:11:14.938966 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 02:11:14.958583 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 02:11:14.959787 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 02:11:14.976200 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 02:11:14.981676 augenrules[1417]: No rules
Oct 31 02:11:14.985261 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 02:11:14.987535 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 02:11:14.996409 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 02:11:15.021272 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 02:11:15.207197 systemd-resolved[1386]: Positive Trust Anchors:
Oct 31 02:11:15.207780 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 02:11:15.207924 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 02:11:15.215231 systemd-resolved[1386]: Using system hostname 'srv-xg3om.gb1.brightbox.com'.
Oct 31 02:11:15.218348 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 02:11:15.223586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 02:11:15.229251 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 02:11:15.230548 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 02:11:15.231616 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 02:11:15.239376 systemd-networkd[1429]: lo: Link UP
Oct 31 02:11:15.239389 systemd-networkd[1429]: lo: Gained carrier
Oct 31 02:11:15.243298 systemd-timesyncd[1403]: No network connectivity, watching for changes.
Oct 31 02:11:15.243732 systemd-networkd[1429]: Enumeration completed
Oct 31 02:11:15.243845 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 02:11:15.244798 systemd[1]: Reached target network.target - Network.
Oct 31 02:11:15.259458 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 02:11:15.366210 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1428)
Oct 31 02:11:15.457932 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 02:11:15.459298 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 02:11:15.462066 systemd-networkd[1429]: eth0: Link UP
Oct 31 02:11:15.463618 systemd-networkd[1429]: eth0: Gained carrier
Oct 31 02:11:15.463742 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Oct 31 02:11:15.497201 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 02:11:15.501274 systemd-networkd[1429]: eth0: DHCPv4 address 10.230.61.6/30, gateway 10.230.61.5 acquired from 10.230.61.5
Oct 31 02:11:15.503313 systemd-timesyncd[1403]: Network configuration changed, trying to establish connection.
Oct 31 02:11:15.520220 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 31 02:11:15.531192 kernel: ACPI: button: Power Button [PWRF]
Oct 31 02:11:15.594264 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 02:11:15.605565 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 31 02:11:15.605901 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 02:11:15.643403 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Oct 31 02:11:15.649193 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 02:11:15.669757 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 02:11:15.678482 systemd-timesyncd[1403]: Contacted time server 172.237.96.114:123 (2.flatcar.pool.ntp.org).
Oct 31 02:11:15.678783 systemd-timesyncd[1403]: Initial clock synchronization to Fri 2025-10-31 02:11:15.949202 UTC.
Oct 31 02:11:15.718262 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 02:11:15.750674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 02:11:15.896895 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Oct 31 02:11:15.906599 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Oct 31 02:11:15.969112 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 02:11:15.995578 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 02:11:16.035811 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Oct 31 02:11:16.037129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 02:11:16.037991 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 02:11:16.038930 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 02:11:16.039991 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 02:11:16.041189 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 02:11:16.042124 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 02:11:16.042965 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 02:11:16.043804 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 02:11:16.043866 systemd[1]: Reached target paths.target - Path Units.
Oct 31 02:11:16.044615 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 02:11:16.047177 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 02:11:16.050123 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 02:11:16.059556 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 02:11:16.062436 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Oct 31 02:11:16.063891 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 02:11:16.064810 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 02:11:16.065555 systemd[1]: Reached target basic.target - Basic System.
Oct 31 02:11:16.066307 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 02:11:16.066410 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 02:11:16.079464 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 02:11:16.084440 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Oct 31 02:11:16.088510 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Oct 31 02:11:16.089575 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 02:11:16.099411 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 02:11:16.104599 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 02:11:16.105491 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 02:11:16.107294 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 02:11:16.118002 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 02:11:16.124459 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 02:11:16.147477 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 02:11:16.162099 jq[1476]: false
Oct 31 02:11:16.163840 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 02:11:16.171037 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 02:11:16.172455 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 02:11:16.180499 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 02:11:16.190293 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 02:11:16.206680 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Oct 31 02:11:16.213068 dbus-daemon[1475]: [system] SELinux support is enabled
Oct 31 02:11:16.214622 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 02:11:16.225871 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 02:11:16.243276 jq[1486]: true
Oct 31 02:11:16.232524 dbus-daemon[1475]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1429 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Oct 31 02:11:16.226759 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 02:11:16.235961 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 02:11:16.236239 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 02:11:16.279022 update_engine[1485]: I20251031 02:11:16.270896 1485 main.cc:92] Flatcar Update Engine starting
Oct 31 02:11:16.274004 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 02:11:16.274048 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 02:11:16.274972 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 02:11:16.275001 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 31 02:11:16.283864 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1'
Oct 31 02:11:16.285576 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 02:11:16.286779 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 02:11:16.289451 tar[1490]: linux-amd64/LICENSE
Oct 31 02:11:16.289451 tar[1490]: linux-amd64/helm
Oct 31 02:11:16.296390 update_engine[1485]: I20251031 02:11:16.296312 1485 update_check_scheduler.cc:74] Next update check in 6m59s
Oct 31 02:11:16.298028 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 02:11:16.304456 extend-filesystems[1477]: Found loop4
Oct 31 02:11:16.304456 extend-filesystems[1477]: Found loop5
Oct 31 02:11:16.304456 extend-filesystems[1477]: Found loop6
Oct 31 02:11:16.304456 extend-filesystems[1477]: Found loop7
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda1
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda2
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda3
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found usr
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda4
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda6
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda7
Oct 31 02:11:16.331263 extend-filesystems[1477]: Found vda9
Oct 31 02:11:16.331263 extend-filesystems[1477]: Checking size of /dev/vda9
Oct 31 02:11:16.308763 (ntainerd)[1506]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Oct 31 02:11:16.409583 jq[1503]: true
Oct 31 02:11:16.309245 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Oct 31 02:11:16.317223 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 31 02:11:16.346289 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Oct 31 02:11:16.409097 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 31 02:11:16.409165 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 02:11:16.409867 systemd-logind[1484]: New seat seat0.
Oct 31 02:11:16.413628 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 31 02:11:16.442273 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Oct 31 02:11:16.442355 extend-filesystems[1477]: Resized partition /dev/vda9
Oct 31 02:11:16.443918 extend-filesystems[1519]: resize2fs 1.47.1 (20-May-2024)
Oct 31 02:11:16.613023 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1439)
Oct 31 02:11:16.659221 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 02:11:16.711904 bash[1533]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 02:11:16.717951 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 31 02:11:16.738595 systemd[1]: Starting sshkeys.service...
Oct 31 02:11:16.781466 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1'
Oct 31 02:11:16.787625 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Oct 31 02:11:16.791056 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1510 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Oct 31 02:11:16.805781 systemd[1]: Starting polkit.service - Authorization Manager...
Oct 31 02:11:16.854628 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Oct 31 02:11:16.897774 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Oct 31 02:11:16.956991 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 31 02:11:16.967495 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 31 02:11:16.979791 systemd[1]: Started sshd@0-10.230.61.6:22-147.75.109.163:40216.service - OpenSSH per-connection server daemon (147.75.109.163:40216).
Oct 31 02:11:16.986253 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Oct 31 02:11:16.995358 polkitd[1545]: Started polkitd version 121
Oct 31 02:11:17.004210 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 02:11:17.004969 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 31 02:11:17.017953 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 31 02:11:17.047519 extend-filesystems[1519]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 02:11:17.047519 extend-filesystems[1519]: old_desc_blocks = 1, new_desc_blocks = 8
Oct 31 02:11:17.047519 extend-filesystems[1519]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Oct 31 02:11:17.055069 extend-filesystems[1477]: Resized filesystem in /dev/vda9
Oct 31 02:11:17.056804 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 02:11:17.057802 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 02:11:17.060052 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 02:11:17.072037 polkitd[1545]: Loading rules from directory /etc/polkit-1/rules.d
Oct 31 02:11:17.077172 polkitd[1545]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 31 02:11:17.082938 polkitd[1545]: Finished loading, compiling and executing 2 rules
Oct 31 02:11:17.083462 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 31 02:11:17.084091 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Oct 31 02:11:17.087848 systemd[1]: Started polkit.service - Authorization Manager.
Oct 31 02:11:17.088431 polkitd[1545]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 31 02:11:17.109182 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 31 02:11:17.123847 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 31 02:11:17.136782 systemd[1]: Reached target getty.target - Login Prompts.
Oct 31 02:11:17.147233 systemd-hostnamed[1510]: Hostname set to (static)
Oct 31 02:11:17.297288 systemd-networkd[1429]: eth0: Gained IPv6LL
Oct 31 02:11:17.315061 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 31 02:11:17.317448 systemd[1]: Reached target network-online.target - Network is Online.
Oct 31 02:11:17.329686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:11:17.340690 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 31 02:11:17.349937 containerd[1506]: time="2025-10-31T02:11:17.349715749Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Oct 31 02:11:17.417957 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 31 02:11:17.432463 containerd[1506]: time="2025-10-31T02:11:17.431996205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.435287 containerd[1506]: time="2025-10-31T02:11:17.435236512Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Oct 31 02:11:17.435418 containerd[1506]: time="2025-10-31T02:11:17.435389762Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Oct 31 02:11:17.436155 containerd[1506]: time="2025-10-31T02:11:17.435561631Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Oct 31 02:11:17.436155 containerd[1506]: time="2025-10-31T02:11:17.435853778Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Oct 31 02:11:17.436155 containerd[1506]: time="2025-10-31T02:11:17.435884157Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436155 containerd[1506]: time="2025-10-31T02:11:17.436009898Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436155 containerd[1506]: time="2025-10-31T02:11:17.436034158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436623 containerd[1506]: time="2025-10-31T02:11:17.436590762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436734 containerd[1506]: time="2025-10-31T02:11:17.436709109Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436856 containerd[1506]: time="2025-10-31T02:11:17.436827909Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 02:11:17.436959 containerd[1506]: time="2025-10-31T02:11:17.436935815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.437813 containerd[1506]: time="2025-10-31T02:11:17.437256397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.437813 containerd[1506]: time="2025-10-31T02:11:17.437744759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Oct 31 02:11:17.438109 containerd[1506]: time="2025-10-31T02:11:17.438077908Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Oct 31 02:11:17.438264 containerd[1506]: time="2025-10-31T02:11:17.438182116Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Oct 31 02:11:17.438501 containerd[1506]: time="2025-10-31T02:11:17.438473810Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Oct 31 02:11:17.438716 containerd[1506]: time="2025-10-31T02:11:17.438684727Z" level=info msg="metadata content store policy set" policy=shared
Oct 31 02:11:17.466628 containerd[1506]: time="2025-10-31T02:11:17.466561424Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Oct 31 02:11:17.467495 containerd[1506]: time="2025-10-31T02:11:17.466913246Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Oct 31 02:11:17.467495 containerd[1506]: time="2025-10-31T02:11:17.467011576Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Oct 31 02:11:17.467495 containerd[1506]: time="2025-10-31T02:11:17.467044215Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Oct 31 02:11:17.467495 containerd[1506]: time="2025-10-31T02:11:17.467072551Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Oct 31 02:11:17.467495 containerd[1506]: time="2025-10-31T02:11:17.467396923Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Oct 31 02:11:17.468536 containerd[1506]: time="2025-10-31T02:11:17.468506187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469602331Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469687283Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469729976Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469759890Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469785388Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469807420Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469839086Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469876309Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469903244Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469933704Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.469963068Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.470015948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.470105774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470259 containerd[1506]: time="2025-10-31T02:11:17.470128175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470161506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470222454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470295395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470321271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470378095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470404704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470429104Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470465883Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470501813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470526759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470559094Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470625052Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470652315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.470933 containerd[1506]: time="2025-10-31T02:11:17.470697020Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471052122Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471214511Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471248939Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471284798Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471305341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471336689Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471392469Z" level=info msg="NRI interface is disabled by configuration."
Oct 31 02:11:17.471514 containerd[1506]: time="2025-10-31T02:11:17.471438167Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Oct 31 02:11:17.474940 containerd[1506]: time="2025-10-31T02:11:17.472318626Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Oct 31 02:11:17.474940 containerd[1506]: time="2025-10-31T02:11:17.472444378Z" level=info msg="Connect containerd service"
Oct 31 02:11:17.474940 containerd[1506]: time="2025-10-31T02:11:17.472679806Z" level=info msg="using legacy CRI server"
Oct 31 02:11:17.474940 containerd[1506]: time="2025-10-31T02:11:17.472701349Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Oct 31 02:11:17.474940 containerd[1506]: time="2025-10-31T02:11:17.472961186Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Oct 31 02:11:17.478973 containerd[1506]: time="2025-10-31T02:11:17.478923309Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Oct 31 02:11:17.479306 containerd[1506]: time="2025-10-31T02:11:17.479228844Z" level=info msg="Start subscribing containerd event"
Oct 31 02:11:17.479409 containerd[1506]: time="2025-10-31T02:11:17.479347785Z" level=info msg="Start recovering state"
Oct 31 02:11:17.479484 containerd[1506]: time="2025-10-31T02:11:17.479459401Z" level=info msg="Start event monitor"
Oct 31 02:11:17.479530 containerd[1506]: time="2025-10-31T02:11:17.479496101Z" level=info msg="Start snapshots syncer"
Oct 31 02:11:17.479581 containerd[1506]: time="2025-10-31T02:11:17.479527148Z" level=info msg="Start cni network conf syncer for default"
Oct 31 02:11:17.479581 containerd[1506]: time="2025-10-31T02:11:17.479548913Z" level=info msg="Start streaming server"
Oct 31 02:11:17.483135 containerd[1506]: time="2025-10-31T02:11:17.481796302Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Oct 31 02:11:17.483135 containerd[1506]: time="2025-10-31T02:11:17.481911479Z" level=info msg=serving... address=/run/containerd/containerd.sock
Oct 31 02:11:17.482195 systemd[1]: Started containerd.service - containerd container runtime.
Oct 31 02:11:17.483396 containerd[1506]: time="2025-10-31T02:11:17.483155900Z" level=info msg="containerd successfully booted in 0.135456s"
Oct 31 02:11:18.038420 tar[1490]: linux-amd64/README.md
Oct 31 02:11:18.060968 sshd[1554]: Accepted publickey for core from 147.75.109.163 port 40216 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:18.069253 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Oct 31 02:11:18.073113 sshd[1554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:18.102304 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Oct 31 02:11:18.112740 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Oct 31 02:11:18.123295 systemd-logind[1484]: New session 1 of user core.
Oct 31 02:11:18.144316 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Oct 31 02:11:18.156762 systemd[1]: Starting user@500.service - User Manager for UID 500...
Oct 31 02:11:18.166408 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Oct 31 02:11:18.322069 systemd[1600]: Queued start job for default target default.target.
Oct 31 02:11:18.331342 systemd[1600]: Created slice app.slice - User Application Slice.
Oct 31 02:11:18.331631 systemd[1600]: Reached target paths.target - Paths.
Oct 31 02:11:18.331807 systemd[1600]: Reached target timers.target - Timers.
Oct 31 02:11:18.334887 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket...
Oct 31 02:11:18.373057 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Oct 31 02:11:18.374230 systemd[1600]: Reached target sockets.target - Sockets.
Oct 31 02:11:18.374264 systemd[1600]: Reached target basic.target - Basic System.
Oct 31 02:11:18.374335 systemd[1600]: Reached target default.target - Main User Target.
Oct 31 02:11:18.374405 systemd[1600]: Startup finished in 194ms.
Oct 31 02:11:18.375028 systemd[1]: Started user@500.service - User Manager for UID 500.
Oct 31 02:11:18.384768 systemd[1]: Started session-1.scope - Session 1 of User core.
Oct 31 02:11:18.531457 systemd-networkd[1429]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8f41:24:19ff:fee6:3d06/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8f41:24:19ff:fee6:3d06/64 assigned by NDisc.
Oct 31 02:11:18.531472 systemd-networkd[1429]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Oct 31 02:11:19.052791 systemd[1]: Started sshd@1-10.230.61.6:22-147.75.109.163:40224.service - OpenSSH per-connection server daemon (147.75.109.163:40224).
Oct 31 02:11:19.068467 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:11:19.080954 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:11:19.942081 kubelet[1617]: E1031 02:11:19.941965 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:11:19.945887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:11:19.946283 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:11:19.947117 systemd[1]: kubelet.service: Consumed 1.761s CPU time.
Oct 31 02:11:19.990428 sshd[1616]: Accepted publickey for core from 147.75.109.163 port 40224 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:19.992851 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:20.000770 systemd-logind[1484]: New session 2 of user core.
Oct 31 02:11:20.012878 systemd[1]: Started session-2.scope - Session 2 of User core.
Oct 31 02:11:20.637747 sshd[1616]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:20.643272 systemd[1]: sshd@1-10.230.61.6:22-147.75.109.163:40224.service: Deactivated successfully.
Oct 31 02:11:20.645967 systemd[1]: session-2.scope: Deactivated successfully.
Oct 31 02:11:20.647160 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit.
Oct 31 02:11:20.649210 systemd-logind[1484]: Removed session 2.
Oct 31 02:11:20.798789 systemd[1]: Started sshd@2-10.230.61.6:22-147.75.109.163:33378.service - OpenSSH per-connection server daemon (147.75.109.163:33378).
Oct 31 02:11:21.711886 sshd[1633]: Accepted publickey for core from 147.75.109.163 port 33378 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:21.714438 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:21.723015 systemd-logind[1484]: New session 3 of user core.
Oct 31 02:11:21.734586 systemd[1]: Started session-3.scope - Session 3 of User core.
Oct 31 02:11:22.192163 login[1575]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 31 02:11:22.202351 login[1576]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Oct 31 02:11:22.205536 systemd-logind[1484]: New session 4 of user core.
Oct 31 02:11:22.210517 systemd[1]: Started session-4.scope - Session 4 of User core.
Oct 31 02:11:22.222672 systemd-logind[1484]: New session 5 of user core.
Oct 31 02:11:22.234798 systemd[1]: Started session-5.scope - Session 5 of User core.
Oct 31 02:11:22.363582 sshd[1633]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:22.369839 systemd[1]: sshd@2-10.230.61.6:22-147.75.109.163:33378.service: Deactivated successfully.
Oct 31 02:11:22.372548 systemd[1]: session-3.scope: Deactivated successfully.
Oct 31 02:11:22.373878 systemd-logind[1484]: Session 3 logged out. Waiting for processes to exit.
Oct 31 02:11:22.375519 systemd-logind[1484]: Removed session 3.
Oct 31 02:11:23.367944 coreos-metadata[1474]: Oct 31 02:11:23.367 WARN failed to locate config-drive, using the metadata service API instead
Oct 31 02:11:23.398852 coreos-metadata[1474]: Oct 31 02:11:23.398 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1
Oct 31 02:11:23.410372 coreos-metadata[1474]: Oct 31 02:11:23.410 INFO Fetch failed with 404: resource not found
Oct 31 02:11:23.410372 coreos-metadata[1474]: Oct 31 02:11:23.410 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Oct 31 02:11:23.410813 coreos-metadata[1474]: Oct 31 02:11:23.410 INFO Fetch successful
Oct 31 02:11:23.410940 coreos-metadata[1474]: Oct 31 02:11:23.410 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1
Oct 31 02:11:23.426877 coreos-metadata[1474]: Oct 31 02:11:23.426 INFO Fetch successful
Oct 31 02:11:23.426877 coreos-metadata[1474]: Oct 31 02:11:23.426 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1
Oct 31 02:11:23.472810 coreos-metadata[1474]: Oct 31 02:11:23.472 INFO Fetch successful
Oct 31 02:11:23.472810 coreos-metadata[1474]: Oct 31 02:11:23.472 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1
Oct 31 02:11:23.504637 coreos-metadata[1474]: Oct 31 02:11:23.504 INFO Fetch successful
Oct 31 02:11:23.504637 coreos-metadata[1474]: Oct 31 02:11:23.504 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1
Oct 31 02:11:23.530921 coreos-metadata[1474]: Oct 31 02:11:23.530 INFO Fetch successful
Oct 31 02:11:23.559285 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Oct 31 02:11:23.560589 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 31 02:11:24.125894 coreos-metadata[1548]: Oct 31 02:11:24.125 WARN failed to locate config-drive, using the metadata service API instead
Oct 31 02:11:24.149629 coreos-metadata[1548]: Oct 31 02:11:24.149 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1
Oct 31 02:11:24.178982 coreos-metadata[1548]: Oct 31 02:11:24.178 INFO Fetch successful
Oct 31 02:11:24.179140 coreos-metadata[1548]: Oct 31 02:11:24.179 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1
Oct 31 02:11:24.261564 coreos-metadata[1548]: Oct 31 02:11:24.261 INFO Fetch successful
Oct 31 02:11:24.275400 unknown[1548]: wrote ssh authorized keys file for user: core
Oct 31 02:11:24.307113 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 02:11:24.308702 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Oct 31 02:11:24.311967 systemd[1]: Finished sshkeys.service.
Oct 31 02:11:24.316703 systemd[1]: Reached target multi-user.target - Multi-User System.
Oct 31 02:11:24.318502 systemd[1]: Startup finished in 1.705s (kernel) + 16.516s (initrd) + 13.089s (userspace) = 31.311s.
Oct 31 02:11:30.019783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Oct 31 02:11:30.032466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:11:30.250095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:11:30.256745 (kubelet)[1685]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:11:30.346074 kubelet[1685]: E1031 02:11:30.345844 1685 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:11:30.351704 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:11:30.352226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:11:32.607889 systemd[1]: Started sshd@3-10.230.61.6:22-147.75.109.163:41012.service - OpenSSH per-connection server daemon (147.75.109.163:41012).
Oct 31 02:11:33.545900 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 41012 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:33.548250 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:33.556261 systemd-logind[1484]: New session 6 of user core.
Oct 31 02:11:33.562368 systemd[1]: Started session-6.scope - Session 6 of User core.
Oct 31 02:11:34.188508 sshd[1694]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:34.192316 systemd[1]: sshd@3-10.230.61.6:22-147.75.109.163:41012.service: Deactivated successfully.
Oct 31 02:11:34.194539 systemd[1]: session-6.scope: Deactivated successfully.
Oct 31 02:11:34.196776 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit.
Oct 31 02:11:34.198379 systemd-logind[1484]: Removed session 6.
Oct 31 02:11:34.357547 systemd[1]: Started sshd@4-10.230.61.6:22-147.75.109.163:41024.service - OpenSSH per-connection server daemon (147.75.109.163:41024).
Oct 31 02:11:35.278047 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 41024 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:35.280187 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:35.286483 systemd-logind[1484]: New session 7 of user core.
Oct 31 02:11:35.295612 systemd[1]: Started session-7.scope - Session 7 of User core.
Oct 31 02:11:35.911733 sshd[1701]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:35.916141 systemd[1]: sshd@4-10.230.61.6:22-147.75.109.163:41024.service: Deactivated successfully.
Oct 31 02:11:35.918591 systemd[1]: session-7.scope: Deactivated successfully.
Oct 31 02:11:35.920851 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit.
Oct 31 02:11:35.922367 systemd-logind[1484]: Removed session 7.
Oct 31 02:11:36.078597 systemd[1]: Started sshd@5-10.230.61.6:22-147.75.109.163:41036.service - OpenSSH per-connection server daemon (147.75.109.163:41036).
Oct 31 02:11:36.980685 sshd[1708]: Accepted publickey for core from 147.75.109.163 port 41036 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:36.982916 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:36.991996 systemd-logind[1484]: New session 8 of user core.
Oct 31 02:11:37.001387 systemd[1]: Started session-8.scope - Session 8 of User core.
Oct 31 02:11:37.610372 sshd[1708]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:37.614621 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit.
Oct 31 02:11:37.615937 systemd[1]: sshd@5-10.230.61.6:22-147.75.109.163:41036.service: Deactivated successfully.
Oct 31 02:11:37.618127 systemd[1]: session-8.scope: Deactivated successfully.
Oct 31 02:11:37.620139 systemd-logind[1484]: Removed session 8.
Oct 31 02:11:37.771558 systemd[1]: Started sshd@6-10.230.61.6:22-147.75.109.163:41052.service - OpenSSH per-connection server daemon (147.75.109.163:41052).
Oct 31 02:11:38.665501 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 41052 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:38.667595 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:38.676007 systemd-logind[1484]: New session 9 of user core.
Oct 31 02:11:38.683402 systemd[1]: Started session-9.scope - Session 9 of User core.
Oct 31 02:11:39.163060 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Oct 31 02:11:39.163604 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 02:11:39.183564 sudo[1718]: pam_unix(sudo:session): session closed for user root
Oct 31 02:11:39.330584 sshd[1715]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:39.335154 systemd[1]: sshd@6-10.230.61.6:22-147.75.109.163:41052.service: Deactivated successfully.
Oct 31 02:11:39.337421 systemd[1]: session-9.scope: Deactivated successfully.
Oct 31 02:11:39.339150 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit.
Oct 31 02:11:39.341053 systemd-logind[1484]: Removed session 9.
Oct 31 02:11:39.491512 systemd[1]: Started sshd@7-10.230.61.6:22-147.75.109.163:41064.service - OpenSSH per-connection server daemon (147.75.109.163:41064).
Oct 31 02:11:40.392409 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 41064 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:40.394849 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:40.396483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Oct 31 02:11:40.405418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:11:40.409662 systemd-logind[1484]: New session 10 of user core.
Oct 31 02:11:40.416401 systemd[1]: Started session-10.scope - Session 10 of User core.
Oct 31 02:11:40.749713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:11:40.762949 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:11:40.845325 kubelet[1734]: E1031 02:11:40.845093 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:11:40.849851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:11:40.850159 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:11:40.882558 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Oct 31 02:11:40.883109 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 02:11:40.889972 sudo[1742]: pam_unix(sudo:session): session closed for user root
Oct 31 02:11:40.899029 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Oct 31 02:11:40.900080 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 02:11:40.919922 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Oct 31 02:11:40.931456 auditctl[1745]: No rules
Oct 31 02:11:40.932228 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 02:11:40.932624 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Oct 31 02:11:40.942810 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Oct 31 02:11:40.984782 augenrules[1763]: No rules
Oct 31 02:11:40.987051 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Oct 31 02:11:40.988913 sudo[1741]: pam_unix(sudo:session): session closed for user root
Oct 31 02:11:41.136324 sshd[1723]: pam_unix(sshd:session): session closed for user core
Oct 31 02:11:41.141962 systemd[1]: sshd@7-10.230.61.6:22-147.75.109.163:41064.service: Deactivated successfully.
Oct 31 02:11:41.144926 systemd[1]: session-10.scope: Deactivated successfully.
Oct 31 02:11:41.147405 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit.
Oct 31 02:11:41.149407 systemd-logind[1484]: Removed session 10.
Oct 31 02:11:41.308713 systemd[1]: Started sshd@8-10.230.61.6:22-147.75.109.163:42496.service - OpenSSH per-connection server daemon (147.75.109.163:42496).
Oct 31 02:11:42.210446 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 42496 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU
Oct 31 02:11:42.212810 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 02:11:42.219961 systemd-logind[1484]: New session 11 of user core.
Oct 31 02:11:42.230803 systemd[1]: Started session-11.scope - Session 11 of User core.
Oct 31 02:11:42.697139 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Oct 31 02:11:42.698534 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Oct 31 02:11:43.659587 systemd[1]: Starting docker.service - Docker Application Container Engine...
Oct 31 02:11:43.678019 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Oct 31 02:11:44.294210 dockerd[1790]: time="2025-10-31T02:11:44.292696536Z" level=info msg="Starting up"
Oct 31 02:11:44.490821 dockerd[1790]: time="2025-10-31T02:11:44.490522208Z" level=info msg="Loading containers: start."
Oct 31 02:11:44.649493 kernel: Initializing XFRM netlink socket
Oct 31 02:11:44.765973 systemd-networkd[1429]: docker0: Link UP
Oct 31 02:11:44.783196 dockerd[1790]: time="2025-10-31T02:11:44.783025774Z" level=info msg="Loading containers: done."
Oct 31 02:11:44.817287 dockerd[1790]: time="2025-10-31T02:11:44.814962120Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Oct 31 02:11:44.817287 dockerd[1790]: time="2025-10-31T02:11:44.815100241Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Oct 31 02:11:44.817287 dockerd[1790]: time="2025-10-31T02:11:44.815273287Z" level=info msg="Daemon has completed initialization"
Oct 31 02:11:44.816812 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck701482776-merged.mount: Deactivated successfully.
Oct 31 02:11:44.867277 dockerd[1790]: time="2025-10-31T02:11:44.866795907Z" level=info msg="API listen on /run/docker.sock"
Oct 31 02:11:44.868905 systemd[1]: Started docker.service - Docker Application Container Engine.
Oct 31 02:11:46.532193 containerd[1506]: time="2025-10-31T02:11:46.531963240Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Oct 31 02:11:47.633418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3352555676.mount: Deactivated successfully.
Oct 31 02:11:48.596292 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 31 02:11:51.020085 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Oct 31 02:11:51.031475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:11:51.386442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:11:51.398754 (kubelet)[1998]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:11:51.483562 kubelet[1998]: E1031 02:11:51.483452 1998 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:11:51.487027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:11:51.487508 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:11:59.551305 containerd[1506]: time="2025-10-31T02:11:59.550051569Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:11:59.554663 containerd[1506]: time="2025-10-31T02:11:59.552258527Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901"
Oct 31 02:11:59.555201 containerd[1506]: time="2025-10-31T02:11:59.554933001Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:11:59.560375 containerd[1506]: time="2025-10-31T02:11:59.560333799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:11:59.562450 containerd[1506]: time="2025-10-31T02:11:59.562403679Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 13.030151648s"
Oct 31 02:11:59.562677 containerd[1506]: time="2025-10-31T02:11:59.562646543Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\""
Oct 31 02:11:59.567481 containerd[1506]: time="2025-10-31T02:11:59.567446603Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Oct 31 02:12:01.521283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Oct 31 02:12:01.538667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:01.888477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:01.891733 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:01.973641 kubelet[2016]: E1031 02:12:01.973558 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:01.976747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:01.977117 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:02.022181 update_engine[1485]: I20251031 02:12:02.021888 1485 update_attempter.cc:509] Updating boot flags...
Oct 31 02:12:02.082328 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2030)
Oct 31 02:12:02.168209 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2029)
Oct 31 02:12:11.315413 containerd[1506]: time="2025-10-31T02:12:11.315138076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:11.318352 containerd[1506]: time="2025-10-31T02:12:11.318220104Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852"
Oct 31 02:12:11.319858 containerd[1506]: time="2025-10-31T02:12:11.319812644Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:11.324008 containerd[1506]: time="2025-10-31T02:12:11.323921020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:11.326987 containerd[1506]: time="2025-10-31T02:12:11.325881882Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 11.758385131s"
Oct 31 02:12:11.326987 containerd[1506]: time="2025-10-31T02:12:11.326011042Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\""
Oct 31 02:12:11.328444 containerd[1506]: time="2025-10-31T02:12:11.328411103Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Oct 31 02:12:12.020418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Oct 31 02:12:12.034707 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:12.214740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:12.227729 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:12.301669 kubelet[2049]: E1031 02:12:12.301500 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:12.305734 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:12.306012 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:22.522375 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Oct 31 02:12:22.535552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:22.903449 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:22.916761 (kubelet)[2068]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:23.030426 kubelet[2068]: E1031 02:12:23.030337 2068 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:23.034560 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:23.035481 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:23.290184 containerd[1506]: time="2025-10-31T02:12:23.288555729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:23.293328 containerd[1506]: time="2025-10-31T02:12:23.293241696Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576"
Oct 31 02:12:23.295206 containerd[1506]: time="2025-10-31T02:12:23.293520686Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:23.300514 containerd[1506]: time="2025-10-31T02:12:23.300478612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:23.303891 containerd[1506]: time="2025-10-31T02:12:23.303798535Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 11.975180975s"
Oct 31 02:12:23.304017 containerd[1506]: time="2025-10-31T02:12:23.303947900Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\""
Oct 31 02:12:23.306860 containerd[1506]: time="2025-10-31T02:12:23.306798954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Oct 31 02:12:26.629803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount658284736.mount: Deactivated successfully.
Oct 31 02:12:27.803705 containerd[1506]: time="2025-10-31T02:12:27.803622107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:27.805196 containerd[1506]: time="2025-10-31T02:12:27.804890078Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477"
Oct 31 02:12:27.806506 containerd[1506]: time="2025-10-31T02:12:27.806016298Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:27.809071 containerd[1506]: time="2025-10-31T02:12:27.809026058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:27.810312 containerd[1506]: time="2025-10-31T02:12:27.810246955Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 4.503012937s"
Oct 31 02:12:27.810401 containerd[1506]: time="2025-10-31T02:12:27.810322867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\""
Oct 31 02:12:27.811229 containerd[1506]: time="2025-10-31T02:12:27.811189599Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Oct 31 02:12:28.802506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092794463.mount: Deactivated successfully.
Oct 31 02:12:33.270758 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Oct 31 02:12:33.283794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:33.477414 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:33.498761 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:33.578571 kubelet[2138]: E1031 02:12:33.577873 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:33.583029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:33.583413 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:37.683989 containerd[1506]: time="2025-10-31T02:12:37.683796778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:37.695364 containerd[1506]: time="2025-10-31T02:12:37.695255270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Oct 31 02:12:37.705357 containerd[1506]: time="2025-10-31T02:12:37.705312797Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:37.730692 containerd[1506]: time="2025-10-31T02:12:37.730589163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:37.733062 containerd[1506]: time="2025-10-31T02:12:37.732367607Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 9.921113752s"
Oct 31 02:12:37.733062 containerd[1506]: time="2025-10-31T02:12:37.732460908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Oct 31 02:12:37.736377 containerd[1506]: time="2025-10-31T02:12:37.736338256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Oct 31 02:12:38.671005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount817719902.mount: Deactivated successfully.
Oct 31 02:12:38.687247 containerd[1506]: time="2025-10-31T02:12:38.687097454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:38.692392 containerd[1506]: time="2025-10-31T02:12:38.692312629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Oct 31 02:12:38.700053 containerd[1506]: time="2025-10-31T02:12:38.699912857Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:38.704768 containerd[1506]: time="2025-10-31T02:12:38.704503189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:38.705648 containerd[1506]: time="2025-10-31T02:12:38.705305669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 968.922181ms"
Oct 31 02:12:38.705648 containerd[1506]: time="2025-10-31T02:12:38.705370959Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Oct 31 02:12:38.706720 containerd[1506]: time="2025-10-31T02:12:38.706688378Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Oct 31 02:12:39.638870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293606635.mount: Deactivated successfully.
Oct 31 02:12:43.770541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Oct 31 02:12:43.781634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:44.225065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:44.242001 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:44.353304 kubelet[2210]: E1031 02:12:44.353210 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:44.356949 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:44.357414 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:51.550030 containerd[1506]: time="2025-10-31T02:12:51.549773970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:51.553766 containerd[1506]: time="2025-10-31T02:12:51.553510781Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441"
Oct 31 02:12:51.555793 containerd[1506]: time="2025-10-31T02:12:51.555422976Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:51.561516 containerd[1506]: time="2025-10-31T02:12:51.561469755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:12:51.563790 containerd[1506]: time="2025-10-31T02:12:51.563737848Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 12.856999184s"
Oct 31 02:12:51.564561 containerd[1506]: time="2025-10-31T02:12:51.563964876Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Oct 31 02:12:54.520867 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Oct 31 02:12:54.535315 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:54.928473 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:54.941963 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Oct 31 02:12:55.240929 kubelet[2253]: E1031 02:12:55.238866 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Oct 31 02:12:55.241500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 02:12:55.241783 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 31 02:12:58.897550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:58.916453 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:58.948652 systemd[1]: Reloading requested from client PID 2267 ('systemctl') (unit session-11.scope)...
Oct 31 02:12:58.948707 systemd[1]: Reloading...
Oct 31 02:12:59.109198 zram_generator::config[2302]: No configuration found.
Oct 31 02:12:59.297492 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 02:12:59.411589 systemd[1]: Reloading finished in 462 ms.
Oct 31 02:12:59.487222 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Oct 31 02:12:59.487356 systemd[1]: kubelet.service: Failed with result 'signal'.
Oct 31 02:12:59.487732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:59.490299 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:12:59.653185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:12:59.669880 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 02:12:59.789816 kubelet[2373]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 02:12:59.789816 kubelet[2373]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 02:12:59.789816 kubelet[2373]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 02:12:59.793218 kubelet[2373]: I1031 02:12:59.792323 2373 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 02:13:00.754395 kubelet[2373]: I1031 02:13:00.754278 2373 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Oct 31 02:13:00.755845 kubelet[2373]: I1031 02:13:00.755821 2373 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 02:13:00.758185 kubelet[2373]: I1031 02:13:00.756901 2373 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 02:13:00.802649 kubelet[2373]: I1031 02:13:00.802552 2373 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 02:13:00.805726 kubelet[2373]: E1031 02:13:00.805493 2373 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.61.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 02:13:00.831818 kubelet[2373]: E1031 02:13:00.831755 2373 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 02:13:00.832072 kubelet[2373]: I1031 02:13:00.832042 2373 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 31 02:13:00.846775 kubelet[2373]: I1031 02:13:00.846740 2373 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 31 02:13:00.852216 kubelet[2373]: I1031 02:13:00.852138 2373 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 02:13:00.855517 kubelet[2373]: I1031 02:13:00.852324 2373 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xg3om.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 02:13:00.856408 kubelet[2373]: I1031 02:13:00.855919 2373 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 
02:13:00.856408 kubelet[2373]: I1031 02:13:00.855962 2373 container_manager_linux.go:303] "Creating device plugin manager" Oct 31 02:13:00.856408 kubelet[2373]: I1031 02:13:00.856259 2373 state_mem.go:36] "Initialized new in-memory state store" Oct 31 02:13:00.862459 kubelet[2373]: I1031 02:13:00.862398 2373 kubelet.go:480] "Attempting to sync node with API server" Oct 31 02:13:00.862459 kubelet[2373]: I1031 02:13:00.862455 2373 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 02:13:00.862642 kubelet[2373]: I1031 02:13:00.862529 2373 kubelet.go:386] "Adding apiserver pod source" Oct 31 02:13:00.865678 kubelet[2373]: I1031 02:13:00.865013 2373 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 02:13:00.871821 kubelet[2373]: E1031 02:13:00.871777 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.61.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xg3om.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 02:13:00.872124 kubelet[2373]: I1031 02:13:00.872093 2373 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 02:13:00.874255 kubelet[2373]: I1031 02:13:00.874227 2373 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 02:13:00.875404 kubelet[2373]: W1031 02:13:00.875366 2373 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 31 02:13:00.884635 kubelet[2373]: E1031 02:13:00.884599 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.61.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 02:13:00.887640 kubelet[2373]: I1031 02:13:00.886705 2373 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 31 02:13:00.887832 kubelet[2373]: I1031 02:13:00.887811 2373 server.go:1289] "Started kubelet" Oct 31 02:13:00.889428 kubelet[2373]: I1031 02:13:00.889277 2373 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 02:13:00.895189 kubelet[2373]: I1031 02:13:00.894633 2373 server.go:317] "Adding debug handlers to kubelet server" Oct 31 02:13:00.896582 kubelet[2373]: I1031 02:13:00.896507 2373 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 02:13:00.897363 kubelet[2373]: I1031 02:13:00.897339 2373 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 02:13:00.900517 kubelet[2373]: E1031 02:13:00.897824 2373 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.61.6:6443/api/v1/namespaces/default/events\": dial tcp 10.230.61.6:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-xg3om.gb1.brightbox.com.18737198eed2ce3b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xg3om.gb1.brightbox.com,UID:srv-xg3om.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xg3om.gb1.brightbox.com,},FirstTimestamp:2025-10-31 02:13:00.887756347 +0000 UTC m=+1.155696467,LastTimestamp:2025-10-31 
02:13:00.887756347 +0000 UTC m=+1.155696467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xg3om.gb1.brightbox.com,}" Oct 31 02:13:00.905811 kubelet[2373]: I1031 02:13:00.905523 2373 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 02:13:00.908216 kubelet[2373]: I1031 02:13:00.906356 2373 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 02:13:00.917833 kubelet[2373]: E1031 02:13:00.917799 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-xg3om.gb1.brightbox.com\" not found" Oct 31 02:13:00.917954 kubelet[2373]: I1031 02:13:00.917860 2373 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 31 02:13:00.918660 kubelet[2373]: I1031 02:13:00.918635 2373 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 31 02:13:00.919278 kubelet[2373]: I1031 02:13:00.918769 2373 reconciler.go:26] "Reconciler: start to sync state" Oct 31 02:13:00.919894 kubelet[2373]: E1031 02:13:00.919859 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.61.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 02:13:00.920241 kubelet[2373]: I1031 02:13:00.920213 2373 factory.go:223] Registration of the systemd container factory successfully Oct 31 02:13:00.920350 kubelet[2373]: I1031 02:13:00.920327 2373 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 02:13:00.924975 kubelet[2373]: E1031 02:13:00.924895 2373 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xg3om.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.6:6443: connect: connection refused" interval="200ms" Oct 31 02:13:00.925711 kubelet[2373]: E1031 02:13:00.925485 2373 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 02:13:00.925795 kubelet[2373]: I1031 02:13:00.925749 2373 factory.go:223] Registration of the containerd container factory successfully Oct 31 02:13:00.944652 kubelet[2373]: I1031 02:13:00.944587 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Oct 31 02:13:00.950007 kubelet[2373]: I1031 02:13:00.949460 2373 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Oct 31 02:13:00.950007 kubelet[2373]: I1031 02:13:00.949516 2373 status_manager.go:230] "Starting to sync pod status with apiserver" Oct 31 02:13:00.950007 kubelet[2373]: I1031 02:13:00.949574 2373 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 31 02:13:00.950007 kubelet[2373]: I1031 02:13:00.949601 2373 kubelet.go:2436] "Starting kubelet main sync loop" Oct 31 02:13:00.950007 kubelet[2373]: E1031 02:13:00.949698 2373 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 02:13:00.957719 kubelet[2373]: E1031 02:13:00.957683 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.61.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 02:13:00.975805 kubelet[2373]: I1031 02:13:00.975771 2373 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 02:13:00.976006 kubelet[2373]: I1031 02:13:00.975985 2373 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 02:13:00.976138 kubelet[2373]: I1031 02:13:00.976119 2373 state_mem.go:36] "Initialized new in-memory state store" Oct 31 02:13:00.981012 kubelet[2373]: I1031 02:13:00.980598 2373 policy_none.go:49] "None policy: Start" Oct 31 02:13:00.981012 kubelet[2373]: I1031 02:13:00.980648 2373 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 31 02:13:00.981012 kubelet[2373]: I1031 02:13:00.980682 2373 state_mem.go:35] "Initializing new in-memory state store" Oct 31 02:13:00.994916 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 02:13:01.015033 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Oct 31 02:13:01.018134 kubelet[2373]: E1031 02:13:01.018045 2373 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-xg3om.gb1.brightbox.com\" not found" Oct 31 02:13:01.022319 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 02:13:01.032737 kubelet[2373]: E1031 02:13:01.032678 2373 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 02:13:01.033053 kubelet[2373]: I1031 02:13:01.033026 2373 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 02:13:01.033148 kubelet[2373]: I1031 02:13:01.033070 2373 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 02:13:01.034677 kubelet[2373]: I1031 02:13:01.034060 2373 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 02:13:01.036730 kubelet[2373]: E1031 02:13:01.036367 2373 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 02:13:01.036730 kubelet[2373]: E1031 02:13:01.036588 2373 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-xg3om.gb1.brightbox.com\" not found" Oct 31 02:13:01.073713 systemd[1]: Created slice kubepods-burstable-pod61d1cb99736313431c468ea7ecf6dae8.slice - libcontainer container kubepods-burstable-pod61d1cb99736313431c468ea7ecf6dae8.slice. 
Oct 31 02:13:01.090759 kubelet[2373]: E1031 02:13:01.090596 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.099099 systemd[1]: Created slice kubepods-burstable-poda76798987d9ffc47bd4a3422a7f289a2.slice - libcontainer container kubepods-burstable-poda76798987d9ffc47bd4a3422a7f289a2.slice. Oct 31 02:13:01.109619 kubelet[2373]: E1031 02:13:01.109308 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.113412 systemd[1]: Created slice kubepods-burstable-pod2bbed94c00883c647d2a55172dde75de.slice - libcontainer container kubepods-burstable-pod2bbed94c00883c647d2a55172dde75de.slice. Oct 31 02:13:01.116209 kubelet[2373]: E1031 02:13:01.115945 2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120327 kubelet[2373]: I1031 02:13:01.120232 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-k8s-certs\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120327 kubelet[2373]: I1031 02:13:01.120285 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-ca-certs\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" 
Oct 31 02:13:01.120509 kubelet[2373]: I1031 02:13:01.120365 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-kubeconfig\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120509 kubelet[2373]: I1031 02:13:01.120435 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bbed94c00883c647d2a55172dde75de-kubeconfig\") pod \"kube-scheduler-srv-xg3om.gb1.brightbox.com\" (UID: \"2bbed94c00883c647d2a55172dde75de\") " pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120509 kubelet[2373]: I1031 02:13:01.120474 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-ca-certs\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120688 kubelet[2373]: I1031 02:13:01.120522 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120688 kubelet[2373]: I1031 02:13:01.120627 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-flexvolume-dir\") pod 
\"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120688 kubelet[2373]: I1031 02:13:01.120664 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-k8s-certs\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.120846 kubelet[2373]: I1031 02:13:01.120769 2373 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.125908 kubelet[2373]: E1031 02:13:01.125857 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xg3om.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.6:6443: connect: connection refused" interval="400ms" Oct 31 02:13:01.136974 kubelet[2373]: I1031 02:13:01.136574 2373 kubelet_node_status.go:75] "Attempting to register node" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.137206 kubelet[2373]: E1031 02:13:01.137147 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.61.6:6443/api/v1/nodes\": dial tcp 10.230.61.6:6443: connect: connection refused" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.341323 kubelet[2373]: I1031 02:13:01.340760 2373 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.341323 kubelet[2373]: E1031 02:13:01.341280 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.61.6:6443/api/v1/nodes\": dial tcp 10.230.61.6:6443: connect: connection refused" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.393154 containerd[1506]: time="2025-10-31T02:13:01.393014537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xg3om.gb1.brightbox.com,Uid:61d1cb99736313431c468ea7ecf6dae8,Namespace:kube-system,Attempt:0,}" Oct 31 02:13:01.419060 containerd[1506]: time="2025-10-31T02:13:01.419005147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xg3om.gb1.brightbox.com,Uid:2bbed94c00883c647d2a55172dde75de,Namespace:kube-system,Attempt:0,}" Oct 31 02:13:01.419674 containerd[1506]: time="2025-10-31T02:13:01.419013155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xg3om.gb1.brightbox.com,Uid:a76798987d9ffc47bd4a3422a7f289a2,Namespace:kube-system,Attempt:0,}" Oct 31 02:13:01.526833 kubelet[2373]: E1031 02:13:01.526762 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xg3om.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.6:6443: connect: connection refused" interval="800ms" Oct 31 02:13:01.730624 kubelet[2373]: E1031 02:13:01.730445 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.61.6:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-xg3om.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 02:13:01.745501 kubelet[2373]: I1031 02:13:01.745455 2373 kubelet_node_status.go:75] "Attempting to register node" node="srv-xg3om.gb1.brightbox.com" Oct 
31 02:13:01.745982 kubelet[2373]: E1031 02:13:01.745891 2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.61.6:6443/api/v1/nodes\": dial tcp 10.230.61.6:6443: connect: connection refused" node="srv-xg3om.gb1.brightbox.com" Oct 31 02:13:01.830648 kubelet[2373]: E1031 02:13:01.830585 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.61.6:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 02:13:02.151618 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883495134.mount: Deactivated successfully. Oct 31 02:13:02.158472 containerd[1506]: time="2025-10-31T02:13:02.158395391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 02:13:02.160318 containerd[1506]: time="2025-10-31T02:13:02.160208773Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 02:13:02.161426 containerd[1506]: time="2025-10-31T02:13:02.161369893Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 02:13:02.162743 containerd[1506]: time="2025-10-31T02:13:02.162698807Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 02:13:02.164108 containerd[1506]: time="2025-10-31T02:13:02.163957329Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 
02:13:02.165318 containerd[1506]: time="2025-10-31T02:13:02.165023521Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 31 02:13:02.165318 containerd[1506]: time="2025-10-31T02:13:02.165147639Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 02:13:02.174817 kubelet[2373]: E1031 02:13:02.174740 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.61.6:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 02:13:02.177096 containerd[1506]: time="2025-10-31T02:13:02.177048794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 02:13:02.179828 containerd[1506]: time="2025-10-31T02:13:02.179777777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 786.48137ms" Oct 31 02:13:02.183023 containerd[1506]: time="2025-10-31T02:13:02.182492917Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
763.250818ms" Oct 31 02:13:02.185090 containerd[1506]: time="2025-10-31T02:13:02.184083490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.564011ms" Oct 31 02:13:02.329235 kubelet[2373]: E1031 02:13:02.328672 2373 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.61.6:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-xg3om.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.61.6:6443: connect: connection refused" interval="1.6s" Oct 31 02:13:02.359644 kubelet[2373]: E1031 02:13:02.359542 2373 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.61.6:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 02:13:02.367604 containerd[1506]: time="2025-10-31T02:13:02.366698747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:13:02.367604 containerd[1506]: time="2025-10-31T02:13:02.366783369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:13:02.367604 containerd[1506]: time="2025-10-31T02:13:02.366805339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.367604 containerd[1506]: time="2025-10-31T02:13:02.366919865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.393043 containerd[1506]: time="2025-10-31T02:13:02.392928293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:13:02.393373 containerd[1506]: time="2025-10-31T02:13:02.393018952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:13:02.393373 containerd[1506]: time="2025-10-31T02:13:02.393043972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.396105 containerd[1506]: time="2025-10-31T02:13:02.396029797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.396260 containerd[1506]: time="2025-10-31T02:13:02.396153873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:13:02.396260 containerd[1506]: time="2025-10-31T02:13:02.396222077Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:13:02.398239 containerd[1506]: time="2025-10-31T02:13:02.396239005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.398239 containerd[1506]: time="2025-10-31T02:13:02.396344091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:02.448365 systemd[1]: Started cri-containerd-265401f85a18dfa0f410b2d6487deaae667586b61b2229d99ff8316494ad422c.scope - libcontainer container 265401f85a18dfa0f410b2d6487deaae667586b61b2229d99ff8316494ad422c. 
Oct 31 02:13:02.452651 systemd[1]: Started cri-containerd-646994fe27f1c0c365599f938002cd7e529ed28122012d3df0f3f9f502635872.scope - libcontainer container 646994fe27f1c0c365599f938002cd7e529ed28122012d3df0f3f9f502635872.
Oct 31 02:13:02.469220 systemd[1]: Started cri-containerd-88f0829c1bde83f83a76bcee2ac1e0699665f806ca293ed4b4968f3c45887b78.scope - libcontainer container 88f0829c1bde83f83a76bcee2ac1e0699665f806ca293ed4b4968f3c45887b78.
Oct 31 02:13:02.552910 kubelet[2373]: I1031 02:13:02.552870    2373 kubelet_node_status.go:75] "Attempting to register node" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:02.559179 kubelet[2373]: E1031 02:13:02.558282    2373 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.61.6:6443/api/v1/nodes\": dial tcp 10.230.61.6:6443: connect: connection refused" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:02.608123 containerd[1506]: time="2025-10-31T02:13:02.608045417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-xg3om.gb1.brightbox.com,Uid:a76798987d9ffc47bd4a3422a7f289a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"88f0829c1bde83f83a76bcee2ac1e0699665f806ca293ed4b4968f3c45887b78\""
Oct 31 02:13:02.612018 containerd[1506]: time="2025-10-31T02:13:02.611963219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-xg3om.gb1.brightbox.com,Uid:61d1cb99736313431c468ea7ecf6dae8,Namespace:kube-system,Attempt:0,} returns sandbox id \"265401f85a18dfa0f410b2d6487deaae667586b61b2229d99ff8316494ad422c\""
Oct 31 02:13:02.613434 containerd[1506]: time="2025-10-31T02:13:02.613397121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-xg3om.gb1.brightbox.com,Uid:2bbed94c00883c647d2a55172dde75de,Namespace:kube-system,Attempt:0,} returns sandbox id \"646994fe27f1c0c365599f938002cd7e529ed28122012d3df0f3f9f502635872\""
Oct 31 02:13:02.622957 containerd[1506]: time="2025-10-31T02:13:02.622919927Z" level=info msg="CreateContainer within sandbox \"646994fe27f1c0c365599f938002cd7e529ed28122012d3df0f3f9f502635872\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Oct 31 02:13:02.625004 containerd[1506]: time="2025-10-31T02:13:02.624941591Z" level=info msg="CreateContainer within sandbox \"88f0829c1bde83f83a76bcee2ac1e0699665f806ca293ed4b4968f3c45887b78\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Oct 31 02:13:02.625956 containerd[1506]: time="2025-10-31T02:13:02.625919963Z" level=info msg="CreateContainer within sandbox \"265401f85a18dfa0f410b2d6487deaae667586b61b2229d99ff8316494ad422c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Oct 31 02:13:02.645842 containerd[1506]: time="2025-10-31T02:13:02.645786893Z" level=info msg="CreateContainer within sandbox \"88f0829c1bde83f83a76bcee2ac1e0699665f806ca293ed4b4968f3c45887b78\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"286d61c78101c845a144a59813b606ca825604c275ff295375bf4f4662eae054\""
Oct 31 02:13:02.647199 containerd[1506]: time="2025-10-31T02:13:02.647038910Z" level=info msg="StartContainer for \"286d61c78101c845a144a59813b606ca825604c275ff295375bf4f4662eae054\""
Oct 31 02:13:02.654510 containerd[1506]: time="2025-10-31T02:13:02.654330252Z" level=info msg="CreateContainer within sandbox \"646994fe27f1c0c365599f938002cd7e529ed28122012d3df0f3f9f502635872\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4dd9e3c9e7978a2e15fd2e2c1808319fd809119d9f0c6661b9fc9839c50b1976\""
Oct 31 02:13:02.655513 containerd[1506]: time="2025-10-31T02:13:02.655438906Z" level=info msg="CreateContainer within sandbox \"265401f85a18dfa0f410b2d6487deaae667586b61b2229d99ff8316494ad422c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31b79fc4f6903ec5fdb02eb1c85a91aacf8c8226a5474ea8c8a5fc7c2a58133a\""
Oct 31 02:13:02.657231 containerd[1506]: time="2025-10-31T02:13:02.656751219Z" level=info msg="StartContainer for \"4dd9e3c9e7978a2e15fd2e2c1808319fd809119d9f0c6661b9fc9839c50b1976\""
Oct 31 02:13:02.658508 containerd[1506]: time="2025-10-31T02:13:02.658449987Z" level=info msg="StartContainer for \"31b79fc4f6903ec5fdb02eb1c85a91aacf8c8226a5474ea8c8a5fc7c2a58133a\""
Oct 31 02:13:02.702394 systemd[1]: Started cri-containerd-286d61c78101c845a144a59813b606ca825604c275ff295375bf4f4662eae054.scope - libcontainer container 286d61c78101c845a144a59813b606ca825604c275ff295375bf4f4662eae054.
Oct 31 02:13:02.721372 systemd[1]: Started cri-containerd-4dd9e3c9e7978a2e15fd2e2c1808319fd809119d9f0c6661b9fc9839c50b1976.scope - libcontainer container 4dd9e3c9e7978a2e15fd2e2c1808319fd809119d9f0c6661b9fc9839c50b1976.
Oct 31 02:13:02.733320 systemd[1]: Started cri-containerd-31b79fc4f6903ec5fdb02eb1c85a91aacf8c8226a5474ea8c8a5fc7c2a58133a.scope - libcontainer container 31b79fc4f6903ec5fdb02eb1c85a91aacf8c8226a5474ea8c8a5fc7c2a58133a.
Oct 31 02:13:02.831078 containerd[1506]: time="2025-10-31T02:13:02.830457682Z" level=info msg="StartContainer for \"31b79fc4f6903ec5fdb02eb1c85a91aacf8c8226a5474ea8c8a5fc7c2a58133a\" returns successfully"
Oct 31 02:13:02.839692 containerd[1506]: time="2025-10-31T02:13:02.839600515Z" level=info msg="StartContainer for \"286d61c78101c845a144a59813b606ca825604c275ff295375bf4f4662eae054\" returns successfully"
Oct 31 02:13:02.847205 containerd[1506]: time="2025-10-31T02:13:02.846483143Z" level=info msg="StartContainer for \"4dd9e3c9e7978a2e15fd2e2c1808319fd809119d9f0c6661b9fc9839c50b1976\" returns successfully"
Oct 31 02:13:02.895672 kubelet[2373]: E1031 02:13:02.895613    2373 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.61.6:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.61.6:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Oct 31 02:13:02.977558 kubelet[2373]: E1031 02:13:02.976878    2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:02.980501 kubelet[2373]: E1031 02:13:02.980448    2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:02.986634 kubelet[2373]: E1031 02:13:02.986589    2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:03.991959 kubelet[2373]: E1031 02:13:03.990010    2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:03.991959 kubelet[2373]: E1031 02:13:03.990487    2373 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:04.161466 kubelet[2373]: I1031 02:13:04.161415    2373 kubelet_node_status.go:75] "Attempting to register node" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.362116 kubelet[2373]: E1031 02:13:06.361911    2373 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-xg3om.gb1.brightbox.com\" not found" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.388091 kubelet[2373]: E1031 02:13:06.387708    2373 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-xg3om.gb1.brightbox.com.18737198eed2ce3b  default    0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-xg3om.gb1.brightbox.com,UID:srv-xg3om.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-xg3om.gb1.brightbox.com,},FirstTimestamp:2025-10-31 02:13:00.887756347 +0000 UTC m=+1.155696467,LastTimestamp:2025-10-31 02:13:00.887756347 +0000 UTC m=+1.155696467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-xg3om.gb1.brightbox.com,}"
Oct 31 02:13:06.414696 kubelet[2373]: I1031 02:13:06.414657    2373 kubelet_node_status.go:78] "Successfully registered node" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.421790 kubelet[2373]: I1031 02:13:06.421285    2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.463263 kubelet[2373]: E1031 02:13:06.462765    2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.463263 kubelet[2373]: I1031 02:13:06.462809    2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.466660 kubelet[2373]: I1031 02:13:06.466628    2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.470590 kubelet[2373]: E1031 02:13:06.470548    2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.470590 kubelet[2373]: I1031 02:13:06.470589    2373 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.479007 kubelet[2373]: E1031 02:13:06.478459    2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.481498 kubelet[2373]: E1031 02:13:06.481469    2373 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-xg3om.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:06.887785 kubelet[2373]: I1031 02:13:06.887255    2373 apiserver.go:52] "Watching apiserver"
Oct 31 02:13:06.919620 kubelet[2373]: I1031 02:13:06.919471    2373 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 31 02:13:09.213924 systemd[1]: Reloading requested from client PID 2660 ('systemctl') (unit session-11.scope)...
Oct 31 02:13:09.213978 systemd[1]: Reloading...
Oct 31 02:13:09.362264 zram_generator::config[2699]: No configuration found.
Oct 31 02:13:09.563811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Oct 31 02:13:09.699624 systemd[1]: Reloading finished in 484 ms.
Oct 31 02:13:09.772636 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:13:09.792463 systemd[1]: kubelet.service: Deactivated successfully.
Oct 31 02:13:09.792916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:13:09.793039 systemd[1]: kubelet.service: Consumed 1.764s CPU time, 129.6M memory peak, 0B memory swap peak.
Oct 31 02:13:09.799612 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 02:13:10.226897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Oct 31 02:13:10.237794 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Oct 31 02:13:10.344860 kubelet[2763]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 02:13:10.344860 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Oct 31 02:13:10.344860 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 31 02:13:10.350224 kubelet[2763]: I1031 02:13:10.348377    2763 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 31 02:13:10.363232 kubelet[2763]: I1031 02:13:10.362533    2763 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Oct 31 02:13:10.363798 kubelet[2763]: I1031 02:13:10.363725    2763 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 31 02:13:10.364251 kubelet[2763]: I1031 02:13:10.364158    2763 server.go:956] "Client rotation is on, will bootstrap in background"
Oct 31 02:13:10.368223 kubelet[2763]: I1031 02:13:10.368187    2763 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Oct 31 02:13:10.389134 kubelet[2763]: I1031 02:13:10.387240    2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 31 02:13:10.408417 kubelet[2763]: E1031 02:13:10.406566    2763 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Oct 31 02:13:10.408557 kubelet[2763]: I1031 02:13:10.408430    2763 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Oct 31 02:13:10.420178 kubelet[2763]: I1031 02:13:10.418713    2763 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Oct 31 02:13:10.420642 kubelet[2763]: I1031 02:13:10.419141    2763 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 31 02:13:10.424093 kubelet[2763]: I1031 02:13:10.420917    2763 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-xg3om.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 31 02:13:10.425174 kubelet[2763]: I1031 02:13:10.425057    2763 topology_manager.go:138] "Creating topology manager with none policy"
Oct 31 02:13:10.425552 kubelet[2763]: I1031 02:13:10.425439    2763 container_manager_linux.go:303] "Creating device plugin manager"
Oct 31 02:13:10.426178 kubelet[2763]: I1031 02:13:10.426137    2763 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 02:13:10.426737 kubelet[2763]: I1031 02:13:10.426633    2763 kubelet.go:480] "Attempting to sync node with API server"
Oct 31 02:13:10.427090 kubelet[2763]: I1031 02:13:10.427066    2763 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 02:13:10.427370 kubelet[2763]: I1031 02:13:10.427242    2763 kubelet.go:386] "Adding apiserver pod source"
Oct 31 02:13:10.427370 kubelet[2763]: I1031 02:13:10.427271    2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 02:13:10.432447 kubelet[2763]: I1031 02:13:10.431845    2763 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Oct 31 02:13:10.434412 kubelet[2763]: I1031 02:13:10.434032    2763 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 02:13:10.449356 kubelet[2763]: I1031 02:13:10.449228    2763 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Oct 31 02:13:10.450095 kubelet[2763]: I1031 02:13:10.450074    2763 server.go:1289] "Started kubelet"
Oct 31 02:13:10.456005 kubelet[2763]: I1031 02:13:10.455925    2763 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 02:13:10.459832 kubelet[2763]: I1031 02:13:10.457020    2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 02:13:10.460419 kubelet[2763]: I1031 02:13:10.460355    2763 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 02:13:10.464291 kubelet[2763]: I1031 02:13:10.462531    2763 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 02:13:10.473412 kubelet[2763]: I1031 02:13:10.473369    2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 02:13:10.498347 kubelet[2763]: I1031 02:13:10.496111    2763 factory.go:223] Registration of the systemd container factory successfully
Oct 31 02:13:10.498347 kubelet[2763]: I1031 02:13:10.496733    2763 server.go:317] "Adding debug handlers to kubelet server"
Oct 31 02:13:10.498347 kubelet[2763]: I1031 02:13:10.497322    2763 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 02:13:10.500269 kubelet[2763]: I1031 02:13:10.474722    2763 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Oct 31 02:13:10.504352 kubelet[2763]: I1031 02:13:10.474707    2763 volume_manager.go:297] "Starting Kubelet Volume Manager"
Oct 31 02:13:10.506233 kubelet[2763]: E1031 02:13:10.474758    2763 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-xg3om.gb1.brightbox.com\" not found"
Oct 31 02:13:10.506658 kubelet[2763]: I1031 02:13:10.506624    2763 reconciler.go:26] "Reconciler: start to sync state"
Oct 31 02:13:10.515384 kubelet[2763]: I1031 02:13:10.515356    2763 factory.go:223] Registration of the containerd container factory successfully
Oct 31 02:13:10.546532 kubelet[2763]: E1031 02:13:10.516472    2763 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Oct 31 02:13:10.607221 kubelet[2763]: I1031 02:13:10.606337    2763 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Oct 31 02:13:10.611199 kubelet[2763]: I1031 02:13:10.610895    2763 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Oct 31 02:13:10.611199 kubelet[2763]: I1031 02:13:10.610944    2763 status_manager.go:230] "Starting to sync pod status with apiserver"
Oct 31 02:13:10.611199 kubelet[2763]: I1031 02:13:10.610980    2763 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Oct 31 02:13:10.611199 kubelet[2763]: I1031 02:13:10.610996    2763 kubelet.go:2436] "Starting kubelet main sync loop"
Oct 31 02:13:10.611199 kubelet[2763]: E1031 02:13:10.611066    2763 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 02:13:10.709257 kubelet[2763]: I1031 02:13:10.708808    2763 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 02:13:10.709257 kubelet[2763]: I1031 02:13:10.708839    2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 02:13:10.709257 kubelet[2763]: I1031 02:13:10.708877    2763 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 02:13:10.709257 kubelet[2763]: I1031 02:13:10.709097    2763 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 31 02:13:10.709257 kubelet[2763]: I1031 02:13:10.709128    2763 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 31 02:13:10.710943 kubelet[2763]: I1031 02:13:10.710923    2763 policy_none.go:49] "None policy: Start"
Oct 31 02:13:10.711099 kubelet[2763]: I1031 02:13:10.711078    2763 memory_manager.go:186] "Starting memorymanager" policy="None"
Oct 31 02:13:10.711261 kubelet[2763]: I1031 02:13:10.711241    2763 state_mem.go:35] "Initializing new in-memory state store"
Oct 31 02:13:10.712223 kubelet[2763]: I1031 02:13:10.711539    2763 state_mem.go:75] "Updated machine memory state"
Oct 31 02:13:10.712531 kubelet[2763]: E1031 02:13:10.712302    2763 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Oct 31 02:13:10.732426 kubelet[2763]: E1031 02:13:10.730735    2763 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 31 02:13:10.732426 kubelet[2763]: I1031 02:13:10.731052    2763 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 02:13:10.734451 kubelet[2763]: I1031 02:13:10.734238    2763 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 02:13:10.739526 kubelet[2763]: I1031 02:13:10.737029    2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 02:13:10.753248 kubelet[2763]: E1031 02:13:10.751550    2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 02:13:10.865959 kubelet[2763]: I1031 02:13:10.865893    2763 kubelet_node_status.go:75] "Attempting to register node" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.880373 kubelet[2763]: I1031 02:13:10.879871    2763 kubelet_node_status.go:124] "Node was previously registered" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.880373 kubelet[2763]: I1031 02:13:10.880226    2763 kubelet_node_status.go:78] "Successfully registered node" node="srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.914021 kubelet[2763]: I1031 02:13:10.913982    2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.915127 kubelet[2763]: I1031 02:13:10.914410    2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.917525 kubelet[2763]: I1031 02:13:10.916696    2763 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:10.928064 kubelet[2763]: I1031 02:13:10.928029    2763 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 31 02:13:10.931326 kubelet[2763]: I1031 02:13:10.930652    2763 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 31 02:13:10.931634 kubelet[2763]: I1031 02:13:10.931578    2763 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Oct 31 02:13:11.010350 kubelet[2763]: I1031 02:13:11.009149    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-k8s-certs\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.010871 kubelet[2763]: I1031 02:13:11.010378    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-usr-share-ca-certificates\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.010871 kubelet[2763]: I1031 02:13:11.010426    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-flexvolume-dir\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.010871 kubelet[2763]: I1031 02:13:11.010834    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-kubeconfig\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.011027 kubelet[2763]: I1031 02:13:11.010908    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.011027 kubelet[2763]: I1031 02:13:11.010984    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61d1cb99736313431c468ea7ecf6dae8-ca-certs\") pod \"kube-apiserver-srv-xg3om.gb1.brightbox.com\" (UID: \"61d1cb99736313431c468ea7ecf6dae8\") " pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.011127 kubelet[2763]: I1031 02:13:11.011045    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-ca-certs\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.011127 kubelet[2763]: I1031 02:13:11.011078    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a76798987d9ffc47bd4a3422a7f289a2-k8s-certs\") pod \"kube-controller-manager-srv-xg3om.gb1.brightbox.com\" (UID: \"a76798987d9ffc47bd4a3422a7f289a2\") " pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.011127 kubelet[2763]: I1031 02:13:11.011110    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2bbed94c00883c647d2a55172dde75de-kubeconfig\") pod \"kube-scheduler-srv-xg3om.gb1.brightbox.com\" (UID: \"2bbed94c00883c647d2a55172dde75de\") " pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com"
Oct 31 02:13:11.431534 kubelet[2763]: I1031 02:13:11.429650    2763 apiserver.go:52] "Watching apiserver"
Oct 31 02:13:11.503156 kubelet[2763]: I1031 02:13:11.503039    2763 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Oct 31 02:13:11.607436 kubelet[2763]: I1031 02:13:11.606581    2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-xg3om.gb1.brightbox.com" podStartSLOduration=1.60655037 podStartE2EDuration="1.60655037s" podCreationTimestamp="2025-10-31 02:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:13:11.606129881 +0000 UTC m=+1.356288960" watchObservedRunningTime="2025-10-31 02:13:11.60655037 +0000 UTC m=+1.356709446"
Oct 31 02:13:11.627366 kubelet[2763]: I1031 02:13:11.627123    2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-xg3om.gb1.brightbox.com" podStartSLOduration=1.62710507 podStartE2EDuration="1.62710507s" podCreationTimestamp="2025-10-31 02:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:13:11.626876709 +0000 UTC m=+1.377035787" watchObservedRunningTime="2025-10-31 02:13:11.62710507 +0000 UTC m=+1.377264139"
Oct 31 02:13:11.686560 kubelet[2763]: I1031 02:13:11.686292    2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-xg3om.gb1.brightbox.com" podStartSLOduration=1.686258097 podStartE2EDuration="1.686258097s" podCreationTimestamp="2025-10-31 02:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:13:11.643934407 +0000 UTC m=+1.394093501" watchObservedRunningTime="2025-10-31 02:13:11.686258097 +0000 UTC m=+1.436417202"
Oct 31 02:13:15.222662 kubelet[2763]: I1031 02:13:15.222505    2763 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 31 02:13:15.224335 containerd[1506]: time="2025-10-31T02:13:15.223705984Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 31 02:13:15.225774 kubelet[2763]: I1031 02:13:15.224333    2763 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 31 02:13:16.210509 systemd[1]: Created slice kubepods-besteffort-pod165d2094_07e6_4b96_87a9_7a14a3706d97.slice - libcontainer container kubepods-besteffort-pod165d2094_07e6_4b96_87a9_7a14a3706d97.slice.
Oct 31 02:13:16.246566 kubelet[2763]: I1031 02:13:16.246371    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/165d2094-07e6-4b96-87a9-7a14a3706d97-lib-modules\") pod \"kube-proxy-sdw7g\" (UID: \"165d2094-07e6-4b96-87a9-7a14a3706d97\") " pod="kube-system/kube-proxy-sdw7g"
Oct 31 02:13:16.246566 kubelet[2763]: I1031 02:13:16.246441    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jd42\" (UniqueName: \"kubernetes.io/projected/165d2094-07e6-4b96-87a9-7a14a3706d97-kube-api-access-9jd42\") pod \"kube-proxy-sdw7g\" (UID: \"165d2094-07e6-4b96-87a9-7a14a3706d97\") " pod="kube-system/kube-proxy-sdw7g"
Oct 31 02:13:16.246566 kubelet[2763]: I1031 02:13:16.246576    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/165d2094-07e6-4b96-87a9-7a14a3706d97-kube-proxy\") pod \"kube-proxy-sdw7g\" (UID: \"165d2094-07e6-4b96-87a9-7a14a3706d97\") " pod="kube-system/kube-proxy-sdw7g"
Oct 31 02:13:16.248354 kubelet[2763]: I1031 02:13:16.246623    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/165d2094-07e6-4b96-87a9-7a14a3706d97-xtables-lock\") pod \"kube-proxy-sdw7g\" (UID: \"165d2094-07e6-4b96-87a9-7a14a3706d97\") " pod="kube-system/kube-proxy-sdw7g"
Oct 31 02:13:16.449154 systemd[1]: Created slice kubepods-besteffort-pod29c86bc4_6d0b_4f6e_baac_62ff05d1a475.slice - libcontainer container kubepods-besteffort-pod29c86bc4_6d0b_4f6e_baac_62ff05d1a475.slice.
Oct 31 02:13:16.526777 containerd[1506]: time="2025-10-31T02:13:16.526624263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdw7g,Uid:165d2094-07e6-4b96-87a9-7a14a3706d97,Namespace:kube-system,Attempt:0,}"
Oct 31 02:13:16.550145 kubelet[2763]: I1031 02:13:16.550076    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzhk5\" (UniqueName: \"kubernetes.io/projected/29c86bc4-6d0b-4f6e-baac-62ff05d1a475-kube-api-access-kzhk5\") pod \"tigera-operator-7dcd859c48-r5csv\" (UID: \"29c86bc4-6d0b-4f6e-baac-62ff05d1a475\") " pod="tigera-operator/tigera-operator-7dcd859c48-r5csv"
Oct 31 02:13:16.550330 kubelet[2763]: I1031 02:13:16.550155    2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/29c86bc4-6d0b-4f6e-baac-62ff05d1a475-var-lib-calico\") pod \"tigera-operator-7dcd859c48-r5csv\" (UID: \"29c86bc4-6d0b-4f6e-baac-62ff05d1a475\") " pod="tigera-operator/tigera-operator-7dcd859c48-r5csv"
Oct 31 02:13:16.576988 containerd[1506]: time="2025-10-31T02:13:16.576792944Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 02:13:16.577972 containerd[1506]: time="2025-10-31T02:13:16.577664674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 02:13:16.577972 containerd[1506]: time="2025-10-31T02:13:16.577746569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 02:13:16.577972 containerd[1506]: time="2025-10-31T02:13:16.577940332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 02:13:16.632534 systemd[1]: Started cri-containerd-b823ea6620be71212a337ed8e09dff008aa9a339b8a788697612f5aec663ee27.scope - libcontainer container b823ea6620be71212a337ed8e09dff008aa9a339b8a788697612f5aec663ee27.
Oct 31 02:13:16.703244 containerd[1506]: time="2025-10-31T02:13:16.703082032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sdw7g,Uid:165d2094-07e6-4b96-87a9-7a14a3706d97,Namespace:kube-system,Attempt:0,} returns sandbox id \"b823ea6620be71212a337ed8e09dff008aa9a339b8a788697612f5aec663ee27\""
Oct 31 02:13:16.719155 containerd[1506]: time="2025-10-31T02:13:16.719058878Z" level=info msg="CreateContainer within sandbox \"b823ea6620be71212a337ed8e09dff008aa9a339b8a788697612f5aec663ee27\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 31 02:13:16.740542 containerd[1506]: time="2025-10-31T02:13:16.740458486Z" level=info msg="CreateContainer within sandbox \"b823ea6620be71212a337ed8e09dff008aa9a339b8a788697612f5aec663ee27\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23ab663b596722572dce3440bd992db8a1659fd9cc5460d77587e646a43a5297\""
Oct 31 02:13:16.741538 containerd[1506]: time="2025-10-31T02:13:16.741496579Z" level=info msg="StartContainer for \"23ab663b596722572dce3440bd992db8a1659fd9cc5460d77587e646a43a5297\""
Oct 31 02:13:16.757274 containerd[1506]: time="2025-10-31T02:13:16.756651484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r5csv,Uid:29c86bc4-6d0b-4f6e-baac-62ff05d1a475,Namespace:tigera-operator,Attempt:0,}"
Oct 31 02:13:16.810409 systemd[1]: Started cri-containerd-23ab663b596722572dce3440bd992db8a1659fd9cc5460d77587e646a43a5297.scope - libcontainer container 23ab663b596722572dce3440bd992db8a1659fd9cc5460d77587e646a43a5297.
Oct 31 02:13:16.826249 containerd[1506]: time="2025-10-31T02:13:16.824225564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Oct 31 02:13:16.826249 containerd[1506]: time="2025-10-31T02:13:16.824385293Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Oct 31 02:13:16.826249 containerd[1506]: time="2025-10-31T02:13:16.824406769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 02:13:16.826249 containerd[1506]: time="2025-10-31T02:13:16.824566232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Oct 31 02:13:16.874498 systemd[1]: Started cri-containerd-e1ed488c84a29245fbcb3b393fd56d44ebb52c23644af5ff7df75afdcac264bc.scope - libcontainer container e1ed488c84a29245fbcb3b393fd56d44ebb52c23644af5ff7df75afdcac264bc.
Oct 31 02:13:16.894020 containerd[1506]: time="2025-10-31T02:13:16.893560020Z" level=info msg="StartContainer for \"23ab663b596722572dce3440bd992db8a1659fd9cc5460d77587e646a43a5297\" returns successfully"
Oct 31 02:13:17.020520 containerd[1506]: time="2025-10-31T02:13:17.020383457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-r5csv,Uid:29c86bc4-6d0b-4f6e-baac-62ff05d1a475,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e1ed488c84a29245fbcb3b393fd56d44ebb52c23644af5ff7df75afdcac264bc\""
Oct 31 02:13:17.025675 containerd[1506]: time="2025-10-31T02:13:17.025150171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Oct 31 02:13:17.734912 kubelet[2763]: I1031 02:13:17.734509    2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sdw7g" podStartSLOduration=1.734458936 podStartE2EDuration="1.734458936s" podCreationTimestamp="2025-10-31 02:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:13:17.734137371 +0000 UTC m=+7.484296454" watchObservedRunningTime="2025-10-31 02:13:17.734458936 +0000 UTC m=+7.484618038"
Oct 31 02:13:19.427861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339855298.mount: Deactivated successfully.
Oct 31 02:13:21.034848 containerd[1506]: time="2025-10-31T02:13:21.034266952Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:13:21.036791 containerd[1506]: time="2025-10-31T02:13:21.036135199Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Oct 31 02:13:21.039222 containerd[1506]: time="2025-10-31T02:13:21.038500861Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:13:21.043313 containerd[1506]: time="2025-10-31T02:13:21.043263663Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 02:13:21.044934 containerd[1506]: time="2025-10-31T02:13:21.044534748Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.019300251s"
Oct 31 02:13:21.044934 containerd[1506]: time="2025-10-31T02:13:21.044606812Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Oct 31 02:13:21.071739 containerd[1506]: time="2025-10-31T02:13:21.071277635Z" level=info
msg="CreateContainer within sandbox \"e1ed488c84a29245fbcb3b393fd56d44ebb52c23644af5ff7df75afdcac264bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 02:13:21.124662 containerd[1506]: time="2025-10-31T02:13:21.124579872Z" level=info msg="CreateContainer within sandbox \"e1ed488c84a29245fbcb3b393fd56d44ebb52c23644af5ff7df75afdcac264bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfdb5dcdbc3a3df3b56de0d522d2ce6b641d2b1675a2c786a3453c1281bd1f5b\"" Oct 31 02:13:21.132760 containerd[1506]: time="2025-10-31T02:13:21.130753516Z" level=info msg="StartContainer for \"dfdb5dcdbc3a3df3b56de0d522d2ce6b641d2b1675a2c786a3453c1281bd1f5b\"" Oct 31 02:13:21.191512 systemd[1]: Started cri-containerd-dfdb5dcdbc3a3df3b56de0d522d2ce6b641d2b1675a2c786a3453c1281bd1f5b.scope - libcontainer container dfdb5dcdbc3a3df3b56de0d522d2ce6b641d2b1675a2c786a3453c1281bd1f5b. Oct 31 02:13:21.243960 containerd[1506]: time="2025-10-31T02:13:21.243907923Z" level=info msg="StartContainer for \"dfdb5dcdbc3a3df3b56de0d522d2ce6b641d2b1675a2c786a3453c1281bd1f5b\" returns successfully" Oct 31 02:13:21.734995 kubelet[2763]: I1031 02:13:21.733107 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-r5csv" podStartSLOduration=1.7046931939999999 podStartE2EDuration="5.731132877s" podCreationTimestamp="2025-10-31 02:13:16 +0000 UTC" firstStartedPulling="2025-10-31 02:13:17.023547405 +0000 UTC m=+6.773706469" lastFinishedPulling="2025-10-31 02:13:21.049987088 +0000 UTC m=+10.800146152" observedRunningTime="2025-10-31 02:13:21.7267637 +0000 UTC m=+11.476922790" watchObservedRunningTime="2025-10-31 02:13:21.731132877 +0000 UTC m=+11.481291966" Oct 31 02:13:31.126010 sudo[1774]: pam_unix(sudo:session): session closed for user root Oct 31 02:13:31.288384 sshd[1771]: pam_unix(sshd:session): session closed for user core Oct 31 02:13:31.302449 systemd[1]: 
sshd@8-10.230.61.6:22-147.75.109.163:42496.service: Deactivated successfully. Oct 31 02:13:31.311836 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 02:13:31.312861 systemd[1]: session-11.scope: Consumed 10.815s CPU time, 145.3M memory peak, 0B memory swap peak. Oct 31 02:13:31.314990 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Oct 31 02:13:31.319941 systemd-logind[1484]: Removed session 11. Oct 31 02:13:39.581122 systemd[1]: Created slice kubepods-besteffort-pod48048e83_b324_4191_a4a4_187f2ed18d92.slice - libcontainer container kubepods-besteffort-pod48048e83_b324_4191_a4a4_187f2ed18d92.slice. Oct 31 02:13:39.703248 kubelet[2763]: I1031 02:13:39.703129 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48048e83-b324-4191-a4a4-187f2ed18d92-tigera-ca-bundle\") pod \"calico-typha-56876f869c-9m6hh\" (UID: \"48048e83-b324-4191-a4a4-187f2ed18d92\") " pod="calico-system/calico-typha-56876f869c-9m6hh" Oct 31 02:13:39.703248 kubelet[2763]: I1031 02:13:39.703286 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkzz4\" (UniqueName: \"kubernetes.io/projected/48048e83-b324-4191-a4a4-187f2ed18d92-kube-api-access-xkzz4\") pod \"calico-typha-56876f869c-9m6hh\" (UID: \"48048e83-b324-4191-a4a4-187f2ed18d92\") " pod="calico-system/calico-typha-56876f869c-9m6hh" Oct 31 02:13:39.705995 kubelet[2763]: I1031 02:13:39.703338 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/48048e83-b324-4191-a4a4-187f2ed18d92-typha-certs\") pod \"calico-typha-56876f869c-9m6hh\" (UID: \"48048e83-b324-4191-a4a4-187f2ed18d92\") " pod="calico-system/calico-typha-56876f869c-9m6hh" Oct 31 02:13:39.714439 systemd[1]: Created slice 
kubepods-besteffort-poddc8f2e6a_e5f7_42da_9740_7b12c0c54ed1.slice - libcontainer container kubepods-besteffort-poddc8f2e6a_e5f7_42da_9740_7b12c0c54ed1.slice. Oct 31 02:13:39.804501 kubelet[2763]: I1031 02:13:39.804381 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-policysync\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.804501 kubelet[2763]: I1031 02:13:39.804458 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-xtables-lock\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807135 kubelet[2763]: I1031 02:13:39.804503 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-var-run-calico\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807135 kubelet[2763]: I1031 02:13:39.804658 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-node-certs\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807135 kubelet[2763]: I1031 02:13:39.804829 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8crj\" (UniqueName: \"kubernetes.io/projected/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-kube-api-access-s8crj\") pod \"calico-node-z895v\" 
(UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807135 kubelet[2763]: I1031 02:13:39.804898 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-cni-bin-dir\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807135 kubelet[2763]: I1031 02:13:39.804941 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-cni-log-dir\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807490 kubelet[2763]: I1031 02:13:39.805106 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-flexvol-driver-host\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807490 kubelet[2763]: I1031 02:13:39.805204 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-var-lib-calico\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807490 kubelet[2763]: I1031 02:13:39.805411 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-cni-net-dir\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " 
pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807490 kubelet[2763]: I1031 02:13:39.805475 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-tigera-ca-bundle\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.807490 kubelet[2763]: I1031 02:13:39.805512 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1-lib-modules\") pod \"calico-node-z895v\" (UID: \"dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1\") " pod="calico-system/calico-node-z895v" Oct 31 02:13:39.824745 kubelet[2763]: E1031 02:13:39.824676 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:39.898712 containerd[1506]: time="2025-10-31T02:13:39.897276169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56876f869c-9m6hh,Uid:48048e83-b324-4191-a4a4-187f2ed18d92,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:39.909192 kubelet[2763]: I1031 02:13:39.907111 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1aba93ae-9569-4e3f-92f8-b96678002f38-varrun\") pod \"csi-node-driver-rsz7n\" (UID: \"1aba93ae-9569-4e3f-92f8-b96678002f38\") " pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:39.909192 kubelet[2763]: I1031 02:13:39.907258 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/1aba93ae-9569-4e3f-92f8-b96678002f38-kubelet-dir\") pod \"csi-node-driver-rsz7n\" (UID: \"1aba93ae-9569-4e3f-92f8-b96678002f38\") " pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:39.909192 kubelet[2763]: I1031 02:13:39.907295 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1aba93ae-9569-4e3f-92f8-b96678002f38-registration-dir\") pod \"csi-node-driver-rsz7n\" (UID: \"1aba93ae-9569-4e3f-92f8-b96678002f38\") " pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:39.909192 kubelet[2763]: I1031 02:13:39.907329 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1aba93ae-9569-4e3f-92f8-b96678002f38-socket-dir\") pod \"csi-node-driver-rsz7n\" (UID: \"1aba93ae-9569-4e3f-92f8-b96678002f38\") " pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:39.909192 kubelet[2763]: I1031 02:13:39.907373 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p6pw\" (UniqueName: \"kubernetes.io/projected/1aba93ae-9569-4e3f-92f8-b96678002f38-kube-api-access-2p6pw\") pod \"csi-node-driver-rsz7n\" (UID: \"1aba93ae-9569-4e3f-92f8-b96678002f38\") " pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:39.923541 kubelet[2763]: E1031 02:13:39.923319 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:39.923541 kubelet[2763]: W1031 02:13:39.923495 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:39.923939 kubelet[2763]: E1031 02:13:39.923669 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:39.938966 kubelet[2763]: E1031 02:13:39.938926 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:39.939926 kubelet[2763]: W1031 02:13:39.939192 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:39.939926 kubelet[2763]: E1031 02:13:39.939247 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:39.940298 kubelet[2763]: E1031 02:13:39.940275 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:39.940397 kubelet[2763]: W1031 02:13:39.940375 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:39.940524 kubelet[2763]: E1031 02:13:39.940502 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:39.995177 containerd[1506]: time="2025-10-31T02:13:39.994948297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:13:39.996691 containerd[1506]: time="2025-10-31T02:13:39.996359736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:13:39.996691 containerd[1506]: time="2025-10-31T02:13:39.996394731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:39.996691 containerd[1506]: time="2025-10-31T02:13:39.996606188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.008938 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.010281 kubelet[2763]: W1031 02:13:40.008970 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.008999 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.009330 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.010281 kubelet[2763]: W1031 02:13:40.009345 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.009361 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.009624 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.010281 kubelet[2763]: W1031 02:13:40.009638 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.009667 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.010281 kubelet[2763]: E1031 02:13:40.009997 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.011487 kubelet[2763]: W1031 02:13:40.010013 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.011487 kubelet[2763]: E1031 02:13:40.010045 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.011487 kubelet[2763]: E1031 02:13:40.010400 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.011487 kubelet[2763]: W1031 02:13:40.010415 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.011487 kubelet[2763]: E1031 02:13:40.010432 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.011487 kubelet[2763]: E1031 02:13:40.010803 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.011487 kubelet[2763]: W1031 02:13:40.010822 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.011487 kubelet[2763]: E1031 02:13:40.010842 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.012733 kubelet[2763]: E1031 02:13:40.012491 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.012733 kubelet[2763]: W1031 02:13:40.012512 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.012733 kubelet[2763]: E1031 02:13:40.012552 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.013607 kubelet[2763]: E1031 02:13:40.013002 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.013607 kubelet[2763]: W1031 02:13:40.013017 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.013607 kubelet[2763]: E1031 02:13:40.013034 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.013607 kubelet[2763]: E1031 02:13:40.013390 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.013607 kubelet[2763]: W1031 02:13:40.013405 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.013607 kubelet[2763]: E1031 02:13:40.013422 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.015069 kubelet[2763]: E1031 02:13:40.015048 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.015344 kubelet[2763]: W1031 02:13:40.015210 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.015344 kubelet[2763]: E1031 02:13:40.015248 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.016619 kubelet[2763]: E1031 02:13:40.016431 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.016619 kubelet[2763]: W1031 02:13:40.016451 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.016619 kubelet[2763]: E1031 02:13:40.016468 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.016975 kubelet[2763]: E1031 02:13:40.016862 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.016975 kubelet[2763]: W1031 02:13:40.016881 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.016975 kubelet[2763]: E1031 02:13:40.016899 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.017681 kubelet[2763]: E1031 02:13:40.017544 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.017681 kubelet[2763]: W1031 02:13:40.017579 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.017681 kubelet[2763]: E1031 02:13:40.017599 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.018303 kubelet[2763]: E1031 02:13:40.018095 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.018303 kubelet[2763]: W1031 02:13:40.018114 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.018303 kubelet[2763]: E1031 02:13:40.018131 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.019352 kubelet[2763]: E1031 02:13:40.018774 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.019352 kubelet[2763]: W1031 02:13:40.018794 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.019352 kubelet[2763]: E1031 02:13:40.018811 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.020129 containerd[1506]: time="2025-10-31T02:13:40.020073714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z895v,Uid:dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:40.020651 kubelet[2763]: E1031 02:13:40.020490 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.020651 kubelet[2763]: W1031 02:13:40.020511 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.020651 kubelet[2763]: E1031 02:13:40.020529 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.022350 kubelet[2763]: E1031 02:13:40.021783 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.022350 kubelet[2763]: W1031 02:13:40.021803 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.022350 kubelet[2763]: E1031 02:13:40.021821 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.022350 kubelet[2763]: E1031 02:13:40.022095 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.022350 kubelet[2763]: W1031 02:13:40.022110 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.022350 kubelet[2763]: E1031 02:13:40.022127 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.023031 kubelet[2763]: E1031 02:13:40.022825 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.023031 kubelet[2763]: W1031 02:13:40.022846 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.023031 kubelet[2763]: E1031 02:13:40.022863 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.024012 kubelet[2763]: E1031 02:13:40.023749 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.024012 kubelet[2763]: W1031 02:13:40.023770 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.024012 kubelet[2763]: E1031 02:13:40.023787 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.025067 kubelet[2763]: E1031 02:13:40.024907 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.025067 kubelet[2763]: W1031 02:13:40.024951 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.025067 kubelet[2763]: E1031 02:13:40.024994 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.026062 kubelet[2763]: E1031 02:13:40.026040 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.026344 kubelet[2763]: W1031 02:13:40.026143 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.026344 kubelet[2763]: E1031 02:13:40.026197 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.026908 kubelet[2763]: E1031 02:13:40.026886 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.027110 kubelet[2763]: W1031 02:13:40.027062 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.027451 kubelet[2763]: E1031 02:13:40.027324 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.028259 kubelet[2763]: E1031 02:13:40.028002 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.028259 kubelet[2763]: W1031 02:13:40.028041 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.028259 kubelet[2763]: E1031 02:13:40.028062 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:40.029303 kubelet[2763]: E1031 02:13:40.028611 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.029303 kubelet[2763]: W1031 02:13:40.028631 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.029303 kubelet[2763]: E1031 02:13:40.028648 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.066586 kubelet[2763]: E1031 02:13:40.066522 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:40.067198 kubelet[2763]: W1031 02:13:40.066960 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:40.067198 kubelet[2763]: E1031 02:13:40.067022 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:40.106616 systemd[1]: Started cri-containerd-c187e5a04216f506d5eb0f58f3a90fc7ded9318e2e4d081e26802e03488f1256.scope - libcontainer container c187e5a04216f506d5eb0f58f3a90fc7ded9318e2e4d081e26802e03488f1256. Oct 31 02:13:40.115138 containerd[1506]: time="2025-10-31T02:13:40.114226353Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:13:40.115138 containerd[1506]: time="2025-10-31T02:13:40.114369976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:13:40.115138 containerd[1506]: time="2025-10-31T02:13:40.114404285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:40.115138 containerd[1506]: time="2025-10-31T02:13:40.114633312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:13:40.160374 systemd[1]: Started cri-containerd-4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770.scope - libcontainer container 4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770. Oct 31 02:13:40.218867 containerd[1506]: time="2025-10-31T02:13:40.218465162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56876f869c-9m6hh,Uid:48048e83-b324-4191-a4a4-187f2ed18d92,Namespace:calico-system,Attempt:0,} returns sandbox id \"c187e5a04216f506d5eb0f58f3a90fc7ded9318e2e4d081e26802e03488f1256\"" Oct 31 02:13:40.224356 containerd[1506]: time="2025-10-31T02:13:40.223888549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 02:13:40.232096 containerd[1506]: time="2025-10-31T02:13:40.231776017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z895v,Uid:dc8f2e6a-e5f7-42da-9740-7b12c0c54ed1,Namespace:calico-system,Attempt:0,} returns sandbox id \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\"" Oct 31 02:13:41.612434 kubelet[2763]: E1031 02:13:41.612137 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:41.884835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1177351476.mount: Deactivated 
successfully. Oct 31 02:13:43.567076 containerd[1506]: time="2025-10-31T02:13:43.567000288Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:43.568774 containerd[1506]: time="2025-10-31T02:13:43.568711324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 02:13:43.569495 containerd[1506]: time="2025-10-31T02:13:43.569449418Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:43.574194 containerd[1506]: time="2025-10-31T02:13:43.573143533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:43.574523 containerd[1506]: time="2025-10-31T02:13:43.574482093Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.350517451s" Oct 31 02:13:43.574677 containerd[1506]: time="2025-10-31T02:13:43.574647339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 02:13:43.578062 containerd[1506]: time="2025-10-31T02:13:43.577390648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 02:13:43.610148 containerd[1506]: time="2025-10-31T02:13:43.610098892Z" level=info msg="CreateContainer within sandbox 
\"c187e5a04216f506d5eb0f58f3a90fc7ded9318e2e4d081e26802e03488f1256\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 02:13:43.612787 kubelet[2763]: E1031 02:13:43.612715 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:43.635567 containerd[1506]: time="2025-10-31T02:13:43.635409936Z" level=info msg="CreateContainer within sandbox \"c187e5a04216f506d5eb0f58f3a90fc7ded9318e2e4d081e26802e03488f1256\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c754ebda45d8a12c2b8fab9b1724ed6cf1d23bed429d54174625b2ae206b7540\"" Oct 31 02:13:43.639331 containerd[1506]: time="2025-10-31T02:13:43.637330457Z" level=info msg="StartContainer for \"c754ebda45d8a12c2b8fab9b1724ed6cf1d23bed429d54174625b2ae206b7540\"" Oct 31 02:13:43.721434 systemd[1]: Started cri-containerd-c754ebda45d8a12c2b8fab9b1724ed6cf1d23bed429d54174625b2ae206b7540.scope - libcontainer container c754ebda45d8a12c2b8fab9b1724ed6cf1d23bed429d54174625b2ae206b7540. 
Oct 31 02:13:43.812507 containerd[1506]: time="2025-10-31T02:13:43.812454399Z" level=info msg="StartContainer for \"c754ebda45d8a12c2b8fab9b1724ed6cf1d23bed429d54174625b2ae206b7540\" returns successfully" Oct 31 02:13:44.824617 kubelet[2763]: I1031 02:13:44.824470 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56876f869c-9m6hh" podStartSLOduration=2.470429588 podStartE2EDuration="5.823923556s" podCreationTimestamp="2025-10-31 02:13:39 +0000 UTC" firstStartedPulling="2025-10-31 02:13:40.222575579 +0000 UTC m=+29.972734649" lastFinishedPulling="2025-10-31 02:13:43.576069544 +0000 UTC m=+33.326228617" observedRunningTime="2025-10-31 02:13:44.823078735 +0000 UTC m=+34.573237824" watchObservedRunningTime="2025-10-31 02:13:44.823923556 +0000 UTC m=+34.574082638" Oct 31 02:13:44.848527 kubelet[2763]: E1031 02:13:44.848205 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.848527 kubelet[2763]: W1031 02:13:44.848286 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.848527 kubelet[2763]: E1031 02:13:44.848389 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.848905 kubelet[2763]: E1031 02:13:44.848784 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.848905 kubelet[2763]: W1031 02:13:44.848874 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.849117 kubelet[2763]: E1031 02:13:44.848910 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.849361 kubelet[2763]: E1031 02:13:44.849330 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.849361 kubelet[2763]: W1031 02:13:44.849355 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.849495 kubelet[2763]: E1031 02:13:44.849373 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.849875 kubelet[2763]: E1031 02:13:44.849852 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.849875 kubelet[2763]: W1031 02:13:44.849873 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.850032 kubelet[2763]: E1031 02:13:44.849890 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.850274 kubelet[2763]: E1031 02:13:44.850241 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.850371 kubelet[2763]: W1031 02:13:44.850274 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.850371 kubelet[2763]: E1031 02:13:44.850292 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.850605 kubelet[2763]: E1031 02:13:44.850586 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.850669 kubelet[2763]: W1031 02:13:44.850605 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.850669 kubelet[2763]: E1031 02:13:44.850622 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.850966 kubelet[2763]: E1031 02:13:44.850944 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.850966 kubelet[2763]: W1031 02:13:44.850964 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.851112 kubelet[2763]: E1031 02:13:44.850981 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.851379 kubelet[2763]: E1031 02:13:44.851358 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.851379 kubelet[2763]: W1031 02:13:44.851378 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.851699 kubelet[2763]: E1031 02:13:44.851395 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.851995 kubelet[2763]: E1031 02:13:44.851708 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.851995 kubelet[2763]: W1031 02:13:44.851723 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.851995 kubelet[2763]: E1031 02:13:44.851738 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.852335 kubelet[2763]: E1031 02:13:44.852061 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.852335 kubelet[2763]: W1031 02:13:44.852076 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.852335 kubelet[2763]: E1031 02:13:44.852105 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.852557 kubelet[2763]: E1031 02:13:44.852431 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.852557 kubelet[2763]: W1031 02:13:44.852446 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.852557 kubelet[2763]: E1031 02:13:44.852462 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.852753 kubelet[2763]: E1031 02:13:44.852729 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.852753 kubelet[2763]: W1031 02:13:44.852744 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.852903 kubelet[2763]: E1031 02:13:44.852760 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.853311 kubelet[2763]: E1031 02:13:44.853043 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.853311 kubelet[2763]: W1031 02:13:44.853061 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.853311 kubelet[2763]: E1031 02:13:44.853078 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.853522 kubelet[2763]: E1031 02:13:44.853396 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.853522 kubelet[2763]: W1031 02:13:44.853411 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.853522 kubelet[2763]: E1031 02:13:44.853427 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.853681 kubelet[2763]: E1031 02:13:44.853658 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.853681 kubelet[2763]: W1031 02:13:44.853679 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.854019 kubelet[2763]: E1031 02:13:44.853696 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.948870 kubelet[2763]: E1031 02:13:44.948827 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.948870 kubelet[2763]: W1031 02:13:44.948862 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.949277 kubelet[2763]: E1031 02:13:44.948892 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.949372 kubelet[2763]: E1031 02:13:44.949352 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.949372 kubelet[2763]: W1031 02:13:44.949368 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.949746 kubelet[2763]: E1031 02:13:44.949385 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.950139 kubelet[2763]: E1031 02:13:44.950111 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.950456 kubelet[2763]: W1031 02:13:44.950279 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.950456 kubelet[2763]: E1031 02:13:44.950331 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.951079 kubelet[2763]: E1031 02:13:44.950900 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.951079 kubelet[2763]: W1031 02:13:44.950920 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.951079 kubelet[2763]: E1031 02:13:44.950937 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.951545 kubelet[2763]: E1031 02:13:44.951407 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.951545 kubelet[2763]: W1031 02:13:44.951426 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.951545 kubelet[2763]: E1031 02:13:44.951443 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.952134 kubelet[2763]: E1031 02:13:44.951959 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.952134 kubelet[2763]: W1031 02:13:44.951978 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.952134 kubelet[2763]: E1031 02:13:44.951995 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.952562 kubelet[2763]: E1031 02:13:44.952428 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.952562 kubelet[2763]: W1031 02:13:44.952450 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.952562 kubelet[2763]: E1031 02:13:44.952468 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.953200 kubelet[2763]: E1031 02:13:44.953054 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.953200 kubelet[2763]: W1031 02:13:44.953073 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.953200 kubelet[2763]: E1031 02:13:44.953089 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.953843 kubelet[2763]: E1031 02:13:44.953679 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.953843 kubelet[2763]: W1031 02:13:44.953698 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.953843 kubelet[2763]: E1031 02:13:44.953715 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.954463 kubelet[2763]: E1031 02:13:44.954226 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.954463 kubelet[2763]: W1031 02:13:44.954242 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.954463 kubelet[2763]: E1031 02:13:44.954258 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.955061 kubelet[2763]: E1031 02:13:44.954948 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.955061 kubelet[2763]: W1031 02:13:44.954968 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.955061 kubelet[2763]: E1031 02:13:44.954985 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.955765 kubelet[2763]: E1031 02:13:44.955545 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.955765 kubelet[2763]: W1031 02:13:44.955565 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.955765 kubelet[2763]: E1031 02:13:44.955583 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.956174 kubelet[2763]: E1031 02:13:44.956050 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.956174 kubelet[2763]: W1031 02:13:44.956070 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.956174 kubelet[2763]: E1031 02:13:44.956087 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.956447 kubelet[2763]: E1031 02:13:44.956424 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.956519 kubelet[2763]: W1031 02:13:44.956450 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.956519 kubelet[2763]: E1031 02:13:44.956471 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.956879 kubelet[2763]: E1031 02:13:44.956844 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.956879 kubelet[2763]: W1031 02:13:44.956871 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.957029 kubelet[2763]: E1031 02:13:44.956890 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.958224 kubelet[2763]: E1031 02:13:44.957509 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.958224 kubelet[2763]: W1031 02:13:44.957528 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.958224 kubelet[2763]: E1031 02:13:44.957546 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:44.958224 kubelet[2763]: E1031 02:13:44.957951 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.958224 kubelet[2763]: W1031 02:13:44.957972 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.958224 kubelet[2763]: E1031 02:13:44.957997 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 02:13:44.959623 kubelet[2763]: E1031 02:13:44.959600 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 02:13:44.959623 kubelet[2763]: W1031 02:13:44.959622 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 02:13:44.959760 kubelet[2763]: E1031 02:13:44.959641 2763 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 02:13:45.358254 containerd[1506]: time="2025-10-31T02:13:45.358029932Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:45.360545 containerd[1506]: time="2025-10-31T02:13:45.360200124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 02:13:45.361498 containerd[1506]: time="2025-10-31T02:13:45.361458615Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:45.365149 containerd[1506]: time="2025-10-31T02:13:45.365106805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:45.366608 containerd[1506]: time="2025-10-31T02:13:45.366559321Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.789121814s" Oct 31 02:13:45.366734 containerd[1506]: time="2025-10-31T02:13:45.366611136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 02:13:45.373698 containerd[1506]: time="2025-10-31T02:13:45.373648017Z" level=info msg="CreateContainer within sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 02:13:45.405143 containerd[1506]: time="2025-10-31T02:13:45.405019873Z" level=info msg="CreateContainer within sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684\"" Oct 31 02:13:45.407281 containerd[1506]: time="2025-10-31T02:13:45.407112691Z" level=info msg="StartContainer for \"b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684\"" Oct 31 02:13:45.479428 systemd[1]: Started cri-containerd-b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684.scope - libcontainer container b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684. Oct 31 02:13:45.527944 containerd[1506]: time="2025-10-31T02:13:45.527888318Z" level=info msg="StartContainer for \"b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684\" returns successfully" Oct 31 02:13:45.551103 systemd[1]: cri-containerd-b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684.scope: Deactivated successfully. Oct 31 02:13:45.614023 kubelet[2763]: E1031 02:13:45.613304 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:45.618694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684-rootfs.mount: Deactivated successfully. 
Oct 31 02:13:45.808517 kubelet[2763]: I1031 02:13:45.808388 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 02:13:45.827673 containerd[1506]: time="2025-10-31T02:13:45.810038595Z" level=info msg="shim disconnected" id=b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684 namespace=k8s.io Oct 31 02:13:45.827673 containerd[1506]: time="2025-10-31T02:13:45.827607247Z" level=warning msg="cleaning up after shim disconnected" id=b07c381f6689b3cb9a65f044596900fa9abb51b7aa369f0ee11cae0a8ab93684 namespace=k8s.io Oct 31 02:13:45.828301 containerd[1506]: time="2025-10-31T02:13:45.827844488Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 02:13:46.814555 containerd[1506]: time="2025-10-31T02:13:46.814440121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 02:13:47.612260 kubelet[2763]: E1031 02:13:47.611965 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:49.614678 kubelet[2763]: E1031 02:13:49.614451 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:51.611955 kubelet[2763]: E1031 02:13:51.611857 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 
02:13:51.647516 containerd[1506]: time="2025-10-31T02:13:51.647395946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:51.649181 containerd[1506]: time="2025-10-31T02:13:51.649068340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 02:13:51.651192 containerd[1506]: time="2025-10-31T02:13:51.649667685Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:51.656808 containerd[1506]: time="2025-10-31T02:13:51.656758569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:13:51.658300 containerd[1506]: time="2025-10-31T02:13:51.658244523Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.84371981s" Oct 31 02:13:51.658391 containerd[1506]: time="2025-10-31T02:13:51.658323111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 02:13:51.666325 containerd[1506]: time="2025-10-31T02:13:51.666280474Z" level=info msg="CreateContainer within sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 02:13:51.707738 containerd[1506]: time="2025-10-31T02:13:51.707679811Z" level=info msg="CreateContainer within 
sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443\"" Oct 31 02:13:51.709507 containerd[1506]: time="2025-10-31T02:13:51.709402157Z" level=info msg="StartContainer for \"082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443\"" Oct 31 02:13:51.806438 systemd[1]: Started cri-containerd-082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443.scope - libcontainer container 082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443. Oct 31 02:13:51.883357 containerd[1506]: time="2025-10-31T02:13:51.882997327Z" level=info msg="StartContainer for \"082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443\" returns successfully" Oct 31 02:13:52.302113 kubelet[2763]: I1031 02:13:52.301755 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 02:13:53.068994 systemd[1]: cri-containerd-082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443.scope: Deactivated successfully. Oct 31 02:13:53.133472 kubelet[2763]: I1031 02:13:53.127132 2763 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 31 02:13:53.140460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443-rootfs.mount: Deactivated successfully. 
Oct 31 02:13:53.168713 containerd[1506]: time="2025-10-31T02:13:53.167605341Z" level=info msg="shim disconnected" id=082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443 namespace=k8s.io Oct 31 02:13:53.170245 containerd[1506]: time="2025-10-31T02:13:53.168708484Z" level=warning msg="cleaning up after shim disconnected" id=082b587c80421a85931ed5488f8ec2b730ac9df6525a8c223b0f03f1daffe443 namespace=k8s.io Oct 31 02:13:53.170245 containerd[1506]: time="2025-10-31T02:13:53.168760601Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 02:13:53.360833 systemd[1]: Created slice kubepods-burstable-podc8e6d4c7_57e9_4902_bae1_886c53b818d8.slice - libcontainer container kubepods-burstable-podc8e6d4c7_57e9_4902_bae1_886c53b818d8.slice. Oct 31 02:13:53.381605 systemd[1]: Created slice kubepods-besteffort-pode4c39b9a_a5c9_405f_a471_262b649fbc6a.slice - libcontainer container kubepods-besteffort-pode4c39b9a_a5c9_405f_a471_262b649fbc6a.slice. Oct 31 02:13:53.396098 systemd[1]: Created slice kubepods-burstable-pod71d3c28c_c709_4960_8b43_030748d0a3ca.slice - libcontainer container kubepods-burstable-pod71d3c28c_c709_4960_8b43_030748d0a3ca.slice. Oct 31 02:13:53.407393 systemd[1]: Created slice kubepods-besteffort-pod8c099e8c_e833_4a6d_9d15_b2b6ba86bb9d.slice - libcontainer container kubepods-besteffort-pod8c099e8c_e833_4a6d_9d15_b2b6ba86bb9d.slice. Oct 31 02:13:53.425907 systemd[1]: Created slice kubepods-besteffort-pode8cd4f39_3f1e_47f1_8de2_399f0cec4257.slice - libcontainer container kubepods-besteffort-pode8cd4f39_3f1e_47f1_8de2_399f0cec4257.slice. Oct 31 02:13:53.439291 systemd[1]: Created slice kubepods-besteffort-pod383b1d33_d54b_4a00_801a_8a36f78ff190.slice - libcontainer container kubepods-besteffort-pod383b1d33_d54b_4a00_801a_8a36f78ff190.slice. Oct 31 02:13:53.450842 systemd[1]: Created slice kubepods-besteffort-pod5c5691c8_bb57_4400_82c8_d0c76d156189.slice - libcontainer container kubepods-besteffort-pod5c5691c8_bb57_4400_82c8_d0c76d156189.slice. 
Oct 31 02:13:53.463836 kubelet[2763]: I1031 02:13:53.463380 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8e6d4c7-57e9-4902-bae1-886c53b818d8-config-volume\") pod \"coredns-674b8bbfcf-thfzc\" (UID: \"c8e6d4c7-57e9-4902-bae1-886c53b818d8\") " pod="kube-system/coredns-674b8bbfcf-thfzc" Oct 31 02:13:53.465203 kubelet[2763]: I1031 02:13:53.464947 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zkqk\" (UniqueName: \"kubernetes.io/projected/8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d-kube-api-access-8zkqk\") pod \"calico-apiserver-c48557b4b-xk5jv\" (UID: \"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d\") " pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" Oct 31 02:13:53.465203 kubelet[2763]: I1031 02:13:53.465011 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/71d3c28c-c709-4960-8b43-030748d0a3ca-config-volume\") pod \"coredns-674b8bbfcf-jbl2g\" (UID: \"71d3c28c-c709-4960-8b43-030748d0a3ca\") " pod="kube-system/coredns-674b8bbfcf-jbl2g" Oct 31 02:13:53.465203 kubelet[2763]: I1031 02:13:53.465046 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-backend-key-pair\") pod \"whisker-979f7c865-m2xgg\" (UID: \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " pod="calico-system/whisker-979f7c865-m2xgg" Oct 31 02:13:53.465203 kubelet[2763]: I1031 02:13:53.465078 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96scz\" (UniqueName: \"kubernetes.io/projected/e4c39b9a-a5c9-405f-a471-262b649fbc6a-kube-api-access-96scz\") pod \"whisker-979f7c865-m2xgg\" (UID: 
\"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " pod="calico-system/whisker-979f7c865-m2xgg" Oct 31 02:13:53.465203 kubelet[2763]: I1031 02:13:53.465114 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkmxc\" (UniqueName: \"kubernetes.io/projected/71d3c28c-c709-4960-8b43-030748d0a3ca-kube-api-access-jkmxc\") pod \"coredns-674b8bbfcf-jbl2g\" (UID: \"71d3c28c-c709-4960-8b43-030748d0a3ca\") " pod="kube-system/coredns-674b8bbfcf-jbl2g" Oct 31 02:13:53.465822 kubelet[2763]: I1031 02:13:53.465593 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbz9r\" (UniqueName: \"kubernetes.io/projected/c8e6d4c7-57e9-4902-bae1-886c53b818d8-kube-api-access-jbz9r\") pod \"coredns-674b8bbfcf-thfzc\" (UID: \"c8e6d4c7-57e9-4902-bae1-886c53b818d8\") " pod="kube-system/coredns-674b8bbfcf-thfzc" Oct 31 02:13:53.465822 kubelet[2763]: I1031 02:13:53.465636 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d-calico-apiserver-certs\") pod \"calico-apiserver-c48557b4b-xk5jv\" (UID: \"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d\") " pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" Oct 31 02:13:53.465822 kubelet[2763]: I1031 02:13:53.465678 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-ca-bundle\") pod \"whisker-979f7c865-m2xgg\" (UID: \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " pod="calico-system/whisker-979f7c865-m2xgg" Oct 31 02:13:53.566992 kubelet[2763]: I1031 02:13:53.566932 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bkb\" (UniqueName: 
\"kubernetes.io/projected/5c5691c8-bb57-4400-82c8-d0c76d156189-kube-api-access-69bkb\") pod \"calico-kube-controllers-b84756f78-vnktk\" (UID: \"5c5691c8-bb57-4400-82c8-d0c76d156189\") " pod="calico-system/calico-kube-controllers-b84756f78-vnktk" Oct 31 02:13:53.567226 kubelet[2763]: I1031 02:13:53.566997 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8cd4f39-3f1e-47f1-8de2-399f0cec4257-calico-apiserver-certs\") pod \"calico-apiserver-c48557b4b-ts64b\" (UID: \"e8cd4f39-3f1e-47f1-8de2-399f0cec4257\") " pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" Oct 31 02:13:53.567226 kubelet[2763]: I1031 02:13:53.567054 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59lsh\" (UniqueName: \"kubernetes.io/projected/e8cd4f39-3f1e-47f1-8de2-399f0cec4257-kube-api-access-59lsh\") pod \"calico-apiserver-c48557b4b-ts64b\" (UID: \"e8cd4f39-3f1e-47f1-8de2-399f0cec4257\") " pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" Oct 31 02:13:53.567226 kubelet[2763]: I1031 02:13:53.567092 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/383b1d33-d54b-4a00-801a-8a36f78ff190-config\") pod \"goldmane-666569f655-5f5wx\" (UID: \"383b1d33-d54b-4a00-801a-8a36f78ff190\") " pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:53.567226 kubelet[2763]: I1031 02:13:53.567121 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/383b1d33-d54b-4a00-801a-8a36f78ff190-goldmane-key-pair\") pod \"goldmane-666569f655-5f5wx\" (UID: \"383b1d33-d54b-4a00-801a-8a36f78ff190\") " pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:53.567226 kubelet[2763]: I1031 02:13:53.567219 2763 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/383b1d33-d54b-4a00-801a-8a36f78ff190-goldmane-ca-bundle\") pod \"goldmane-666569f655-5f5wx\" (UID: \"383b1d33-d54b-4a00-801a-8a36f78ff190\") " pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:53.567570 kubelet[2763]: I1031 02:13:53.567256 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c5691c8-bb57-4400-82c8-d0c76d156189-tigera-ca-bundle\") pod \"calico-kube-controllers-b84756f78-vnktk\" (UID: \"5c5691c8-bb57-4400-82c8-d0c76d156189\") " pod="calico-system/calico-kube-controllers-b84756f78-vnktk" Oct 31 02:13:53.567570 kubelet[2763]: I1031 02:13:53.567393 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbvnq\" (UniqueName: \"kubernetes.io/projected/383b1d33-d54b-4a00-801a-8a36f78ff190-kube-api-access-wbvnq\") pod \"goldmane-666569f655-5f5wx\" (UID: \"383b1d33-d54b-4a00-801a-8a36f78ff190\") " pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:53.635436 systemd[1]: Created slice kubepods-besteffort-pod1aba93ae_9569_4e3f_92f8_b96678002f38.slice - libcontainer container kubepods-besteffort-pod1aba93ae_9569_4e3f_92f8_b96678002f38.slice. 
Oct 31 02:13:53.651126 containerd[1506]: time="2025-10-31T02:13:53.651016435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsz7n,Uid:1aba93ae-9569-4e3f-92f8-b96678002f38,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:53.674429 containerd[1506]: time="2025-10-31T02:13:53.674385705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thfzc,Uid:c8e6d4c7-57e9-4902-bae1-886c53b818d8,Namespace:kube-system,Attempt:0,}" Oct 31 02:13:53.698441 containerd[1506]: time="2025-10-31T02:13:53.692145632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-979f7c865-m2xgg,Uid:e4c39b9a-a5c9-405f-a471-262b649fbc6a,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:53.711046 containerd[1506]: time="2025-10-31T02:13:53.710993692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jbl2g,Uid:71d3c28c-c709-4960-8b43-030748d0a3ca,Namespace:kube-system,Attempt:0,}" Oct 31 02:13:53.720600 containerd[1506]: time="2025-10-31T02:13:53.720417325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-xk5jv,Uid:8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d,Namespace:calico-apiserver,Attempt:0,}" Oct 31 02:13:53.778149 containerd[1506]: time="2025-10-31T02:13:53.778096389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b84756f78-vnktk,Uid:5c5691c8-bb57-4400-82c8-d0c76d156189,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:53.790065 containerd[1506]: time="2025-10-31T02:13:53.780013461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-ts64b,Uid:e8cd4f39-3f1e-47f1-8de2-399f0cec4257,Namespace:calico-apiserver,Attempt:0,}" Oct 31 02:13:53.796601 containerd[1506]: time="2025-10-31T02:13:53.780085823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5f5wx,Uid:383b1d33-d54b-4a00-801a-8a36f78ff190,Namespace:calico-system,Attempt:0,}" Oct 31 02:13:53.863960 containerd[1506]: 
time="2025-10-31T02:13:53.863878473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 02:13:54.286011 containerd[1506]: time="2025-10-31T02:13:54.285018429Z" level=error msg="Failed to destroy network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.292386 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0-shm.mount: Deactivated successfully. Oct 31 02:13:54.296099 containerd[1506]: time="2025-10-31T02:13:54.293357520Z" level=error msg="Failed to destroy network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.298326 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074-shm.mount: Deactivated successfully. 
Oct 31 02:13:54.313276 containerd[1506]: time="2025-10-31T02:13:54.313210164Z" level=error msg="encountered an error cleaning up failed sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.313449 containerd[1506]: time="2025-10-31T02:13:54.313374457Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jbl2g,Uid:71d3c28c-c709-4960-8b43-030748d0a3ca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.313683 containerd[1506]: time="2025-10-31T02:13:54.313644324Z" level=error msg="encountered an error cleaning up failed sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.313773 containerd[1506]: time="2025-10-31T02:13:54.313709719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5f5wx,Uid:383b1d33-d54b-4a00-801a-8a36f78ff190,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.320472 
kubelet[2763]: E1031 02:13:54.320181 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.321502 kubelet[2763]: E1031 02:13:54.320172 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.324431 kubelet[2763]: E1031 02:13:54.323239 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jbl2g" Oct 31 02:13:54.324431 kubelet[2763]: E1031 02:13:54.324020 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:54.326641 containerd[1506]: time="2025-10-31T02:13:54.326439366Z" level=error msg="Failed to destroy network for sandbox 
\"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.330184 containerd[1506]: time="2025-10-31T02:13:54.328695306Z" level=error msg="encountered an error cleaning up failed sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.330184 containerd[1506]: time="2025-10-31T02:13:54.328770472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-979f7c865-m2xgg,Uid:e4c39b9a-a5c9-405f-a471-262b649fbc6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.330449 kubelet[2763]: E1031 02:13:54.330397 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-jbl2g" Oct 31 02:13:54.331243 kubelet[2763]: E1031 02:13:54.331207 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5f5wx" Oct 31 02:13:54.331504 kubelet[2763]: E1031 02:13:54.331426 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:13:54.334180 kubelet[2763]: E1031 02:13:54.331315 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jbl2g_kube-system(71d3c28c-c709-4960-8b43-030748d0a3ca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jbl2g_kube-system(71d3c28c-c709-4960-8b43-030748d0a3ca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jbl2g" podUID="71d3c28c-c709-4960-8b43-030748d0a3ca" Oct 31 02:13:54.334309 containerd[1506]: time="2025-10-31T02:13:54.334049008Z" level=error msg="Failed 
to destroy network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.331870 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04-shm.mount: Deactivated successfully. Oct 31 02:13:54.338074 containerd[1506]: time="2025-10-31T02:13:54.337004353Z" level=error msg="encountered an error cleaning up failed sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.339202 kubelet[2763]: E1031 02:13:54.336634 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.339202 kubelet[2763]: E1031 02:13:54.337101 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-979f7c865-m2xgg" Oct 31 02:13:54.339202 kubelet[2763]: E1031 02:13:54.337811 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-979f7c865-m2xgg" Oct 31 02:13:54.338902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9-shm.mount: Deactivated successfully. Oct 31 02:13:54.339531 kubelet[2763]: E1031 02:13:54.337945 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-979f7c865-m2xgg_calico-system(e4c39b9a-a5c9-405f-a471-262b649fbc6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-979f7c865-m2xgg_calico-system(e4c39b9a-a5c9-405f-a471-262b649fbc6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-979f7c865-m2xgg" podUID="e4c39b9a-a5c9-405f-a471-262b649fbc6a" Oct 31 02:13:54.341121 containerd[1506]: time="2025-10-31T02:13:54.340725010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thfzc,Uid:c8e6d4c7-57e9-4902-bae1-886c53b818d8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.341787 kubelet[2763]: E1031 02:13:54.341748 2763 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.342038 kubelet[2763]: E1031 02:13:54.342002 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-thfzc" Oct 31 02:13:54.342506 kubelet[2763]: E1031 02:13:54.342044 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-thfzc" Oct 31 02:13:54.342506 kubelet[2763]: E1031 02:13:54.342128 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-thfzc_kube-system(c8e6d4c7-57e9-4902-bae1-886c53b818d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-thfzc_kube-system(c8e6d4c7-57e9-4902-bae1-886c53b818d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-thfzc" podUID="c8e6d4c7-57e9-4902-bae1-886c53b818d8" Oct 31 02:13:54.342668 containerd[1506]: time="2025-10-31T02:13:54.342430019Z" level=error msg="Failed to destroy network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.346184 containerd[1506]: time="2025-10-31T02:13:54.345599169Z" level=error msg="encountered an error cleaning up failed sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.346184 containerd[1506]: time="2025-10-31T02:13:54.345849861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsz7n,Uid:1aba93ae-9569-4e3f-92f8-b96678002f38,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.348605 kubelet[2763]: E1031 02:13:54.347532 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.348605 kubelet[2763]: E1031 02:13:54.348225 2763 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:54.348605 kubelet[2763]: E1031 02:13:54.348257 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rsz7n" Oct 31 02:13:54.348816 kubelet[2763]: E1031 02:13:54.348338 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:54.352229 containerd[1506]: time="2025-10-31T02:13:54.352189615Z" level=error msg="Failed to destroy network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.353230 containerd[1506]: time="2025-10-31T02:13:54.352862892Z" level=error msg="encountered an error cleaning up failed sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.353230 containerd[1506]: time="2025-10-31T02:13:54.352967014Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-ts64b,Uid:e8cd4f39-3f1e-47f1-8de2-399f0cec4257,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.353681 kubelet[2763]: E1031 02:13:54.353421 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.353793 kubelet[2763]: E1031 02:13:54.353702 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" Oct 31 02:13:54.353903 kubelet[2763]: E1031 02:13:54.353778 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" Oct 31 02:13:54.354317 kubelet[2763]: E1031 02:13:54.353933 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:13:54.359765 containerd[1506]: time="2025-10-31T02:13:54.359333632Z" level=error msg="Failed to destroy network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.361824 containerd[1506]: time="2025-10-31T02:13:54.361780516Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.361945 containerd[1506]: time="2025-10-31T02:13:54.361855897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-xk5jv,Uid:8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.362817 kubelet[2763]: E1031 02:13:54.362205 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.362969 kubelet[2763]: E1031 02:13:54.362842 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" Oct 31 02:13:54.362969 kubelet[2763]: E1031 02:13:54.362871 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" Oct 31 02:13:54.362969 kubelet[2763]: E1031 02:13:54.362934 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c48557b4b-xk5jv_calico-apiserver(8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c48557b4b-xk5jv_calico-apiserver(8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:13:54.383209 containerd[1506]: time="2025-10-31T02:13:54.382998220Z" level=error msg="Failed to destroy network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.385369 containerd[1506]: time="2025-10-31T02:13:54.385213545Z" level=error msg="encountered an error cleaning up failed sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 
02:13:54.385369 containerd[1506]: time="2025-10-31T02:13:54.385280449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b84756f78-vnktk,Uid:5c5691c8-bb57-4400-82c8-d0c76d156189,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.385566 kubelet[2763]: E1031 02:13:54.385509 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:54.385627 kubelet[2763]: E1031 02:13:54.385571 2763 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" Oct 31 02:13:54.385627 kubelet[2763]: E1031 02:13:54.385602 2763 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-b84756f78-vnktk" Oct 31 02:13:54.385776 kubelet[2763]: E1031 02:13:54.385663 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:13:54.861720 kubelet[2763]: I1031 02:13:54.861670 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:13:54.864654 kubelet[2763]: I1031 02:13:54.863960 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:13:54.880682 kubelet[2763]: I1031 02:13:54.880628 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:13:54.883942 kubelet[2763]: I1031 02:13:54.883231 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:13:54.885233 kubelet[2763]: I1031 02:13:54.885208 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 
02:13:54.886979 kubelet[2763]: I1031 02:13:54.886952 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:13:54.892851 kubelet[2763]: I1031 02:13:54.891569 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:13:54.894838 kubelet[2763]: I1031 02:13:54.894811 2763 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:13:54.908481 containerd[1506]: time="2025-10-31T02:13:54.908385186Z" level=info msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" Oct 31 02:13:54.910544 containerd[1506]: time="2025-10-31T02:13:54.910508789Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:13:54.914417 containerd[1506]: time="2025-10-31T02:13:54.914375057Z" level=info msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" Oct 31 02:13:54.914675 containerd[1506]: time="2025-10-31T02:13:54.914632157Z" level=info msg="Ensure that sandbox fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde in task-service has been cleanup successfully" Oct 31 02:13:54.919979 containerd[1506]: time="2025-10-31T02:13:54.919628701Z" level=info msg="Ensure that sandbox c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0 in task-service has been cleanup successfully" Oct 31 02:13:54.922219 containerd[1506]: time="2025-10-31T02:13:54.921916430Z" level=info msg="Ensure that sandbox 1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04 in task-service has been cleanup successfully" Oct 31 02:13:54.922390 containerd[1506]: time="2025-10-31T02:13:54.908732227Z" level=info msg="StopPodSandbox for 
\"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" Oct 31 02:13:54.922787 containerd[1506]: time="2025-10-31T02:13:54.922755328Z" level=info msg="Ensure that sandbox 7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9 in task-service has been cleanup successfully" Oct 31 02:13:54.935315 containerd[1506]: time="2025-10-31T02:13:54.908812783Z" level=info msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" Oct 31 02:13:54.938215 containerd[1506]: time="2025-10-31T02:13:54.938147956Z" level=info msg="Ensure that sandbox 0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656 in task-service has been cleanup successfully" Oct 31 02:13:54.938753 containerd[1506]: time="2025-10-31T02:13:54.908883875Z" level=info msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" Oct 31 02:13:54.939889 containerd[1506]: time="2025-10-31T02:13:54.939798041Z" level=info msg="Ensure that sandbox 9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074 in task-service has been cleanup successfully" Oct 31 02:13:54.943870 containerd[1506]: time="2025-10-31T02:13:54.908967364Z" level=info msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" Oct 31 02:13:54.944142 containerd[1506]: time="2025-10-31T02:13:54.909012875Z" level=info msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" Oct 31 02:13:54.946819 containerd[1506]: time="2025-10-31T02:13:54.946675122Z" level=info msg="Ensure that sandbox d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66 in task-service has been cleanup successfully" Oct 31 02:13:54.957623 containerd[1506]: time="2025-10-31T02:13:54.949537398Z" level=info msg="Ensure that sandbox 09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d in task-service has been cleanup successfully" Oct 31 02:13:55.045493 containerd[1506]: 
time="2025-10-31T02:13:55.044881085Z" level=error msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" failed" error="failed to destroy network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.045684 kubelet[2763]: E1031 02:13:55.045315 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:13:55.045684 kubelet[2763]: E1031 02:13:55.045394 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde"} Oct 31 02:13:55.045684 kubelet[2763]: E1031 02:13:55.045534 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5c5691c8-bb57-4400-82c8-d0c76d156189\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.045684 kubelet[2763]: E1031 02:13:55.045588 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5c5691c8-bb57-4400-82c8-d0c76d156189\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:13:55.058743 containerd[1506]: time="2025-10-31T02:13:55.058551273Z" level=error msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" failed" error="failed to destroy network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.060102 kubelet[2763]: E1031 02:13:55.060001 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:13:55.060102 kubelet[2763]: E1031 02:13:55.060079 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0"} Oct 31 02:13:55.060433 kubelet[2763]: E1031 02:13:55.060144 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71d3c28c-c709-4960-8b43-030748d0a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.060433 kubelet[2763]: E1031 02:13:55.060226 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71d3c28c-c709-4960-8b43-030748d0a3ca\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-jbl2g" podUID="71d3c28c-c709-4960-8b43-030748d0a3ca" Oct 31 02:13:55.090549 containerd[1506]: time="2025-10-31T02:13:55.090295521Z" level=error msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" failed" error="failed to destroy network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.090898 kubelet[2763]: E1031 02:13:55.090647 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:13:55.090898 kubelet[2763]: E1031 02:13:55.090730 2763 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d"} Oct 31 02:13:55.090898 kubelet[2763]: E1031 02:13:55.090783 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8cd4f39-3f1e-47f1-8de2-399f0cec4257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.090898 kubelet[2763]: E1031 02:13:55.090818 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8cd4f39-3f1e-47f1-8de2-399f0cec4257\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:13:55.110507 containerd[1506]: time="2025-10-31T02:13:55.110431068Z" level=error msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" failed" error="failed to destroy network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.110822 kubelet[2763]: E1031 02:13:55.110749 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to destroy network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:13:55.110977 kubelet[2763]: E1031 02:13:55.110823 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656"} Oct 31 02:13:55.110977 kubelet[2763]: E1031 02:13:55.110872 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.110977 kubelet[2763]: E1031 02:13:55.110907 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:13:55.120720 containerd[1506]: time="2025-10-31T02:13:55.120587114Z" level=error msg="StopPodSandbox for 
\"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" failed" error="failed to destroy network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.121383 kubelet[2763]: E1031 02:13:55.121336 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:13:55.121508 kubelet[2763]: E1031 02:13:55.121402 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9"} Oct 31 02:13:55.121508 kubelet[2763]: E1031 02:13:55.121456 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8e6d4c7-57e9-4902-bae1-886c53b818d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.121810 kubelet[2763]: E1031 02:13:55.121504 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8e6d4c7-57e9-4902-bae1-886c53b818d8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-thfzc" podUID="c8e6d4c7-57e9-4902-bae1-886c53b818d8" Oct 31 02:13:55.137547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde-shm.mount: Deactivated successfully. Oct 31 02:13:55.138150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d-shm.mount: Deactivated successfully. Oct 31 02:13:55.138598 containerd[1506]: time="2025-10-31T02:13:55.138265891Z" level=error msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" failed" error="failed to destroy network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.138695 kubelet[2763]: E1031 02:13:55.138601 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:13:55.138695 kubelet[2763]: E1031 02:13:55.138667 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04"} Oct 31 02:13:55.138831 
kubelet[2763]: E1031 02:13:55.138719 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.138831 kubelet[2763]: E1031 02:13:55.138765 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-979f7c865-m2xgg" podUID="e4c39b9a-a5c9-405f-a471-262b649fbc6a" Oct 31 02:13:55.138921 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656-shm.mount: Deactivated successfully. Oct 31 02:13:55.139064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66-shm.mount: Deactivated successfully. 
Oct 31 02:13:55.139949 containerd[1506]: time="2025-10-31T02:13:55.139287862Z" level=error msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" failed" error="failed to destroy network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.140255 kubelet[2763]: E1031 02:13:55.139518 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:13:55.140255 kubelet[2763]: E1031 02:13:55.139587 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66"} Oct 31 02:13:55.140255 kubelet[2763]: E1031 02:13:55.139622 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1aba93ae-9569-4e3f-92f8-b96678002f38\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.140255 kubelet[2763]: E1031 02:13:55.139655 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1aba93ae-9569-4e3f-92f8-b96678002f38\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:13:55.142459 containerd[1506]: time="2025-10-31T02:13:55.142385410Z" level=error msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" failed" error="failed to destroy network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 02:13:55.142801 kubelet[2763]: E1031 02:13:55.142737 2763 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:13:55.142801 kubelet[2763]: E1031 02:13:55.142776 2763 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074"} Oct 31 02:13:55.143002 kubelet[2763]: E1031 02:13:55.142824 2763 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"383b1d33-d54b-4a00-801a-8a36f78ff190\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 02:13:55.143002 kubelet[2763]: E1031 02:13:55.142853 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"383b1d33-d54b-4a00-801a-8a36f78ff190\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:14:05.474050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582508748.mount: Deactivated successfully. 
Oct 31 02:14:05.594082 containerd[1506]: time="2025-10-31T02:14:05.593086162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 02:14:05.623236 containerd[1506]: time="2025-10-31T02:14:05.623082103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:14:05.656099 containerd[1506]: time="2025-10-31T02:14:05.656024235Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.787810437s" Oct 31 02:14:05.656273 containerd[1506]: time="2025-10-31T02:14:05.656101955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 02:14:05.657612 containerd[1506]: time="2025-10-31T02:14:05.657575390Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:14:05.658625 containerd[1506]: time="2025-10-31T02:14:05.658578663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 02:14:05.725248 containerd[1506]: time="2025-10-31T02:14:05.724998057Z" level=info msg="CreateContainer within sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 02:14:05.796958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675515579.mount: 
Deactivated successfully. Oct 31 02:14:05.806621 containerd[1506]: time="2025-10-31T02:14:05.806553239Z" level=info msg="CreateContainer within sandbox \"4225f05525b122be12d426f5c17852b40a4685a341bc2a86e62243a34718d770\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734\"" Oct 31 02:14:05.809361 containerd[1506]: time="2025-10-31T02:14:05.807751746Z" level=info msg="StartContainer for \"5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734\"" Oct 31 02:14:05.956398 systemd[1]: Started cri-containerd-5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734.scope - libcontainer container 5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734. Oct 31 02:14:06.038863 containerd[1506]: time="2025-10-31T02:14:06.038633588Z" level=info msg="StartContainer for \"5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734\" returns successfully" Oct 31 02:14:06.389208 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 02:14:06.419703 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 31 02:14:06.624598 containerd[1506]: time="2025-10-31T02:14:06.623007469Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:14:06.644078 containerd[1506]: time="2025-10-31T02:14:06.642962099Z" level=info msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" Oct 31 02:14:07.053332 kubelet[2763]: I1031 02:14:07.048373 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z895v" podStartSLOduration=2.6071079619999997 podStartE2EDuration="28.045742487s" podCreationTimestamp="2025-10-31 02:13:39 +0000 UTC" firstStartedPulling="2025-10-31 02:13:40.23926053 +0000 UTC m=+29.989419595" lastFinishedPulling="2025-10-31 02:14:05.677895043 +0000 UTC m=+55.428054120" observedRunningTime="2025-10-31 02:14:07.043688208 +0000 UTC m=+56.793847284" watchObservedRunningTime="2025-10-31 02:14:07.045742487 +0000 UTC m=+56.795901567" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.859 [INFO][3944] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.860 [INFO][3944] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" iface="eth0" netns="/var/run/netns/cni-36f0f121-610f-adf3-e29c-401a54a0391c" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3944] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" iface="eth0" netns="/var/run/netns/cni-36f0f121-610f-adf3-e29c-401a54a0391c" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3944] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" iface="eth0" netns="/var/run/netns/cni-36f0f121-610f-adf3-e29c-401a54a0391c" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3944] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3944] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.157 [INFO][3965] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.161 [INFO][3965] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.162 [INFO][3965] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.226 [WARNING][3965] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.226 [INFO][3965] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.230 [INFO][3965] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:07.244216 containerd[1506]: 2025-10-31 02:14:07.235 [INFO][3944] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:07.252014 systemd[1]: run-netns-cni\x2d36f0f121\x2d610f\x2dadf3\x2de29c\x2d401a54a0391c.mount: Deactivated successfully. 
Oct 31 02:14:07.276773 containerd[1506]: time="2025-10-31T02:14:07.276677435Z" level=info msg="TearDown network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" successfully" Oct 31 02:14:07.276773 containerd[1506]: time="2025-10-31T02:14:07.276751144Z" level=info msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" returns successfully" Oct 31 02:14:07.290599 containerd[1506]: time="2025-10-31T02:14:07.289367697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5f5wx,Uid:383b1d33-d54b-4a00-801a-8a36f78ff190,Namespace:calico-system,Attempt:1,}" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.861 [INFO][3945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.861 [INFO][3945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" iface="eth0" netns="/var/run/netns/cni-b2c8bfa7-f39f-92c2-9888-e40e782517eb" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.862 [INFO][3945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" iface="eth0" netns="/var/run/netns/cni-b2c8bfa7-f39f-92c2-9888-e40e782517eb" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" iface="eth0" netns="/var/run/netns/cni-b2c8bfa7-f39f-92c2-9888-e40e782517eb" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:06.863 [INFO][3945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.154 [INFO][3966] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.161 [INFO][3966] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.230 [INFO][3966] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.291 [WARNING][3966] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.291 [INFO][3966] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.297 [INFO][3966] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:07.310114 containerd[1506]: 2025-10-31 02:14:07.303 [INFO][3945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:07.311383 containerd[1506]: time="2025-10-31T02:14:07.311342523Z" level=info msg="TearDown network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" successfully" Oct 31 02:14:07.311884 containerd[1506]: time="2025-10-31T02:14:07.311855372Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" returns successfully" Oct 31 02:14:07.314467 containerd[1506]: time="2025-10-31T02:14:07.314422589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-979f7c865-m2xgg,Uid:e4c39b9a-a5c9-405f-a471-262b649fbc6a,Namespace:calico-system,Attempt:1,}" Oct 31 02:14:07.322513 systemd[1]: run-netns-cni\x2db2c8bfa7\x2df39f\x2d92c2\x2d9888\x2de40e782517eb.mount: Deactivated successfully. 
Oct 31 02:14:07.618562 containerd[1506]: time="2025-10-31T02:14:07.618415239Z" level=info msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" Oct 31 02:14:07.620184 containerd[1506]: time="2025-10-31T02:14:07.618889978Z" level=info msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" Oct 31 02:14:07.627341 containerd[1506]: time="2025-10-31T02:14:07.618961173Z" level=info msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" Oct 31 02:14:07.628229 containerd[1506]: time="2025-10-31T02:14:07.618993961Z" level=info msg="StopPodSandbox for \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" Oct 31 02:14:08.118449 systemd-networkd[1429]: cali74955880cec: Link UP Oct 31 02:14:08.118890 systemd-networkd[1429]: cali74955880cec: Gained carrier Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.494 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.538 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0 whisker-979f7c865- calico-system e4c39b9a-a5c9-405f-a471-262b649fbc6a 940 0 2025-10-31 02:13:44 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:979f7c865 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com whisker-979f7c865-m2xgg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74955880cec [] [] }} ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-" Oct 31 02:14:08.242143 containerd[1506]: 
2025-10-31 02:14:07.538 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.671 [INFO][4023] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.675 [INFO][4023] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f990), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"whisker-979f7c865-m2xgg", "timestamp":"2025-10-31 02:14:07.671067293 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.675 [INFO][4023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.677 [INFO][4023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.678 [INFO][4023] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.746 [INFO][4023] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.807 [INFO][4023] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.839 [INFO][4023] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.872 [INFO][4023] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.901 [INFO][4023] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.901 [INFO][4023] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.908 [INFO][4023] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.957 [INFO][4023] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.990 [INFO][4023] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.0/26] block=192.168.50.0/26 handle="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4023] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.0/26] handle="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.242143 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4023] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.0/26] IPv6=[] ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.006 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0", GenerateName:"whisker-979f7c865-", Namespace:"calico-system", SelfLink:"", UID:"e4c39b9a-a5c9-405f-a471-262b649fbc6a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"979f7c865", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"whisker-979f7c865-m2xgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74955880cec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.011 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.0/32] ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.011 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74955880cec ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.125 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.133 [INFO][4002] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0", GenerateName:"whisker-979f7c865-", Namespace:"calico-system", SelfLink:"", UID:"e4c39b9a-a5c9-405f-a471-262b649fbc6a", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"979f7c865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab", Pod:"whisker-979f7c865-m2xgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74955880cec", MAC:"1a:a6:05:61:23:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:08.245150 containerd[1506]: 2025-10-31 02:14:08.233 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" 
Namespace="calico-system" Pod="whisker-979f7c865-m2xgg" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:08.355575 containerd[1506]: time="2025-10-31T02:14:08.354441363Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:08.355575 containerd[1506]: time="2025-10-31T02:14:08.354582335Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:08.355575 containerd[1506]: time="2025-10-31T02:14:08.354617770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:08.355575 containerd[1506]: time="2025-10-31T02:14:08.354971207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:08.406336 systemd-networkd[1429]: caliaaf0c3d2352: Link UP Oct 31 02:14:08.432902 systemd-networkd[1429]: caliaaf0c3d2352: Gained carrier Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.053 [INFO][4072] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.053 [INFO][4072] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" iface="eth0" netns="/var/run/netns/cni-5ea1767d-e812-8274-2589-2789392ed14a" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.056 [INFO][4072] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" iface="eth0" netns="/var/run/netns/cni-5ea1767d-e812-8274-2589-2789392ed14a" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.065 [INFO][4072] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" iface="eth0" netns="/var/run/netns/cni-5ea1767d-e812-8274-2589-2789392ed14a" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.065 [INFO][4072] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.065 [INFO][4072] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.259 [INFO][4113] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.259 [INFO][4113] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.330 [INFO][4113] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.379 [WARNING][4113] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.379 [INFO][4113] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.402 [INFO][4113] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.463620 containerd[1506]: 2025-10-31 02:14:08.444 [INFO][4072] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:08.465253 containerd[1506]: time="2025-10-31T02:14:08.464927801Z" level=info msg="TearDown network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" successfully" Oct 31 02:14:08.465253 containerd[1506]: time="2025-10-31T02:14:08.465000439Z" level=info msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" returns successfully" Oct 31 02:14:08.470860 containerd[1506]: time="2025-10-31T02:14:08.469353467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-xk5jv,Uid:8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d,Namespace:calico-apiserver,Attempt:1,}" Oct 31 02:14:08.470251 systemd[1]: run-netns-cni\x2d5ea1767d\x2de812\x2d8274\x2d2589\x2d2789392ed14a.mount: Deactivated successfully. 
Oct 31 02:14:08.488428 systemd[1]: Started cri-containerd-59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab.scope - libcontainer container 59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab. Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.974 [INFO][4066] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.978 [INFO][4066] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" iface="eth0" netns="/var/run/netns/cni-76e78308-d9a5-6801-57f8-b3045f217613" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.981 [INFO][4066] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" iface="eth0" netns="/var/run/netns/cni-76e78308-d9a5-6801-57f8-b3045f217613" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4066] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" iface="eth0" netns="/var/run/netns/cni-76e78308-d9a5-6801-57f8-b3045f217613" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4066] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:07.992 [INFO][4066] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.264 [INFO][4100] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.269 [INFO][4100] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.402 [INFO][4100] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.489 [WARNING][4100] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.489 [INFO][4100] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.495 [INFO][4100] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.507213 containerd[1506]: 2025-10-31 02:14:08.500 [INFO][4066] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:08.507213 containerd[1506]: time="2025-10-31T02:14:08.504843180Z" level=info msg="TearDown network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" successfully" Oct 31 02:14:08.507213 containerd[1506]: time="2025-10-31T02:14:08.504878542Z" level=info msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" returns successfully" Oct 31 02:14:08.509619 systemd[1]: run-netns-cni\x2d76e78308\x2dd9a5\x2d6801\x2d57f8\x2db3045f217613.mount: Deactivated successfully. 
Oct 31 02:14:08.512380 containerd[1506]: time="2025-10-31T02:14:08.511271584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jbl2g,Uid:71d3c28c-c709-4960-8b43-030748d0a3ca,Namespace:kube-system,Attempt:1,}" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.489 [INFO][3994] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.530 [INFO][3994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0 goldmane-666569f655- calico-system 383b1d33-d54b-4a00-801a-8a36f78ff190 932 0 2025-10-31 02:13:36 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com goldmane-666569f655-5f5wx eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliaaf0c3d2352 [] [] }} ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.530 [INFO][3994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.710 [INFO][4021] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" 
HandleID="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.715 [INFO][4021] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" HandleID="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000358960), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"goldmane-666569f655-5f5wx", "timestamp":"2025-10-31 02:14:07.710207347 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.715 [INFO][4021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.997 [INFO][4021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:07.998 [INFO][4021] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.038 [INFO][4021] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.098 [INFO][4021] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.131 [INFO][4021] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.149 [INFO][4021] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.219 [INFO][4021] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.220 [INFO][4021] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.241 [INFO][4021] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77 Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.274 [INFO][4021] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.329 [INFO][4021] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.1/26] block=192.168.50.0/26 handle="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.330 [INFO][4021] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.1/26] handle="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.330 [INFO][4021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.579091 containerd[1506]: 2025-10-31 02:14:08.330 [INFO][4021] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.1/26] IPv6=[] ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" HandleID="k8s-pod-network.5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 02:14:08.367 [INFO][3994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"383b1d33-d54b-4a00-801a-8a36f78ff190", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-666569f655-5f5wx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaaf0c3d2352", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 02:14:08.369 [INFO][3994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.1/32] ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 02:14:08.369 [INFO][3994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaaf0c3d2352 ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 02:14:08.449 [INFO][3994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 
02:14:08.461 [INFO][3994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"383b1d33-d54b-4a00-801a-8a36f78ff190", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77", Pod:"goldmane-666569f655-5f5wx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaaf0c3d2352", MAC:"7e:73:ec:f7:fb:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:08.581049 containerd[1506]: 2025-10-31 02:14:08.547 [INFO][3994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77" Namespace="calico-system" Pod="goldmane-666569f655-5f5wx" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:07.979 [INFO][4081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:07.993 [INFO][4081] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" iface="eth0" netns="/var/run/netns/cni-b375357b-1de3-1c6e-409d-8357680bc69f" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:07.997 [INFO][4081] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" iface="eth0" netns="/var/run/netns/cni-b375357b-1de3-1c6e-409d-8357680bc69f" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.000 [INFO][4081] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" iface="eth0" netns="/var/run/netns/cni-b375357b-1de3-1c6e-409d-8357680bc69f" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.000 [INFO][4081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.000 [INFO][4081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.422 [INFO][4098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.422 [INFO][4098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.495 [INFO][4098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.553 [WARNING][4098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.553 [INFO][4098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.584 [INFO][4098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.614831 containerd[1506]: 2025-10-31 02:14:08.591 [INFO][4081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:08.616185 containerd[1506]: time="2025-10-31T02:14:08.615697613Z" level=info msg="TearDown network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" successfully" Oct 31 02:14:08.616185 containerd[1506]: time="2025-10-31T02:14:08.615736489Z" level=info msg="StopPodSandbox for \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" returns successfully" Oct 31 02:14:08.625661 containerd[1506]: time="2025-10-31T02:14:08.625609268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thfzc,Uid:c8e6d4c7-57e9-4902-bae1-886c53b818d8,Namespace:kube-system,Attempt:1,}" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.049 [INFO][4067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.049 [INFO][4067] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" iface="eth0" netns="/var/run/netns/cni-c3850990-d688-ec05-3b7b-648cdcf69ee0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.050 [INFO][4067] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" iface="eth0" netns="/var/run/netns/cni-c3850990-d688-ec05-3b7b-648cdcf69ee0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.052 [INFO][4067] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" iface="eth0" netns="/var/run/netns/cni-c3850990-d688-ec05-3b7b-648cdcf69ee0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.052 [INFO][4067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.053 [INFO][4067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.456 [INFO][4111] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.457 [INFO][4111] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.584 [INFO][4111] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.647 [WARNING][4111] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.647 [INFO][4111] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.668 [INFO][4111] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:08.726153 containerd[1506]: 2025-10-31 02:14:08.685 [INFO][4067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:08.727785 containerd[1506]: time="2025-10-31T02:14:08.727472642Z" level=info msg="TearDown network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" successfully" Oct 31 02:14:08.727785 containerd[1506]: time="2025-10-31T02:14:08.727508047Z" level=info msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" returns successfully" Oct 31 02:14:08.730070 containerd[1506]: time="2025-10-31T02:14:08.730034959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsz7n,Uid:1aba93ae-9569-4e3f-92f8-b96678002f38,Namespace:calico-system,Attempt:1,}" Oct 31 02:14:08.841516 containerd[1506]: time="2025-10-31T02:14:08.839830884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:08.841516 containerd[1506]: time="2025-10-31T02:14:08.841394881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:08.842011 containerd[1506]: time="2025-10-31T02:14:08.841463459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:08.845502 containerd[1506]: time="2025-10-31T02:14:08.843898503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:08.941913 systemd[1]: Started cri-containerd-5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77.scope - libcontainer container 5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77. Oct 31 02:14:09.171503 systemd-networkd[1429]: calidf697394f62: Link UP Oct 31 02:14:09.190142 systemd-networkd[1429]: calidf697394f62: Gained carrier Oct 31 02:14:09.222838 containerd[1506]: time="2025-10-31T02:14:09.221899020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-979f7c865-m2xgg,Uid:e4c39b9a-a5c9-405f-a471-262b649fbc6a,Namespace:calico-system,Attempt:1,} returns sandbox id \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\"" Oct 31 02:14:09.227118 containerd[1506]: time="2025-10-31T02:14:09.226383929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:08.773 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:08.864 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0 calico-apiserver-c48557b4b- calico-apiserver 8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d 963 0 2025-10-31 02:13:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c48557b4b projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com calico-apiserver-c48557b4b-xk5jv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidf697394f62 [] [] }} ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:08.866 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.006 [INFO][4264] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" HandleID="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.010 [INFO][4264] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" HandleID="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328800), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xg3om.gb1.brightbox.com", "pod":"calico-apiserver-c48557b4b-xk5jv", "timestamp":"2025-10-31 02:14:09.006624098 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.010 [INFO][4264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.010 [INFO][4264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.011 [INFO][4264] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.044 [INFO][4264] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.064 [INFO][4264] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.078 [INFO][4264] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.083 [INFO][4264] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.098 [INFO][4264] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.098 [INFO][4264] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.104 [INFO][4264] ipam/ipam.go 
1780: Creating new handle: k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3 Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.112 [INFO][4264] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.128 [INFO][4264] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.3/26] block=192.168.50.0/26 handle="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.128 [INFO][4264] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.3/26] handle="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.128 [INFO][4264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 02:14:09.251637 containerd[1506]: 2025-10-31 02:14:09.128 [INFO][4264] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.3/26] IPv6=[] ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" HandleID="k8s-pod-network.9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.155 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-c48557b4b-xk5jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf697394f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.155 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.3/32] ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.156 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf697394f62 ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.189 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.196 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3", Pod:"calico-apiserver-c48557b4b-xk5jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf697394f62", MAC:"96:e3:fa:be:57:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.255362 containerd[1506]: 2025-10-31 02:14:09.228 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-xk5jv" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:09.341818 systemd-networkd[1429]: cali3f442ed9fea: Link UP Oct 31 
02:14:09.351421 systemd-networkd[1429]: cali3f442ed9fea: Gained carrier Oct 31 02:14:09.408330 containerd[1506]: time="2025-10-31T02:14:09.408113338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5f5wx,Uid:383b1d33-d54b-4a00-801a-8a36f78ff190,Namespace:calico-system,Attempt:1,} returns sandbox id \"5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77\"" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:08.862 [INFO][4185] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:08.911 [INFO][4185] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0 coredns-674b8bbfcf- kube-system 71d3c28c-c709-4960-8b43-030748d0a3ca 960 0 2025-10-31 02:13:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com coredns-674b8bbfcf-jbl2g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3f442ed9fea [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:08.911 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.154 [INFO][4275] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" HandleID="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.155 [INFO][4275] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" HandleID="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa200), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-jbl2g", "timestamp":"2025-10-31 02:14:09.154856354 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.157 [INFO][4275] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.160 [INFO][4275] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.161 [INFO][4275] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.194 [INFO][4275] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.209 [INFO][4275] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.236 [INFO][4275] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.241 [INFO][4275] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.244 [INFO][4275] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.244 [INFO][4275] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.248 [INFO][4275] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140 Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.265 [INFO][4275] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.284 [INFO][4275] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.4/26] block=192.168.50.0/26 handle="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.285 [INFO][4275] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.4/26] handle="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.285 [INFO][4275] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:09.417817 containerd[1506]: 2025-10-31 02:14:09.285 [INFO][4275] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" HandleID="k8s-pod-network.1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.305 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71d3c28c-c709-4960-8b43-030748d0a3ca", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-jbl2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f442ed9fea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.305 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.4/32] ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.305 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f442ed9fea ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.365 [INFO][4185] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.371 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71d3c28c-c709-4960-8b43-030748d0a3ca", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140", Pod:"coredns-674b8bbfcf-jbl2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f442ed9fea", 
MAC:"4e:e9:9d:07:83:f1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.420952 containerd[1506]: 2025-10-31 02:14:09.401 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140" Namespace="kube-system" Pod="coredns-674b8bbfcf-jbl2g" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:09.420952 containerd[1506]: time="2025-10-31T02:14:09.417135179Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:09.420952 containerd[1506]: time="2025-10-31T02:14:09.419090767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:09.420952 containerd[1506]: time="2025-10-31T02:14:09.419113021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:09.423152 containerd[1506]: time="2025-10-31T02:14:09.420869410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:09.462393 systemd[1]: Started cri-containerd-9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3.scope - libcontainer container 9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3. 
Oct 31 02:14:09.484408 systemd[1]: run-netns-cni\x2db375357b\x2d1de3\x2d1c6e\x2d409d\x2d8357680bc69f.mount: Deactivated successfully. Oct 31 02:14:09.487509 systemd[1]: run-netns-cni\x2dc3850990\x2dd688\x2dec05\x2d3b7b\x2d648cdcf69ee0.mount: Deactivated successfully. Oct 31 02:14:09.514770 containerd[1506]: time="2025-10-31T02:14:09.513134282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:09.514770 containerd[1506]: time="2025-10-31T02:14:09.513231920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:09.514770 containerd[1506]: time="2025-10-31T02:14:09.513265091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:09.514770 containerd[1506]: time="2025-10-31T02:14:09.513438616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:09.527384 systemd-networkd[1429]: cali9d74ca420fa: Link UP Oct 31 02:14:09.533592 systemd-networkd[1429]: cali9d74ca420fa: Gained carrier Oct 31 02:14:09.579656 systemd[1]: run-containerd-runc-k8s.io-1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140-runc.rZwQoD.mount: Deactivated successfully. Oct 31 02:14:09.592417 systemd[1]: Started cri-containerd-1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140.scope - libcontainer container 1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140. 
Oct 31 02:14:09.614047 containerd[1506]: time="2025-10-31T02:14:09.613908179Z" level=info msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" Oct 31 02:14:09.644882 containerd[1506]: time="2025-10-31T02:14:09.644554952Z" level=info msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:08.891 [INFO][4221] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:08.947 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0 csi-node-driver- calico-system 1aba93ae-9569-4e3f-92f8-b96678002f38 964 0 2025-10-31 02:13:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com csi-node-driver-rsz7n eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9d74ca420fa [] [] }} ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:08.947 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.237 [INFO][4282] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" HandleID="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.238 [INFO][4282] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" HandleID="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8e0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"csi-node-driver-rsz7n", "timestamp":"2025-10-31 02:14:09.23740594 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.238 [INFO][4282] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.288 [INFO][4282] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.288 [INFO][4282] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.361 [INFO][4282] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.382 [INFO][4282] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.403 [INFO][4282] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.409 [INFO][4282] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.420 [INFO][4282] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.422 [INFO][4282] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.426 [INFO][4282] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9 Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.442 [INFO][4282] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.467 [INFO][4282] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.5/26] block=192.168.50.0/26 handle="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.467 [INFO][4282] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.5/26] handle="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.467 [INFO][4282] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:09.650130 containerd[1506]: 2025-10-31 02:14:09.471 [INFO][4282] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.5/26] IPv6=[] ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" HandleID="k8s-pod-network.60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.651827 containerd[1506]: 2025-10-31 02:14:09.500 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aba93ae-9569-4e3f-92f8-b96678002f38", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-rsz7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d74ca420fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.651827 containerd[1506]: 2025-10-31 02:14:09.500 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.5/32] ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.651827 containerd[1506]: 2025-10-31 02:14:09.500 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9d74ca420fa ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.651827 containerd[1506]: 2025-10-31 02:14:09.547 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 
31 02:14:09.651827 containerd[1506]: 2025-10-31 02:14:09.550 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aba93ae-9569-4e3f-92f8-b96678002f38", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9", Pod:"csi-node-driver-rsz7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d74ca420fa", MAC:"52:98:28:9d:45:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.651827 
containerd[1506]: 2025-10-31 02:14:09.604 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9" Namespace="calico-system" Pod="csi-node-driver-rsz7n" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:09.660916 containerd[1506]: time="2025-10-31T02:14:09.660798559Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:09.725737 containerd[1506]: time="2025-10-31T02:14:09.670185159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 02:14:09.728251 containerd[1506]: time="2025-10-31T02:14:09.670590097Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 02:14:09.746976 kubelet[2763]: E1031 02:14:09.741075 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:09.755892 kubelet[2763]: E1031 02:14:09.755719 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:09.770052 containerd[1506]: time="2025-10-31T02:14:09.769805090Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 02:14:09.777035 systemd-networkd[1429]: calida1fbd8febb: Link UP Oct 31 02:14:09.780917 systemd-networkd[1429]: calida1fbd8febb: Gained carrier Oct 31 02:14:09.832007 kubelet[2763]: E1031 02:14:09.831820 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef444e7681884ff38a072ba2825613b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96scz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-979f7c865-m2xgg_calico-system(e4c39b9a-a5c9-405f-a471-262b649fbc6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:09.869209 containerd[1506]: time="2025-10-31T02:14:09.868899822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jbl2g,Uid:71d3c28c-c709-4960-8b43-030748d0a3ca,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140\"" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.007 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.075 [INFO][4247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0 coredns-674b8bbfcf- kube-system c8e6d4c7-57e9-4902-bae1-886c53b818d8 961 0 2025-10-31 02:13:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com coredns-674b8bbfcf-thfzc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calida1fbd8febb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.076 [INFO][4247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.366 [INFO][4302] ipam/ipam_plugin.go 227: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" HandleID="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.366 [INFO][4302] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" HandleID="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000350400), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-thfzc", "timestamp":"2025-10-31 02:14:09.366151535 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.366 [INFO][4302] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.467 [INFO][4302] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.468 [INFO][4302] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.508 [INFO][4302] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.535 [INFO][4302] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.598 [INFO][4302] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.610 [INFO][4302] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.630 [INFO][4302] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.642 [INFO][4302] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.652 [INFO][4302] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.675 [INFO][4302] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.716 [INFO][4302] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.6/26] block=192.168.50.0/26 handle="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.716 [INFO][4302] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.6/26] handle="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.716 [INFO][4302] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:09.870957 containerd[1506]: 2025-10-31 02:14:09.716 [INFO][4302] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.6/26] IPv6=[] ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" HandleID="k8s-pod-network.eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.736 [INFO][4247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8e6d4c7-57e9-4902-bae1-886c53b818d8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-thfzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1fbd8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.736 [INFO][4247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.6/32] ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.736 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida1fbd8febb ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.785 [INFO][4247] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.787 [INFO][4247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8e6d4c7-57e9-4902-bae1-886c53b818d8", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc", Pod:"coredns-674b8bbfcf-thfzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1fbd8febb", 
MAC:"c6:21:60:eb:e8:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:09.872840 containerd[1506]: 2025-10-31 02:14:09.838 [INFO][4247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc" Namespace="kube-system" Pod="coredns-674b8bbfcf-thfzc" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:09.910024 containerd[1506]: time="2025-10-31T02:14:09.909969230Z" level=info msg="CreateContainer within sandbox \"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 02:14:09.937442 containerd[1506]: time="2025-10-31T02:14:09.937389391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-xk5jv,Uid:8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3\"" Oct 31 02:14:09.951149 containerd[1506]: time="2025-10-31T02:14:09.950710623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:09.951149 containerd[1506]: time="2025-10-31T02:14:09.950818226Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:09.951149 containerd[1506]: time="2025-10-31T02:14:09.950845285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:09.951149 containerd[1506]: time="2025-10-31T02:14:09.951032621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:10.005723 containerd[1506]: time="2025-10-31T02:14:10.005475245Z" level=info msg="CreateContainer within sandbox \"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"961431488d6ecffa21a6731f25ba2a5929d1b3e810181ed0b2258c8c356055e5\"" Oct 31 02:14:10.010320 containerd[1506]: time="2025-10-31T02:14:10.009485091Z" level=info msg="StartContainer for \"961431488d6ecffa21a6731f25ba2a5929d1b3e810181ed0b2258c8c356055e5\"" Oct 31 02:14:10.028389 systemd[1]: Started cri-containerd-60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9.scope - libcontainer container 60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9. Oct 31 02:14:10.082426 systemd-networkd[1429]: cali74955880cec: Gained IPv6LL Oct 31 02:14:10.092356 containerd[1506]: time="2025-10-31T02:14:10.088880629Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:10.092356 containerd[1506]: time="2025-10-31T02:14:10.091870732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:10.092356 containerd[1506]: time="2025-10-31T02:14:10.091910202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:10.097736 containerd[1506]: time="2025-10-31T02:14:10.093909976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.795 [INFO][4435] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.795 [INFO][4435] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" iface="eth0" netns="/var/run/netns/cni-7c624bb6-b3dc-c6e3-a7b4-70d548c50f20" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.795 [INFO][4435] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" iface="eth0" netns="/var/run/netns/cni-7c624bb6-b3dc-c6e3-a7b4-70d548c50f20" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.799 [INFO][4435] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" iface="eth0" netns="/var/run/netns/cni-7c624bb6-b3dc-c6e3-a7b4-70d548c50f20" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.799 [INFO][4435] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:09.799 [INFO][4435] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.090 [INFO][4479] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.093 [INFO][4479] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.093 [INFO][4479] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.113 [WARNING][4479] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.113 [INFO][4479] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.118 [INFO][4479] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:10.135208 containerd[1506]: 2025-10-31 02:14:10.124 [INFO][4435] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:14:10.136862 containerd[1506]: time="2025-10-31T02:14:10.136051035Z" level=info msg="TearDown network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" successfully" Oct 31 02:14:10.136862 containerd[1506]: time="2025-10-31T02:14:10.136092085Z" level=info msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" returns successfully" Oct 31 02:14:10.139602 containerd[1506]: time="2025-10-31T02:14:10.139435759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-ts64b,Uid:e8cd4f39-3f1e-47f1-8de2-399f0cec4257,Namespace:calico-apiserver,Attempt:1,}" Oct 31 02:14:10.165427 containerd[1506]: time="2025-10-31T02:14:10.162901520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rsz7n,Uid:1aba93ae-9569-4e3f-92f8-b96678002f38,Namespace:calico-system,Attempt:1,} returns sandbox id \"60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9\"" Oct 31 
02:14:10.171545 systemd[1]: Started cri-containerd-961431488d6ecffa21a6731f25ba2a5929d1b3e810181ed0b2258c8c356055e5.scope - libcontainer container 961431488d6ecffa21a6731f25ba2a5929d1b3e810181ed0b2258c8c356055e5. Oct 31 02:14:10.192021 systemd[1]: Started cri-containerd-eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc.scope - libcontainer container eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc. Oct 31 02:14:10.208241 containerd[1506]: time="2025-10-31T02:14:10.208191238Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:10.210235 containerd[1506]: time="2025-10-31T02:14:10.210037268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 02:14:10.211929 containerd[1506]: time="2025-10-31T02:14:10.211333884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:10.212292 kubelet[2763]: E1031 02:14:10.212229 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:14:10.212470 kubelet[2763]: E1031 02:14:10.212422 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 
02:14:10.213874 kubelet[2763]: E1031 02:14:10.212913 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbvnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:10.215100 kubelet[2763]: E1031 02:14:10.215040 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:14:10.220033 containerd[1506]: time="2025-10-31T02:14:10.219892276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.047 [INFO][4458] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.051 [INFO][4458] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" iface="eth0" netns="/var/run/netns/cni-cc72db3a-b680-6fb2-42f0-124afb5a8ccf" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.052 [INFO][4458] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" iface="eth0" netns="/var/run/netns/cni-cc72db3a-b680-6fb2-42f0-124afb5a8ccf" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.052 [INFO][4458] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" iface="eth0" netns="/var/run/netns/cni-cc72db3a-b680-6fb2-42f0-124afb5a8ccf" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.052 [INFO][4458] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.052 [INFO][4458] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.248 [INFO][4553] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.249 [INFO][4553] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.249 [INFO][4553] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.268 [WARNING][4553] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.269 [INFO][4553] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.276 [INFO][4553] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:10.294405 containerd[1506]: 2025-10-31 02:14:10.284 [INFO][4458] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:14:10.297035 containerd[1506]: time="2025-10-31T02:14:10.295976098Z" level=info msg="TearDown network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" successfully" Oct 31 02:14:10.297035 containerd[1506]: time="2025-10-31T02:14:10.296027934Z" level=info msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" returns successfully" Oct 31 02:14:10.297824 containerd[1506]: time="2025-10-31T02:14:10.297773128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b84756f78-vnktk,Uid:5c5691c8-bb57-4400-82c8-d0c76d156189,Namespace:calico-system,Attempt:1,}" Oct 31 02:14:10.301482 containerd[1506]: time="2025-10-31T02:14:10.301098890Z" level=info msg="StartContainer for \"961431488d6ecffa21a6731f25ba2a5929d1b3e810181ed0b2258c8c356055e5\" returns successfully" Oct 31 02:14:10.369054 containerd[1506]: time="2025-10-31T02:14:10.368678649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-thfzc,Uid:c8e6d4c7-57e9-4902-bae1-886c53b818d8,Namespace:kube-system,Attempt:1,} returns sandbox id \"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc\"" Oct 31 02:14:10.383911 containerd[1506]: time="2025-10-31T02:14:10.383862762Z" level=info msg="CreateContainer within sandbox \"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 02:14:10.460331 containerd[1506]: time="2025-10-31T02:14:10.458830934Z" level=info msg="CreateContainer within sandbox \"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4323fe24e38ebed9ce498cbf61415c367c53d206087bac7a8b323979488e1fef\"" Oct 31 02:14:10.462777 containerd[1506]: time="2025-10-31T02:14:10.461857568Z" level=info msg="StartContainer for 
\"4323fe24e38ebed9ce498cbf61415c367c53d206087bac7a8b323979488e1fef\"" Oct 31 02:14:10.466793 systemd-networkd[1429]: calidf697394f62: Gained IPv6LL Oct 31 02:14:10.467424 systemd-networkd[1429]: caliaaf0c3d2352: Gained IPv6LL Oct 31 02:14:10.490878 systemd[1]: run-netns-cni\x2dcc72db3a\x2db680\x2d6fb2\x2d42f0\x2d124afb5a8ccf.mount: Deactivated successfully. Oct 31 02:14:10.493294 systemd[1]: run-netns-cni\x2d7c624bb6\x2db3dc\x2dc6e3\x2da7b4\x2d70d548c50f20.mount: Deactivated successfully. Oct 31 02:14:10.543858 containerd[1506]: time="2025-10-31T02:14:10.543384873Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:14:10.587626 containerd[1506]: time="2025-10-31T02:14:10.587316336Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:10.593249 containerd[1506]: time="2025-10-31T02:14:10.592528310Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 02:14:10.593627 containerd[1506]: time="2025-10-31T02:14:10.592561743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:10.594008 kubelet[2763]: E1031 02:14:10.593959 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:10.594388 kubelet[2763]: E1031 02:14:10.594202 2763 kuberuntime_image.go:42] "Failed to 
pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:10.594739 kubelet[2763]: E1031 02:14:10.594641 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-96scz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:Run
timeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-979f7c865-m2xgg_calico-system(e4c39b9a-a5c9-405f-a471-262b649fbc6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:10.595232 containerd[1506]: time="2025-10-31T02:14:10.595053433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:10.597142 kubelet[2763]: E1031 02:14:10.596692 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-979f7c865-m2xgg" podUID="e4c39b9a-a5c9-405f-a471-262b649fbc6a" Oct 31 02:14:10.606845 systemd[1]: Started cri-containerd-4323fe24e38ebed9ce498cbf61415c367c53d206087bac7a8b323979488e1fef.scope - libcontainer container 4323fe24e38ebed9ce498cbf61415c367c53d206087bac7a8b323979488e1fef. 
Oct 31 02:14:10.657527 systemd-networkd[1429]: cali9d74ca420fa: Gained IPv6LL Oct 31 02:14:10.841660 containerd[1506]: time="2025-10-31T02:14:10.841491982Z" level=info msg="StartContainer for \"4323fe24e38ebed9ce498cbf61415c367c53d206087bac7a8b323979488e1fef\" returns successfully" Oct 31 02:14:10.910393 systemd-networkd[1429]: cali2d78dde077d: Link UP Oct 31 02:14:10.913273 systemd-networkd[1429]: cali2d78dde077d: Gained carrier Oct 31 02:14:10.967762 containerd[1506]: time="2025-10-31T02:14:10.967702833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:10.969447 containerd[1506]: time="2025-10-31T02:14:10.969395478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:10.969591 containerd[1506]: time="2025-10-31T02:14:10.969511016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:10.969936 kubelet[2763]: E1031 02:14:10.969879 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:10.970759 kubelet[2763]: E1031 02:14:10.969950 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:10.970759 kubelet[2763]: E1031 02:14:10.970279 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zkqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-xk5jv_calico-apiserver(8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:10.971970 containerd[1506]: time="2025-10-31T02:14:10.971929983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 02:14:10.972319 kubelet[2763]: E1031 02:14:10.972237 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.285 
[INFO][4599] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.324 [INFO][4599] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0 calico-apiserver-c48557b4b- calico-apiserver e8cd4f39-3f1e-47f1-8de2-399f0cec4257 995 0 2025-10-31 02:13:31 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c48557b4b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com calico-apiserver-c48557b4b-ts64b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2d78dde077d [] [] }} ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.324 [INFO][4599] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.642 [INFO][4643] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" HandleID="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.642 [INFO][4643] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" HandleID="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002681c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-xg3om.gb1.brightbox.com", "pod":"calico-apiserver-c48557b4b-ts64b", "timestamp":"2025-10-31 02:14:10.642657678 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.643 [INFO][4643] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.643 [INFO][4643] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.643 [INFO][4643] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.693 [INFO][4643] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.721 [INFO][4643] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.771 [INFO][4643] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.785 [INFO][4643] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.805 [INFO][4643] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.805 [INFO][4643] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.827 [INFO][4643] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7 Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.850 [INFO][4643] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.889 [INFO][4643] 
ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.7/26] block=192.168.50.0/26 handle="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.890 [INFO][4643] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.7/26] handle="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.890 [INFO][4643] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:10.994636 containerd[1506]: 2025-10-31 02:14:10.890 [INFO][4643] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.7/26] IPv6=[] ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" HandleID="k8s-pod-network.e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.902 [INFO][4599] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cd4f39-3f1e-47f1-8de2-399f0cec4257", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-c48557b4b-ts64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d78dde077d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.902 [INFO][4599] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.7/32] ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.903 [INFO][4599] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2d78dde077d ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.914 [INFO][4599] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" 
Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.915 [INFO][4599] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cd4f39-3f1e-47f1-8de2-399f0cec4257", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7", Pod:"calico-apiserver-c48557b4b-ts64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d78dde077d", 
MAC:"ba:d2:9a:6e:24:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:10.999568 containerd[1506]: 2025-10-31 02:14:10.989 [INFO][4599] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7" Namespace="calico-apiserver" Pod="calico-apiserver-c48557b4b-ts64b" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:14:11.042378 systemd-networkd[1429]: calida1fbd8febb: Gained IPv6LL Oct 31 02:14:11.061450 containerd[1506]: time="2025-10-31T02:14:11.061296689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:11.061622 containerd[1506]: time="2025-10-31T02:14:11.061417123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:11.061622 containerd[1506]: time="2025-10-31T02:14:11.061467605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:11.061765 containerd[1506]: time="2025-10-31T02:14:11.061724625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:11.150188 kubelet[2763]: E1031 02:14:11.149453 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:14:11.151516 kubelet[2763]: E1031 02:14:11.150617 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:14:11.159649 containerd[1506]: time="2025-10-31T02:14:11.159608794Z" level=info msg="StopPodSandbox for \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\"" Oct 31 02:14:11.180059 systemd[1]: Started cri-containerd-e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7.scope - libcontainer container e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7. Oct 31 02:14:11.193222 systemd[1]: cri-containerd-59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab.scope: Deactivated successfully. 
Oct 31 02:14:11.220096 systemd-networkd[1429]: calie0f89cf85df: Link UP Oct 31 02:14:11.228291 systemd-networkd[1429]: calie0f89cf85df: Gained carrier Oct 31 02:14:11.234323 systemd-networkd[1429]: cali3f442ed9fea: Gained IPv6LL Oct 31 02:14:11.250024 kubelet[2763]: I1031 02:14:11.248396 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-jbl2g" podStartSLOduration=55.248369322 podStartE2EDuration="55.248369322s" podCreationTimestamp="2025-10-31 02:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:14:11.176314925 +0000 UTC m=+60.926474007" watchObservedRunningTime="2025-10-31 02:14:11.248369322 +0000 UTC m=+60.998528399" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.848 [WARNING][4725] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0", GenerateName:"whisker-979f7c865-", Namespace:"calico-system", SelfLink:"", UID:"e4c39b9a-a5c9-405f-a471-262b649fbc6a", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"979f7c865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab", Pod:"whisker-979f7c865-m2xgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74955880cec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.848 [INFO][4725] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.848 [INFO][4725] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" iface="eth0" netns="" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.848 [INFO][4725] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.848 [INFO][4725] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.980 [INFO][4774] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:10.984 [INFO][4774] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:11.162 [INFO][4774] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:11.219 [WARNING][4774] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:11.224 [INFO][4774] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:11.243 [INFO][4774] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:11.266551 containerd[1506]: 2025-10-31 02:14:11.258 [INFO][4725] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.274627 containerd[1506]: time="2025-10-31T02:14:11.274336677Z" level=info msg="TearDown network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" successfully" Oct 31 02:14:11.274627 containerd[1506]: time="2025-10-31T02:14:11.274379691Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" returns successfully" Oct 31 02:14:11.279469 containerd[1506]: time="2025-10-31T02:14:11.279410255Z" level=info msg="RemovePodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:14:11.279573 containerd[1506]: time="2025-10-31T02:14:11.279468235Z" level=info msg="Forcibly stopping sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.585 [INFO][4642] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.695 [INFO][4642] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0 calico-kube-controllers-b84756f78- calico-system 5c5691c8-bb57-4400-82c8-d0c76d156189 1003 0 2025-10-31 02:13:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:b84756f78 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com calico-kube-controllers-b84756f78-vnktk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie0f89cf85df [] [] }} ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" 
Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.696 [INFO][4642] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.899 [INFO][4743] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" HandleID="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.899 [INFO][4743] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" HandleID="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000125de0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"calico-kube-controllers-b84756f78-vnktk", "timestamp":"2025-10-31 02:14:10.899086083 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.899 [INFO][4743] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.899 [INFO][4743] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.899 [INFO][4743] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.933 [INFO][4743] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:10.994 [INFO][4743] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.023 [INFO][4743] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.033 [INFO][4743] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.047 [INFO][4743] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.048 [INFO][4743] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.055 [INFO][4743] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8 Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.100 [INFO][4743] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 
handle="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.159 [INFO][4743] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.8/26] block=192.168.50.0/26 handle="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.159 [INFO][4743] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.8/26] handle="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.168 [INFO][4743] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:11.294703 containerd[1506]: 2025-10-31 02:14:11.172 [INFO][4743] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.8/26] IPv6=[] ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" HandleID="k8s-pod-network.a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.202 [INFO][4642] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0", GenerateName:"calico-kube-controllers-b84756f78-", Namespace:"calico-system", SelfLink:"", UID:"5c5691c8-bb57-4400-82c8-d0c76d156189", 
ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b84756f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-b84756f78-vnktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie0f89cf85df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.205 [INFO][4642] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.8/32] ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.205 [INFO][4642] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0f89cf85df ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 
31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.239 [INFO][4642] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.252 [INFO][4642] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0", GenerateName:"calico-kube-controllers-b84756f78-", Namespace:"calico-system", SelfLink:"", UID:"5c5691c8-bb57-4400-82c8-d0c76d156189", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b84756f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8", 
Pod:"calico-kube-controllers-b84756f78-vnktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie0f89cf85df", MAC:"7a:d8:ca:17:b6:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:11.295816 containerd[1506]: 2025-10-31 02:14:11.277 [INFO][4642] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8" Namespace="calico-system" Pod="calico-kube-controllers-b84756f78-vnktk" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:14:11.363366 kubelet[2763]: I1031 02:14:11.362995 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-thfzc" podStartSLOduration=55.362952488 podStartE2EDuration="55.362952488s" podCreationTimestamp="2025-10-31 02:13:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 02:14:11.300992102 +0000 UTC m=+61.051151187" watchObservedRunningTime="2025-10-31 02:14:11.362952488 +0000 UTC m=+61.113111566" Oct 31 02:14:11.383341 containerd[1506]: time="2025-10-31T02:14:11.379603342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:11.383341 containerd[1506]: time="2025-10-31T02:14:11.380462588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:11.383341 containerd[1506]: time="2025-10-31T02:14:11.380485670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:11.383341 containerd[1506]: time="2025-10-31T02:14:11.380597105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:11.393082 containerd[1506]: time="2025-10-31T02:14:11.371376715Z" level=info msg="shim disconnected" id=59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab namespace=k8s.io Oct 31 02:14:11.393082 containerd[1506]: time="2025-10-31T02:14:11.392969335Z" level=warning msg="cleaning up after shim disconnected" id=59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab namespace=k8s.io Oct 31 02:14:11.393082 containerd[1506]: time="2025-10-31T02:14:11.393008549Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 02:14:11.398328 containerd[1506]: time="2025-10-31T02:14:11.378090598Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:11.401851 containerd[1506]: time="2025-10-31T02:14:11.400963446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 02:14:11.402083 kubelet[2763]: E1031 02:14:11.401582 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:11.403007 kubelet[2763]: E1031 02:14:11.401673 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:11.403876 containerd[1506]: time="2025-10-31T02:14:11.403631214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 02:14:11.405100 kubelet[2763]: E1031 02:14:11.404937 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDef
ault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:11.414041 containerd[1506]: time="2025-10-31T02:14:11.413702134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 02:14:11.433404 systemd[1]: Started cri-containerd-a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8.scope - libcontainer container a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8. Oct 31 02:14:11.482526 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab-rootfs.mount: Deactivated successfully. Oct 31 02:14:11.482984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab-shm.mount: Deactivated successfully. Oct 31 02:14:11.653354 systemd-networkd[1429]: cali74955880cec: Link DOWN Oct 31 02:14:11.653367 systemd-networkd[1429]: cali74955880cec: Lost carrier Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.521 [WARNING][4871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0", GenerateName:"whisker-979f7c865-", Namespace:"calico-system", SelfLink:"", UID:"e4c39b9a-a5c9-405f-a471-262b649fbc6a", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"979f7c865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab", Pod:"whisker-979f7c865-m2xgg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74955880cec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.522 [INFO][4871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.522 [INFO][4871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" iface="eth0" netns="" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.522 [INFO][4871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.522 [INFO][4871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.619 [INFO][4937] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.620 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.620 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.646 [WARNING][4937] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.646 [INFO][4937] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" HandleID="k8s-pod-network.1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.651 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:11.661790 containerd[1506]: 2025-10-31 02:14:11.657 [INFO][4871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04" Oct 31 02:14:11.662498 containerd[1506]: time="2025-10-31T02:14:11.661861986Z" level=info msg="TearDown network for sandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" successfully" Oct 31 02:14:11.680155 containerd[1506]: time="2025-10-31T02:14:11.680093573Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:11.680311 containerd[1506]: time="2025-10-31T02:14:11.680213688Z" level=info msg="RemovePodSandbox \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" returns successfully" Oct 31 02:14:11.681507 containerd[1506]: time="2025-10-31T02:14:11.681125786Z" level=info msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" Oct 31 02:14:11.781587 containerd[1506]: time="2025-10-31T02:14:11.781382068Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:11.788366 containerd[1506]: time="2025-10-31T02:14:11.788233113Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 02:14:11.790204 containerd[1506]: time="2025-10-31T02:14:11.788336494Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 02:14:11.790369 kubelet[2763]: E1031 02:14:11.789279 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:11.790369 kubelet[2763]: E1031 02:14:11.789352 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:11.790369 kubelet[2763]: E1031 02:14:11.789550 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:11.791474 kubelet[2763]: E1031 02:14:11.790917 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:14:11.880542 containerd[1506]: time="2025-10-31T02:14:11.880424581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-b84756f78-vnktk,Uid:5c5691c8-bb57-4400-82c8-d0c76d156189,Namespace:calico-system,Attempt:1,} returns sandbox id \"a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8\"" Oct 31 02:14:11.892149 containerd[1506]: time="2025-10-31T02:14:11.890831102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 02:14:11.940968 containerd[1506]: time="2025-10-31T02:14:11.939747306Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c48557b4b-ts64b,Uid:e8cd4f39-3f1e-47f1-8de2-399f0cec4257,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7\"" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.649 [INFO][4948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.650 [INFO][4948] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" iface="eth0" netns="/var/run/netns/cni-80f5414b-d143-f64c-08d7-711273c70176" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.651 [INFO][4948] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" iface="eth0" netns="/var/run/netns/cni-80f5414b-d143-f64c-08d7-711273c70176" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.669 [INFO][4948] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" after=19.252777ms iface="eth0" netns="/var/run/netns/cni-80f5414b-d143-f64c-08d7-711273c70176" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.669 [INFO][4948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.670 [INFO][4948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.800 [INFO][4968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.802 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.803 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.951 [INFO][4968] ipam/ipam_plugin.go 455: Released address using handleID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.956 [INFO][4968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.963 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:11.988731 containerd[1506]: 2025-10-31 02:14:11.982 [INFO][4948] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:14:11.994616 containerd[1506]: time="2025-10-31T02:14:11.993487941Z" level=info msg="TearDown network for sandbox \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" successfully" Oct 31 02:14:11.994616 containerd[1506]: time="2025-10-31T02:14:11.993636866Z" level=info msg="StopPodSandbox for \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" returns successfully" Oct 31 02:14:11.996129 containerd[1506]: time="2025-10-31T02:14:11.995759048Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\"" Oct 31 02:14:11.996129 containerd[1506]: time="2025-10-31T02:14:11.995797857Z" level=info msg="StopPodSandbox for \"1c6aea045872647692b09ac5a70d0bd278be763cb8dc7a1e1e89f9e930f81a04\" returns successfully" Oct 31 02:14:12.000542 systemd[1]: run-netns-cni\x2d80f5414b\x2dd143\x2df64c\x2d08d7\x2d711273c70176.mount: Deactivated successfully. 
Oct 31 02:14:12.053791 kubelet[2763]: I1031 02:14:12.053721 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-backend-key-pair\") pod \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\" (UID: \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " Oct 31 02:14:12.053791 kubelet[2763]: I1031 02:14:12.053786 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96scz\" (UniqueName: \"kubernetes.io/projected/e4c39b9a-a5c9-405f-a471-262b649fbc6a-kube-api-access-96scz\") pod \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\" (UID: \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " Oct 31 02:14:12.053791 kubelet[2763]: I1031 02:14:12.053826 2763 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-ca-bundle\") pod \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\" (UID: \"e4c39b9a-a5c9-405f-a471-262b649fbc6a\") " Oct 31 02:14:12.129134 kubelet[2763]: I1031 02:14:12.105132 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e4c39b9a-a5c9-405f-a471-262b649fbc6a" (UID: "e4c39b9a-a5c9-405f-a471-262b649fbc6a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 02:14:12.143869 kubelet[2763]: I1031 02:14:12.143813 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e4c39b9a-a5c9-405f-a471-262b649fbc6a" (UID: "e4c39b9a-a5c9-405f-a471-262b649fbc6a"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 02:14:12.148842 systemd[1]: var-lib-kubelet-pods-e4c39b9a\x2da5c9\x2d405f\x2da471\x2d262b649fbc6a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d96scz.mount: Deactivated successfully. Oct 31 02:14:12.158883 kubelet[2763]: I1031 02:14:12.154426 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-backend-key-pair\") on node \"srv-xg3om.gb1.brightbox.com\" DevicePath \"\"" Oct 31 02:14:12.158883 kubelet[2763]: I1031 02:14:12.154466 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c39b9a-a5c9-405f-a471-262b649fbc6a-whisker-ca-bundle\") on node \"srv-xg3om.gb1.brightbox.com\" DevicePath \"\"" Oct 31 02:14:12.159118 kubelet[2763]: I1031 02:14:12.158443 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c39b9a-a5c9-405f-a471-262b649fbc6a-kube-api-access-96scz" (OuterVolumeSpecName: "kube-api-access-96scz") pod "e4c39b9a-a5c9-405f-a471-262b649fbc6a" (UID: "e4c39b9a-a5c9-405f-a471-262b649fbc6a"). InnerVolumeSpecName "kube-api-access-96scz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 02:14:12.159467 systemd[1]: var-lib-kubelet-pods-e4c39b9a\x2da5c9\x2d405f\x2da471\x2d262b649fbc6a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 02:14:12.196108 systemd-networkd[1429]: cali2d78dde077d: Gained IPv6LL Oct 31 02:14:12.217205 kubelet[2763]: E1031 02:14:12.215768 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:11.935 [WARNING][4983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"383b1d33-d54b-4a00-801a-8a36f78ff190", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77", Pod:"goldmane-666569f655-5f5wx", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaaf0c3d2352", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:11.938 [INFO][4983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:11.938 [INFO][4983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" iface="eth0" netns="" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:11.938 [INFO][4983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:11.938 [INFO][4983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.177 [INFO][5013] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.177 [INFO][5013] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.177 [INFO][5013] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.195 [WARNING][5013] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.195 [INFO][5013] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.202 [INFO][5013] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:12.219839 containerd[1506]: 2025-10-31 02:14:12.212 [INFO][4983] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.222594 containerd[1506]: time="2025-10-31T02:14:12.221886767Z" level=info msg="TearDown network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" successfully" Oct 31 02:14:12.222594 containerd[1506]: time="2025-10-31T02:14:12.221961386Z" level=info msg="StopPodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" returns successfully" Oct 31 02:14:12.220013 systemd[1]: Removed slice kubepods-besteffort-pode4c39b9a_a5c9_405f_a471_262b649fbc6a.slice - libcontainer container kubepods-besteffort-pode4c39b9a_a5c9_405f_a471_262b649fbc6a.slice. 
Oct 31 02:14:12.246514 containerd[1506]: time="2025-10-31T02:14:12.246070248Z" level=info msg="RemovePodSandbox for \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" Oct 31 02:14:12.246965 containerd[1506]: time="2025-10-31T02:14:12.246693639Z" level=info msg="Forcibly stopping sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\"" Oct 31 02:14:12.256182 kubelet[2763]: I1031 02:14:12.256024 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-96scz\" (UniqueName: \"kubernetes.io/projected/e4c39b9a-a5c9-405f-a471-262b649fbc6a-kube-api-access-96scz\") on node \"srv-xg3om.gb1.brightbox.com\" DevicePath \"\"" Oct 31 02:14:12.267769 containerd[1506]: time="2025-10-31T02:14:12.267709551Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:12.269712 containerd[1506]: time="2025-10-31T02:14:12.269422383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 02:14:12.271076 containerd[1506]: time="2025-10-31T02:14:12.269639081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:12.271901 kubelet[2763]: E1031 02:14:12.271258 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:12.271901 kubelet[2763]: E1031 02:14:12.271332 2763 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:12.272051 kubelet[2763]: E1031 02:14:12.271941 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69bkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:12.273527 containerd[1506]: time="2025-10-31T02:14:12.273308415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:12.273625 kubelet[2763]: E1031 02:14:12.273459 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:14:12.321553 systemd-networkd[1429]: calie0f89cf85df: Gained IPv6LL Oct 31 02:14:12.507343 systemd[1]: Created slice kubepods-besteffort-podef5a12a1_5de2_4b02_a15d_c02d3ef6c7da.slice - libcontainer container kubepods-besteffort-podef5a12a1_5de2_4b02_a15d_c02d3ef6c7da.slice. Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.405 [WARNING][5031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"383b1d33-d54b-4a00-801a-8a36f78ff190", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"5a962d48176bdc49448ccc19da85f6efd92effe439b0e8bac90d3bde408c6b77", Pod:"goldmane-666569f655-5f5wx", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliaaf0c3d2352", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.407 [INFO][5031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.407 [INFO][5031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" iface="eth0" netns="" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.407 [INFO][5031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.407 [INFO][5031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.502 [INFO][5040] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.502 [INFO][5040] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.502 [INFO][5040] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.514 [WARNING][5040] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.514 [INFO][5040] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" HandleID="k8s-pod-network.9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Workload="srv--xg3om.gb1.brightbox.com-k8s-goldmane--666569f655--5f5wx-eth0" Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.520 [INFO][5040] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:12.528208 containerd[1506]: 2025-10-31 02:14:12.526 [INFO][5031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074" Oct 31 02:14:12.528953 containerd[1506]: time="2025-10-31T02:14:12.528289367Z" level=info msg="TearDown network for sandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" successfully" Oct 31 02:14:12.533681 containerd[1506]: time="2025-10-31T02:14:12.533585312Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:12.533780 containerd[1506]: time="2025-10-31T02:14:12.533731363Z" level=info msg="RemovePodSandbox \"9a3ef06d7ac67f60928de692176124f257438ce4d42a6291d40de24b172e6074\" returns successfully" Oct 31 02:14:12.534738 containerd[1506]: time="2025-10-31T02:14:12.534585942Z" level=info msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" Oct 31 02:14:12.566765 kubelet[2763]: I1031 02:14:12.564835 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da-whisker-backend-key-pair\") pod \"whisker-75c756744f-85x8s\" (UID: \"ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da\") " pod="calico-system/whisker-75c756744f-85x8s" Oct 31 02:14:12.569125 kubelet[2763]: I1031 02:14:12.568934 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpp4j\" (UniqueName: \"kubernetes.io/projected/ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da-kube-api-access-hpp4j\") pod \"whisker-75c756744f-85x8s\" (UID: \"ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da\") " pod="calico-system/whisker-75c756744f-85x8s" Oct 31 02:14:12.571238 kubelet[2763]: I1031 02:14:12.571208 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da-whisker-ca-bundle\") pod \"whisker-75c756744f-85x8s\" (UID: \"ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da\") " pod="calico-system/whisker-75c756744f-85x8s" Oct 31 02:14:12.629265 kubelet[2763]: I1031 02:14:12.629198 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c39b9a-a5c9-405f-a471-262b649fbc6a" path="/var/lib/kubelet/pods/e4c39b9a-a5c9-405f-a471-262b649fbc6a/volumes" Oct 31 02:14:12.636904 containerd[1506]: time="2025-10-31T02:14:12.636831067Z" level=info msg="trying next host - response 
was http.StatusNotFound" host=ghcr.io Oct 31 02:14:12.641054 containerd[1506]: time="2025-10-31T02:14:12.640994399Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:12.641310 containerd[1506]: time="2025-10-31T02:14:12.641130643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:12.642083 kubelet[2763]: E1031 02:14:12.641599 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:12.642083 kubelet[2763]: E1031 02:14:12.641684 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:12.642083 kubelet[2763]: E1031 02:14:12.641946 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59lsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:12.643715 kubelet[2763]: E1031 02:14:12.643543 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.634 [WARNING][5054] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aba93ae-9569-4e3f-92f8-b96678002f38", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9", Pod:"csi-node-driver-rsz7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d74ca420fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.635 [INFO][5054] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.635 [INFO][5054] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" iface="eth0" netns="" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.635 [INFO][5054] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.635 [INFO][5054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.732 [INFO][5062] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.733 [INFO][5062] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.733 [INFO][5062] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.747 [WARNING][5062] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.748 [INFO][5062] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.752 [INFO][5062] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:12.760974 containerd[1506]: 2025-10-31 02:14:12.758 [INFO][5054] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:12.764337 containerd[1506]: time="2025-10-31T02:14:12.761006473Z" level=info msg="TearDown network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" successfully" Oct 31 02:14:12.764337 containerd[1506]: time="2025-10-31T02:14:12.761043999Z" level=info msg="StopPodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" returns successfully" Oct 31 02:14:12.766039 containerd[1506]: time="2025-10-31T02:14:12.765220195Z" level=info msg="RemovePodSandbox for \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" Oct 31 02:14:12.766039 containerd[1506]: time="2025-10-31T02:14:12.765287433Z" level=info msg="Forcibly stopping sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\"" Oct 31 02:14:12.826311 containerd[1506]: time="2025-10-31T02:14:12.825556488Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-75c756744f-85x8s,Uid:ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da,Namespace:calico-system,Attempt:0,}" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.922 [WARNING][5081] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1aba93ae-9569-4e3f-92f8-b96678002f38", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"60e90f9277b0f65d45473b9089f2da6c56f51741078d6202640e3797b172dfd9", Pod:"csi-node-driver-rsz7n", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9d74ca420fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 
02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.923 [INFO][5081] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.923 [INFO][5081] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" iface="eth0" netns="" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.923 [INFO][5081] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.923 [INFO][5081] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.994 [INFO][5098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.995 [INFO][5098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:12.995 [INFO][5098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:13.011 [WARNING][5098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:13.011 [INFO][5098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" HandleID="k8s-pod-network.d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Workload="srv--xg3om.gb1.brightbox.com-k8s-csi--node--driver--rsz7n-eth0" Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:13.018 [INFO][5098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:13.028456 containerd[1506]: 2025-10-31 02:14:13.024 [INFO][5081] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66" Oct 31 02:14:13.031653 containerd[1506]: time="2025-10-31T02:14:13.029680573Z" level=info msg="TearDown network for sandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" successfully" Oct 31 02:14:13.052468 containerd[1506]: time="2025-10-31T02:14:13.051336677Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:13.052468 containerd[1506]: time="2025-10-31T02:14:13.051462365Z" level=info msg="RemovePodSandbox \"d684a74ea67d220eb67eb10b19022adbeb5bec9063e68371d53b0a7808d79f66\" returns successfully" Oct 31 02:14:13.056183 containerd[1506]: time="2025-10-31T02:14:13.054320673Z" level=info msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" Oct 31 02:14:13.200055 kubelet[2763]: E1031 02:14:13.199989 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:14:13.202102 kubelet[2763]: E1031 02:14:13.201432 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:14:13.296225 systemd-networkd[1429]: cali81984fac757: Link UP Oct 31 02:14:13.299087 systemd-networkd[1429]: cali81984fac757: Gained carrier Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.178 [WARNING][5120] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match 
WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71d3c28c-c709-4960-8b43-030748d0a3ca", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140", Pod:"coredns-674b8bbfcf-jbl2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f442ed9fea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} 
Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.178 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.178 [INFO][5120] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" iface="eth0" netns="" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.178 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.178 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.261 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.262 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.273 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.301 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.301 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.306 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:13.316262 containerd[1506]: 2025-10-31 02:14:13.309 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.316262 containerd[1506]: time="2025-10-31T02:14:13.316105102Z" level=info msg="TearDown network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" successfully" Oct 31 02:14:13.316262 containerd[1506]: time="2025-10-31T02:14:13.316153021Z" level=info msg="StopPodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" returns successfully" Oct 31 02:14:13.321555 containerd[1506]: time="2025-10-31T02:14:13.319531787Z" level=info msg="RemovePodSandbox for \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" Oct 31 02:14:13.321555 containerd[1506]: time="2025-10-31T02:14:13.319586406Z" level=info msg="Forcibly stopping sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\"" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:12.988 [INFO][5089] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.018 [INFO][5089] 
cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0 whisker-75c756744f- calico-system ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da 1086 0 2025-10-31 02:14:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:75c756744f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-xg3om.gb1.brightbox.com whisker-75c756744f-85x8s eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali81984fac757 [] [] }} ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.018 [INFO][5089] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.125 [INFO][5108] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" HandleID="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.127 [INFO][5108] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" HandleID="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001038d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-xg3om.gb1.brightbox.com", "pod":"whisker-75c756744f-85x8s", "timestamp":"2025-10-31 02:14:13.12562574 +0000 UTC"}, Hostname:"srv-xg3om.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.128 [INFO][5108] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.128 [INFO][5108] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.128 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-xg3om.gb1.brightbox.com' Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.147 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.184 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.206 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.214 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.224 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.225 [INFO][5108] 
ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.231 [INFO][5108] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577 Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.256 [INFO][5108] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.272 [INFO][5108] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.50.9/26] block=192.168.50.0/26 handle="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.273 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.50.9/26] handle="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" host="srv-xg3om.gb1.brightbox.com" Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.273 [INFO][5108] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 02:14:13.354271 containerd[1506]: 2025-10-31 02:14:13.273 [INFO][5108] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.50.9/26] IPv6=[] ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" HandleID="k8s-pod-network.11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.278 [INFO][5089] cni-plugin/k8s.go 418: Populated endpoint ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0", GenerateName:"whisker-75c756744f-", Namespace:"calico-system", SelfLink:"", UID:"ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75c756744f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"", Pod:"whisker-75c756744f-85x8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.whisker"}, InterfaceName:"cali81984fac757", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.278 [INFO][5089] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.50.9/32] ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.279 [INFO][5089] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81984fac757 ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.299 [INFO][5089] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.300 [INFO][5089] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0", GenerateName:"whisker-75c756744f-", Namespace:"calico-system", SelfLink:"", 
UID:"ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 14, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"75c756744f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577", Pod:"whisker-75c756744f-85x8s", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.50.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81984fac757", MAC:"36:26:22:5d:97:a9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:13.358804 containerd[1506]: 2025-10-31 02:14:13.337 [INFO][5089] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577" Namespace="calico-system" Pod="whisker-75c756744f-85x8s" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--75c756744f--85x8s-eth0" Oct 31 02:14:13.509985 containerd[1506]: time="2025-10-31T02:14:13.509591591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 02:14:13.510838 containerd[1506]: time="2025-10-31T02:14:13.509915485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 02:14:13.510838 containerd[1506]: time="2025-10-31T02:14:13.509956308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:13.511325 containerd[1506]: time="2025-10-31T02:14:13.511225074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 02:14:13.570417 systemd[1]: Started cri-containerd-11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577.scope - libcontainer container 11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577. Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.599 [WARNING][5156] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"71d3c28c-c709-4960-8b43-030748d0a3ca", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", 
ContainerID:"1f464b26e20199ad7d297c48389af5d4f3f3b0064821802187e0e16083132140", Pod:"coredns-674b8bbfcf-jbl2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3f442ed9fea", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.601 [INFO][5156] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.601 [INFO][5156] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" iface="eth0" netns="" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.602 [INFO][5156] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.602 [INFO][5156] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.671 [INFO][5217] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.671 [INFO][5217] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.671 [INFO][5217] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.686 [WARNING][5217] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.686 [INFO][5217] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" HandleID="k8s-pod-network.c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--jbl2g-eth0" Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.688 [INFO][5217] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:13.703274 containerd[1506]: 2025-10-31 02:14:13.692 [INFO][5156] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0" Oct 31 02:14:13.703274 containerd[1506]: time="2025-10-31T02:14:13.698588773Z" level=info msg="TearDown network for sandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" successfully" Oct 31 02:14:13.707699 containerd[1506]: time="2025-10-31T02:14:13.707654595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:13.707905 containerd[1506]: time="2025-10-31T02:14:13.707844840Z" level=info msg="RemovePodSandbox \"c4fb1ccfc2630208cf36d2ba627ade8c57d5cb2ba635c95cee59e71cf2362ae0\" returns successfully" Oct 31 02:14:13.713017 containerd[1506]: time="2025-10-31T02:14:13.712981607Z" level=info msg="StopPodSandbox for \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" Oct 31 02:14:13.754288 containerd[1506]: time="2025-10-31T02:14:13.752225501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75c756744f-85x8s,Uid:ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da,Namespace:calico-system,Attempt:0,} returns sandbox id \"11df5b1b25a30adad12451e7659e6e964ecf37f7a42da178c393550baeb55577\"" Oct 31 02:14:13.762781 containerd[1506]: time="2025-10-31T02:14:13.762738384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 02:14:13.835216 kernel: bpftool[5255]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.826 [WARNING][5240] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8e6d4c7-57e9-4902-bae1-886c53b818d8", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc", Pod:"coredns-674b8bbfcf-thfzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1fbd8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:13.913550 containerd[1506]: 
2025-10-31 02:14:13.827 [INFO][5240] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.827 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" iface="eth0" netns="" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.827 [INFO][5240] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.827 [INFO][5240] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.890 [INFO][5253] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.891 [INFO][5253] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.891 [INFO][5253] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.904 [WARNING][5253] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.904 [INFO][5253] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.907 [INFO][5253] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:13.913550 containerd[1506]: 2025-10-31 02:14:13.911 [INFO][5240] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:13.915538 containerd[1506]: time="2025-10-31T02:14:13.913613409Z" level=info msg="TearDown network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" successfully" Oct 31 02:14:13.915538 containerd[1506]: time="2025-10-31T02:14:13.913650500Z" level=info msg="StopPodSandbox for \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" returns successfully" Oct 31 02:14:13.915538 containerd[1506]: time="2025-10-31T02:14:13.914553493Z" level=info msg="RemovePodSandbox for \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" Oct 31 02:14:13.915538 containerd[1506]: time="2025-10-31T02:14:13.914591794Z" level=info msg="Forcibly stopping sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\"" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:13.989 [WARNING][5269] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c8e6d4c7-57e9-4902-bae1-886c53b818d8", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"eb5c53a6ff9f7499fbd0b195530f2d4b5e02fc173ba17e9fdbe89f0929bc2adc", Pod:"coredns-674b8bbfcf-thfzc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calida1fbd8febb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:14.068421 containerd[1506]: 
2025-10-31 02:14:13.989 [INFO][5269] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:13.989 [INFO][5269] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" iface="eth0" netns="" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:13.989 [INFO][5269] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:13.990 [INFO][5269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.050 [INFO][5276] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.051 [INFO][5276] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.051 [INFO][5276] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.061 [WARNING][5276] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.061 [INFO][5276] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" HandleID="k8s-pod-network.7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Workload="srv--xg3om.gb1.brightbox.com-k8s-coredns--674b8bbfcf--thfzc-eth0" Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.064 [INFO][5276] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:14.068421 containerd[1506]: 2025-10-31 02:14:14.066 [INFO][5269] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9" Oct 31 02:14:14.069994 containerd[1506]: time="2025-10-31T02:14:14.068454188Z" level=info msg="TearDown network for sandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" successfully" Oct 31 02:14:14.074448 containerd[1506]: time="2025-10-31T02:14:14.074397172Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:14.074609 containerd[1506]: time="2025-10-31T02:14:14.074472469Z" level=info msg="RemovePodSandbox \"7c8794cdd62ca755f78276ff104a48ff905fcd956c4f1fe08d84c45c11228fc9\" returns successfully" Oct 31 02:14:14.075656 containerd[1506]: time="2025-10-31T02:14:14.075241833Z" level=info msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" Oct 31 02:14:14.119417 containerd[1506]: time="2025-10-31T02:14:14.119354710Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:14.121192 containerd[1506]: time="2025-10-31T02:14:14.121115558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 02:14:14.121488 containerd[1506]: time="2025-10-31T02:14:14.121251803Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 02:14:14.121805 kubelet[2763]: E1031 02:14:14.121673 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:14.123072 kubelet[2763]: E1031 02:14:14.121895 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:14.124272 kubelet[2763]: E1031 02:14:14.122408 2763 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef444e7681884ff38a072ba2825613b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:14.127637 containerd[1506]: time="2025-10-31T02:14:14.127385051Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.223 [WARNING][5291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3", Pod:"calico-apiserver-c48557b4b-xk5jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf697394f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.224 [INFO][5291] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.225 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" iface="eth0" netns="" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.225 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.225 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.265 [INFO][5299] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.265 [INFO][5299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.266 [INFO][5299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.280 [WARNING][5299] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.280 [INFO][5299] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.283 [INFO][5299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:14.289133 containerd[1506]: 2025-10-31 02:14:14.286 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.292435 containerd[1506]: time="2025-10-31T02:14:14.289255681Z" level=info msg="TearDown network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" successfully" Oct 31 02:14:14.292435 containerd[1506]: time="2025-10-31T02:14:14.289293887Z" level=info msg="StopPodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" returns successfully" Oct 31 02:14:14.292435 containerd[1506]: time="2025-10-31T02:14:14.290423754Z" level=info msg="RemovePodSandbox for \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" Oct 31 02:14:14.292435 containerd[1506]: time="2025-10-31T02:14:14.290459970Z" level=info msg="Forcibly stopping sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\"" Oct 31 02:14:14.433954 systemd-networkd[1429]: cali81984fac757: Gained IPv6LL Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.356 [WARNING][5314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not 
match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"9393912d5200742af93db1c024928c7e3fb7dd145aa43724f7ac6fd4f4c335b3", Pod:"calico-apiserver-c48557b4b-xk5jv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidf697394f62", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.357 [INFO][5314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.357 
[INFO][5314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" iface="eth0" netns="" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.357 [INFO][5314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.357 [INFO][5314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.419 [INFO][5323] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.420 [INFO][5323] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.420 [INFO][5323] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.433 [WARNING][5323] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.433 [INFO][5323] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" HandleID="k8s-pod-network.0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--xk5jv-eth0" Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.437 [INFO][5323] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:14:14.443504 containerd[1506]: 2025-10-31 02:14:14.439 [INFO][5314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656" Oct 31 02:14:14.444413 containerd[1506]: time="2025-10-31T02:14:14.443665100Z" level=info msg="TearDown network for sandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" successfully" Oct 31 02:14:14.451264 containerd[1506]: time="2025-10-31T02:14:14.451082213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:14:14.451355 containerd[1506]: time="2025-10-31T02:14:14.451259605Z" level=info msg="RemovePodSandbox \"0453b31e619434d8c24a9f6a6303de4ac7e8d018f22d156121567e4bb6c1b656\" returns successfully" Oct 31 02:14:14.508111 containerd[1506]: time="2025-10-31T02:14:14.508027549Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:14.512869 containerd[1506]: time="2025-10-31T02:14:14.511966919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 02:14:14.512869 containerd[1506]: time="2025-10-31T02:14:14.512025959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:14.513039 kubelet[2763]: E1031 02:14:14.512522 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:14.513039 kubelet[2763]: E1031 02:14:14.512614 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:14.513039 kubelet[2763]: E1031 02:14:14.512893 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:14.515522 kubelet[2763]: E1031 02:14:14.514576 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:14:14.574402 systemd-networkd[1429]: vxlan.calico: Link UP Oct 31 02:14:14.574413 systemd-networkd[1429]: vxlan.calico: Gained carrier Oct 31 02:14:15.228383 kubelet[2763]: E1031 02:14:15.228202 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:14:16.097439 systemd-networkd[1429]: vxlan.calico: Gained IPv6LL Oct 31 02:14:23.615732 containerd[1506]: time="2025-10-31T02:14:23.614242186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 02:14:23.946734 containerd[1506]: time="2025-10-31T02:14:23.946340186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:23.948065 containerd[1506]: time="2025-10-31T02:14:23.947879294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 02:14:23.948065 containerd[1506]: time="2025-10-31T02:14:23.947959955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 02:14:23.948310 kubelet[2763]: E1031 02:14:23.948205 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:23.948310 kubelet[2763]: E1031 02:14:23.948293 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:23.949006 kubelet[2763]: E1031 02:14:23.948535 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:23.951719 containerd[1506]: time="2025-10-31T02:14:23.951582842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 02:14:24.273343 containerd[1506]: time="2025-10-31T02:14:24.273196901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:24.274644 containerd[1506]: time="2025-10-31T02:14:24.274597347Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 02:14:24.274837 containerd[1506]: time="2025-10-31T02:14:24.274688259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 02:14:24.274999 kubelet[2763]: E1031 02:14:24.274836 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:24.274999 kubelet[2763]: E1031 02:14:24.274920 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:24.276825 kubelet[2763]: E1031 02:14:24.275115 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefaul
t,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:24.277233 kubelet[2763]: E1031 02:14:24.277106 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:14:24.616309 containerd[1506]: time="2025-10-31T02:14:24.615796194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:24.932942 containerd[1506]: time="2025-10-31T02:14:24.932692447Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:24.934099 containerd[1506]: time="2025-10-31T02:14:24.933939285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" 
failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:24.934099 containerd[1506]: time="2025-10-31T02:14:24.934034561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:24.934348 kubelet[2763]: E1031 02:14:24.934279 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:24.934429 kubelet[2763]: E1031 02:14:24.934346 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:24.936072 kubelet[2763]: E1031 02:14:24.934694 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zkqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-xk5jv_calico-apiserver(8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:24.936450 containerd[1506]: time="2025-10-31T02:14:24.935066194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 02:14:24.936650 kubelet[2763]: E1031 02:14:24.936204 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:14:25.259119 containerd[1506]: time="2025-10-31T02:14:25.258593124Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:25.260856 containerd[1506]: time="2025-10-31T02:14:25.260636019Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 02:14:25.260856 containerd[1506]: time="2025-10-31T02:14:25.260721863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:25.261156 kubelet[2763]: E1031 02:14:25.260870 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:14:25.261156 kubelet[2763]: E1031 02:14:25.260926 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:14:25.262505 kubelet[2763]: E1031 02:14:25.261128 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbvnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,Su
bPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:25.262661 kubelet[2763]: E1031 02:14:25.262541 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:14:25.614728 containerd[1506]: time="2025-10-31T02:14:25.614474477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:25.954889 containerd[1506]: time="2025-10-31T02:14:25.954363493Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:25.956288 containerd[1506]: time="2025-10-31T02:14:25.955962210Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:25.956288 containerd[1506]: time="2025-10-31T02:14:25.956243056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:25.956534 kubelet[2763]: E1031 02:14:25.956474 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:25.956780 kubelet[2763]: E1031 02:14:25.956561 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:25.956906 kubelet[2763]: E1031 02:14:25.956831 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59lsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:25.958365 kubelet[2763]: E1031 02:14:25.958275 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:14:27.615944 containerd[1506]: time="2025-10-31T02:14:27.615848363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 02:14:27.930550 containerd[1506]: 
time="2025-10-31T02:14:27.930339403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:27.932349 containerd[1506]: time="2025-10-31T02:14:27.932193538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 02:14:27.932349 containerd[1506]: time="2025-10-31T02:14:27.932274810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:27.932669 kubelet[2763]: E1031 02:14:27.932588 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:27.932669 kubelet[2763]: E1031 02:14:27.932655 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:27.933321 kubelet[2763]: E1031 02:14:27.933027 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69bkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:27.934712 containerd[1506]: time="2025-10-31T02:14:27.934656323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 02:14:27.935010 kubelet[2763]: E1031 02:14:27.934907 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:14:28.257669 containerd[1506]: 
time="2025-10-31T02:14:28.257492852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:28.259111 containerd[1506]: time="2025-10-31T02:14:28.258969365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 02:14:28.259111 containerd[1506]: time="2025-10-31T02:14:28.259038569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 02:14:28.259391 kubelet[2763]: E1031 02:14:28.259321 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:28.259465 kubelet[2763]: E1031 02:14:28.259409 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:28.259661 kubelet[2763]: E1031 02:14:28.259579 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef444e7681884ff38a072ba2825613b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:28.262844 containerd[1506]: time="2025-10-31T02:14:28.262660230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 
02:14:28.575632 containerd[1506]: time="2025-10-31T02:14:28.575403441Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:28.576623 containerd[1506]: time="2025-10-31T02:14:28.576486601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 02:14:28.577184 containerd[1506]: time="2025-10-31T02:14:28.576581077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:28.577264 kubelet[2763]: E1031 02:14:28.576772 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:28.577264 kubelet[2763]: E1031 02:14:28.576829 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:28.577264 kubelet[2763]: E1031 02:14:28.577033 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:28.578928 kubelet[2763]: E1031 02:14:28.578864 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:14:37.096719 systemd[1]: Started sshd@9-10.230.61.6:22-147.75.109.163:53178.service - OpenSSH per-connection server daemon (147.75.109.163:53178). Oct 31 02:14:38.056257 sshd[5451]: Accepted publickey for core from 147.75.109.163 port 53178 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:14:38.060278 sshd[5451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:14:38.077252 systemd-logind[1484]: New session 12 of user core. Oct 31 02:14:38.084402 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 31 02:14:38.621081 kubelet[2763]: E1031 02:14:38.620229 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:14:39.352334 sshd[5451]: pam_unix(sshd:session): session closed for user core Oct 31 02:14:39.358747 systemd[1]: sshd@9-10.230.61.6:22-147.75.109.163:53178.service: Deactivated successfully. Oct 31 02:14:39.362886 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 02:14:39.364964 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Oct 31 02:14:39.367058 systemd-logind[1484]: Removed session 12. 
Oct 31 02:14:39.619920 kubelet[2763]: E1031 02:14:39.619423 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:14:39.621204 kubelet[2763]: E1031 02:14:39.620619 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:14:39.632308 systemd[1]: run-containerd-runc-k8s.io-5ad04cd61d2d748849411418925a0c41ec61b0ebcfd90a71b5dff6fee7c8c734-runc.0HUBgr.mount: Deactivated successfully. 
Oct 31 02:14:40.615825 kubelet[2763]: E1031 02:14:40.615672 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:14:41.621552 kubelet[2763]: E1031 02:14:41.621378 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:14:42.618306 kubelet[2763]: E1031 02:14:42.618026 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:14:44.520658 systemd[1]: Started sshd@10-10.230.61.6:22-147.75.109.163:33312.service - OpenSSH per-connection server daemon (147.75.109.163:33312). Oct 31 02:14:45.486034 sshd[5489]: Accepted publickey for core from 147.75.109.163 port 33312 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:14:45.489323 sshd[5489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:14:45.499633 systemd-logind[1484]: New session 13 of user core. Oct 31 02:14:45.507399 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 02:14:46.378648 sshd[5489]: pam_unix(sshd:session): session closed for user core Oct 31 02:14:46.390330 systemd[1]: sshd@10-10.230.61.6:22-147.75.109.163:33312.service: Deactivated successfully. Oct 31 02:14:46.394003 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 02:14:46.395966 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Oct 31 02:14:46.397944 systemd-logind[1484]: Removed session 13. 
Oct 31 02:14:49.615914 containerd[1506]: time="2025-10-31T02:14:49.615778557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 02:14:49.951158 containerd[1506]: time="2025-10-31T02:14:49.950549415Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:49.952569 containerd[1506]: time="2025-10-31T02:14:49.952255603Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 02:14:49.952569 containerd[1506]: time="2025-10-31T02:14:49.952266431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 02:14:49.953027 kubelet[2763]: E1031 02:14:49.952918 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:49.954392 kubelet[2763]: E1031 02:14:49.953089 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:14:49.954392 kubelet[2763]: E1031 02:14:49.953572 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:49.958148 containerd[1506]: time="2025-10-31T02:14:49.958091027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 02:14:50.283227 containerd[1506]: time="2025-10-31T02:14:50.283129321Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:50.284417 containerd[1506]: time="2025-10-31T02:14:50.284322762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 02:14:50.284566 containerd[1506]: time="2025-10-31T02:14:50.284456119Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 02:14:50.285245 kubelet[2763]: E1031 02:14:50.284762 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:50.285245 kubelet[2763]: E1031 02:14:50.284861 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:14:50.285245 kubelet[2763]: 
E1031 02:14:50.285092 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:50.286914 kubelet[2763]: E1031 02:14:50.286789 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:14:51.546555 systemd[1]: Started sshd@11-10.230.61.6:22-147.75.109.163:44776.service - OpenSSH per-connection server daemon (147.75.109.163:44776). Oct 31 02:14:52.471149 sshd[5505]: Accepted publickey for core from 147.75.109.163 port 44776 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:14:52.475914 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:14:52.484570 systemd-logind[1484]: New session 14 of user core. Oct 31 02:14:52.491404 systemd[1]: Started session-14.scope - Session 14 of User core. 
Oct 31 02:14:53.232347 sshd[5505]: pam_unix(sshd:session): session closed for user core Oct 31 02:14:53.238850 systemd[1]: sshd@11-10.230.61.6:22-147.75.109.163:44776.service: Deactivated successfully. Oct 31 02:14:53.242006 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 02:14:53.243901 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Oct 31 02:14:53.245646 systemd-logind[1484]: Removed session 14. Oct 31 02:14:53.387418 systemd[1]: Started sshd@12-10.230.61.6:22-147.75.109.163:44782.service - OpenSSH per-connection server daemon (147.75.109.163:44782). Oct 31 02:14:53.615794 containerd[1506]: time="2025-10-31T02:14:53.615191348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 02:14:53.946228 containerd[1506]: time="2025-10-31T02:14:53.945988162Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:53.951939 containerd[1506]: time="2025-10-31T02:14:53.951871763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 02:14:53.952082 containerd[1506]: time="2025-10-31T02:14:53.951991326Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:53.952664 kubelet[2763]: E1031 02:14:53.952374 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:14:53.952664 kubelet[2763]: E1031 02:14:53.952445 2763 kuberuntime_image.go:42] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:14:53.955081 kubelet[2763]: E1031 02:14:53.952839 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbvnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:53.955081 kubelet[2763]: E1031 02:14:53.954450 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 
02:14:53.955347 containerd[1506]: time="2025-10-31T02:14:53.953225855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:54.295145 containerd[1506]: time="2025-10-31T02:14:54.295060893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:54.296352 containerd[1506]: time="2025-10-31T02:14:54.296308664Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:54.296567 containerd[1506]: time="2025-10-31T02:14:54.296431866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:54.296692 kubelet[2763]: E1031 02:14:54.296592 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:54.296692 kubelet[2763]: E1031 02:14:54.296668 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:54.297478 kubelet[2763]: E1031 02:14:54.296943 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59lsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:54.298792 kubelet[2763]: E1031 02:14:54.298742 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:14:54.314199 sshd[5518]: Accepted publickey for core from 147.75.109.163 port 44782 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:14:54.315736 sshd[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:14:54.324885 systemd-logind[1484]: New session 15 of user core. Oct 31 02:14:54.332402 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 31 02:14:54.615312 containerd[1506]: time="2025-10-31T02:14:54.615016278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 02:14:54.943325 containerd[1506]: time="2025-10-31T02:14:54.943156762Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:54.945558 containerd[1506]: time="2025-10-31T02:14:54.945308756Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 02:14:54.945558 containerd[1506]: time="2025-10-31T02:14:54.945370378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 02:14:54.946502 containerd[1506]: time="2025-10-31T02:14:54.946079241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:14:54.946577 kubelet[2763]: E1031 02:14:54.945596 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:54.946577 kubelet[2763]: E1031 02:14:54.945680 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 02:14:54.959933 kubelet[2763]: E1031 02:14:54.945983 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef444e7681884ff38a072ba2825613b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:55.141737 sshd[5518]: pam_unix(sshd:session): session closed for user core Oct 31 02:14:55.147525 systemd[1]: sshd@12-10.230.61.6:22-147.75.109.163:44782.service: 
Deactivated successfully. Oct 31 02:14:55.150143 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 02:14:55.151798 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Oct 31 02:14:55.154152 systemd-logind[1484]: Removed session 15. Oct 31 02:14:55.260132 containerd[1506]: time="2025-10-31T02:14:55.259858044Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:55.269460 containerd[1506]: time="2025-10-31T02:14:55.269361991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:14:55.269610 containerd[1506]: time="2025-10-31T02:14:55.269423070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:14:55.270834 kubelet[2763]: E1031 02:14:55.269828 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:55.270834 kubelet[2763]: E1031 02:14:55.269903 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:14:55.270834 kubelet[2763]: E1031 02:14:55.270315 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8zkqk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-xk5jv_calico-apiserver(8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:55.271685 containerd[1506]: time="2025-10-31T02:14:55.270692401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 02:14:55.272460 kubelet[2763]: E1031 02:14:55.272068 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:14:55.301583 systemd[1]: Started 
sshd@13-10.230.61.6:22-147.75.109.163:44786.service - OpenSSH per-connection server daemon (147.75.109.163:44786). Oct 31 02:14:55.586283 containerd[1506]: time="2025-10-31T02:14:55.586141865Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:55.588204 containerd[1506]: time="2025-10-31T02:14:55.587735711Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 02:14:55.588204 containerd[1506]: time="2025-10-31T02:14:55.587805944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 02:14:55.588412 kubelet[2763]: E1031 02:14:55.588014 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:55.588412 kubelet[2763]: E1031 02:14:55.588081 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 02:14:55.588412 kubelet[2763]: E1031 02:14:55.588279 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:55.589829 kubelet[2763]: E1031 02:14:55.589763 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:14:55.614054 containerd[1506]: time="2025-10-31T02:14:55.613989786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 02:14:55.925332 containerd[1506]: time="2025-10-31T02:14:55.924934907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:14:55.928346 containerd[1506]: time="2025-10-31T02:14:55.928246865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 02:14:55.928976 containerd[1506]: time="2025-10-31T02:14:55.928322289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
active requests=0, bytes read=85" Oct 31 02:14:55.929102 kubelet[2763]: E1031 02:14:55.928886 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:55.929102 kubelet[2763]: E1031 02:14:55.928951 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:14:55.929336 kubelet[2763]: E1031 02:14:55.929180 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69bkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 02:14:55.930842 kubelet[2763]: E1031 02:14:55.930748 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:14:56.237512 sshd[5537]: Accepted publickey for core from 147.75.109.163 port 44786 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:14:56.241387 sshd[5537]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:14:56.253001 systemd-logind[1484]: New session 16 of user core. Oct 31 02:14:56.261390 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 02:14:56.969366 sshd[5537]: pam_unix(sshd:session): session closed for user core Oct 31 02:14:56.975091 systemd[1]: sshd@13-10.230.61.6:22-147.75.109.163:44786.service: Deactivated successfully. Oct 31 02:14:56.977898 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 02:14:56.979036 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Oct 31 02:14:56.982220 systemd-logind[1484]: Removed session 16. Oct 31 02:15:02.145950 systemd[1]: Started sshd@14-10.230.61.6:22-147.75.109.163:59598.service - OpenSSH per-connection server daemon (147.75.109.163:59598). Oct 31 02:15:02.617601 kubelet[2763]: E1031 02:15:02.617139 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:15:03.051961 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 
59598 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:03.054801 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:03.064228 systemd-logind[1484]: New session 17 of user core. Oct 31 02:15:03.071532 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 02:15:03.805993 sshd[5556]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:03.811060 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Oct 31 02:15:03.813533 systemd[1]: sshd@14-10.230.61.6:22-147.75.109.163:59598.service: Deactivated successfully. Oct 31 02:15:03.819437 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 02:15:03.822087 systemd-logind[1484]: Removed session 17. Oct 31 02:15:04.615213 kubelet[2763]: E1031 02:15:04.615049 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:15:05.616742 kubelet[2763]: E1031 02:15:05.616069 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:15:06.618106 kubelet[2763]: E1031 02:15:06.617515 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:15:08.614283 kubelet[2763]: E1031 02:15:08.614140 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:15:08.973598 systemd[1]: Started sshd@15-10.230.61.6:22-147.75.109.163:59602.service - OpenSSH per-connection server daemon (147.75.109.163:59602). 
Oct 31 02:15:09.940397 sshd[5569]: Accepted publickey for core from 147.75.109.163 port 59602 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:09.947106 sshd[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:09.956675 systemd-logind[1484]: New session 18 of user core. Oct 31 02:15:09.964675 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 31 02:15:10.616529 kubelet[2763]: E1031 02:15:10.616326 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:15:10.750449 sshd[5569]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:10.756848 systemd[1]: sshd@15-10.230.61.6:22-147.75.109.163:59602.service: Deactivated successfully. Oct 31 02:15:10.768767 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 02:15:10.770593 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Oct 31 02:15:10.774571 systemd-logind[1484]: Removed session 18. Oct 31 02:15:14.479840 containerd[1506]: time="2025-10-31T02:15:14.478493911Z" level=info msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.617 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0", GenerateName:"calico-kube-controllers-b84756f78-", Namespace:"calico-system", SelfLink:"", UID:"5c5691c8-bb57-4400-82c8-d0c76d156189", ResourceVersion:"1456", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b84756f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8", Pod:"calico-kube-controllers-b84756f78-vnktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie0f89cf85df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.619 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.619 [INFO][5612] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" iface="eth0" netns="" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.619 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.619 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.698 [INFO][5619] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.698 [INFO][5619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.699 [INFO][5619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.712 [WARNING][5619] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.712 [INFO][5619] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.715 [INFO][5619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:14.724238 containerd[1506]: 2025-10-31 02:15:14.720 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.727856 containerd[1506]: time="2025-10-31T02:15:14.724354385Z" level=info msg="TearDown network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" successfully" Oct 31 02:15:14.727856 containerd[1506]: time="2025-10-31T02:15:14.724420243Z" level=info msg="StopPodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" returns successfully" Oct 31 02:15:14.727856 containerd[1506]: time="2025-10-31T02:15:14.725493606Z" level=info msg="RemovePodSandbox for \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" Oct 31 02:15:14.727856 containerd[1506]: time="2025-10-31T02:15:14.725642040Z" level=info msg="Forcibly stopping sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\"" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.810 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0", GenerateName:"calico-kube-controllers-b84756f78-", Namespace:"calico-system", SelfLink:"", UID:"5c5691c8-bb57-4400-82c8-d0c76d156189", ResourceVersion:"1456", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"b84756f78", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"a0098d339f691bf3744634df47c65efce41348fd9af85e0d5aaefa1917618fe8", Pod:"calico-kube-controllers-b84756f78-vnktk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie0f89cf85df", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.810 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.810 [INFO][5633] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" iface="eth0" netns="" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.810 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.810 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.846 [INFO][5640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.847 [INFO][5640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.847 [INFO][5640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.861 [WARNING][5640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.861 [INFO][5640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" HandleID="k8s-pod-network.fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--kube--controllers--b84756f78--vnktk-eth0" Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.868 [INFO][5640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:14.874023 containerd[1506]: 2025-10-31 02:15:14.870 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde" Oct 31 02:15:14.875050 containerd[1506]: time="2025-10-31T02:15:14.874228572Z" level=info msg="TearDown network for sandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" successfully" Oct 31 02:15:14.879854 containerd[1506]: time="2025-10-31T02:15:14.879780456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:15:14.879942 containerd[1506]: time="2025-10-31T02:15:14.879896448Z" level=info msg="RemovePodSandbox \"fad185c7112e6de254198081c72b5c90529575f74e646020023dd33bbd4fddde\" returns successfully" Oct 31 02:15:14.881251 containerd[1506]: time="2025-10-31T02:15:14.880702538Z" level=info msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.940 [WARNING][5654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cd4f39-3f1e-47f1-8de2-399f0cec4257", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7", Pod:"calico-apiserver-c48557b4b-ts64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d78dde077d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.940 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.940 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" iface="eth0" netns="" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.940 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.940 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.988 [INFO][5661] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.989 [INFO][5661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.989 [INFO][5661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.999 [WARNING][5661] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:14.999 [INFO][5661] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:15.003 [INFO][5661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:15.007426 containerd[1506]: 2025-10-31 02:15:15.005 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.008476 containerd[1506]: time="2025-10-31T02:15:15.008304091Z" level=info msg="TearDown network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" successfully" Oct 31 02:15:15.008476 containerd[1506]: time="2025-10-31T02:15:15.008348074Z" level=info msg="StopPodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" returns successfully" Oct 31 02:15:15.009057 containerd[1506]: time="2025-10-31T02:15:15.008989033Z" level=info msg="RemovePodSandbox for \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" Oct 31 02:15:15.009151 containerd[1506]: time="2025-10-31T02:15:15.009052643Z" level=info msg="Forcibly stopping sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\"" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.069 [WARNING][5676] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0", GenerateName:"calico-apiserver-c48557b4b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8cd4f39-3f1e-47f1-8de2-399f0cec4257", ResourceVersion:"1424", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 2, 13, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c48557b4b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-xg3om.gb1.brightbox.com", ContainerID:"e0fe7bca263e863fad4212790befbe4c872d644647e35d1c480116f6c52f53b7", Pod:"calico-apiserver-c48557b4b-ts64b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2d78dde077d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.069 [INFO][5676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.069 [INFO][5676] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" iface="eth0" netns="" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.069 [INFO][5676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.069 [INFO][5676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.122 [INFO][5684] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.123 [INFO][5684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.123 [INFO][5684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.136 [WARNING][5684] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.136 [INFO][5684] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" HandleID="k8s-pod-network.09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Workload="srv--xg3om.gb1.brightbox.com-k8s-calico--apiserver--c48557b4b--ts64b-eth0" Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.140 [INFO][5684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:15.145975 containerd[1506]: 2025-10-31 02:15:15.143 [INFO][5676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d" Oct 31 02:15:15.147924 containerd[1506]: time="2025-10-31T02:15:15.147777682Z" level=info msg="TearDown network for sandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" successfully" Oct 31 02:15:15.153767 containerd[1506]: time="2025-10-31T02:15:15.153714097Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 02:15:15.153876 containerd[1506]: time="2025-10-31T02:15:15.153819111Z" level=info msg="RemovePodSandbox \"09f38ba72ff046b054094a2398c70294b81d3c143da2b8bcee3c4b480a189e4d\" returns successfully" Oct 31 02:15:15.156229 containerd[1506]: time="2025-10-31T02:15:15.155494144Z" level=info msg="StopPodSandbox for \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\"" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.213 [WARNING][5698] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.213 [INFO][5698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.213 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" iface="eth0" netns="" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.213 [INFO][5698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.213 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.251 [INFO][5705] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.253 [INFO][5705] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.254 [INFO][5705] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.265 [WARNING][5705] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.265 [INFO][5705] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.269 [INFO][5705] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:15.276653 containerd[1506]: 2025-10-31 02:15:15.272 [INFO][5698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.276653 containerd[1506]: time="2025-10-31T02:15:15.275411823Z" level=info msg="TearDown network for sandbox \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" successfully" Oct 31 02:15:15.276653 containerd[1506]: time="2025-10-31T02:15:15.275492313Z" level=info msg="StopPodSandbox for \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" returns successfully" Oct 31 02:15:15.280048 containerd[1506]: time="2025-10-31T02:15:15.280007007Z" level=info msg="RemovePodSandbox for \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\"" Oct 31 02:15:15.280181 containerd[1506]: time="2025-10-31T02:15:15.280070230Z" level=info msg="Forcibly stopping sandbox \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\"" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.361 [WARNING][5719] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" WorkloadEndpoint="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.361 [INFO][5719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.361 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" iface="eth0" netns="" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.361 [INFO][5719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.361 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.404 [INFO][5726] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.404 [INFO][5726] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.404 [INFO][5726] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.414 [WARNING][5726] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.415 [INFO][5726] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" HandleID="k8s-pod-network.59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Workload="srv--xg3om.gb1.brightbox.com-k8s-whisker--979f7c865--m2xgg-eth0" Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.418 [INFO][5726] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 02:15:15.425346 containerd[1506]: 2025-10-31 02:15:15.422 [INFO][5719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab" Oct 31 02:15:15.425346 containerd[1506]: time="2025-10-31T02:15:15.425298178Z" level=info msg="TearDown network for sandbox \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" successfully" Oct 31 02:15:15.433403 containerd[1506]: time="2025-10-31T02:15:15.433355802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 02:15:15.433502 containerd[1506]: time="2025-10-31T02:15:15.433424559Z" level=info msg="RemovePodSandbox \"59a3e768f0631b0f9f0107a8e118aefa576f542fe8f3efaffbd6c676b40dd5ab\" returns successfully" Oct 31 02:15:15.917670 systemd[1]: Started sshd@16-10.230.61.6:22-147.75.109.163:43564.service - OpenSSH per-connection server daemon (147.75.109.163:43564). 
Oct 31 02:15:16.617467 kubelet[2763]: E1031 02:15:16.617363 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:15:16.619491 kubelet[2763]: E1031 02:15:16.619249 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:15:16.897401 sshd[5733]: Accepted publickey for core from 147.75.109.163 port 43564 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:16.899048 sshd[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:16.909178 
systemd-logind[1484]: New session 19 of user core. Oct 31 02:15:16.913381 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 02:15:17.728719 sshd[5733]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:17.735874 systemd[1]: sshd@16-10.230.61.6:22-147.75.109.163:43564.service: Deactivated successfully. Oct 31 02:15:17.740653 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 02:15:17.742258 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. Oct 31 02:15:17.744834 systemd-logind[1484]: Removed session 19. Oct 31 02:15:17.896317 systemd[1]: Started sshd@17-10.230.61.6:22-147.75.109.163:43578.service - OpenSSH per-connection server daemon (147.75.109.163:43578). Oct 31 02:15:18.793123 sshd[5748]: Accepted publickey for core from 147.75.109.163 port 43578 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:18.795433 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:18.802313 systemd-logind[1484]: New session 20 of user core. Oct 31 02:15:18.813381 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 02:15:19.617064 kubelet[2763]: E1031 02:15:19.616293 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:15:20.027388 sshd[5748]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:20.041516 systemd[1]: sshd@17-10.230.61.6:22-147.75.109.163:43578.service: Deactivated successfully. 
Oct 31 02:15:20.045096 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 02:15:20.047793 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Oct 31 02:15:20.049431 systemd-logind[1484]: Removed session 20. Oct 31 02:15:20.186764 systemd[1]: Started sshd@18-10.230.61.6:22-147.75.109.163:43590.service - OpenSSH per-connection server daemon (147.75.109.163:43590). Oct 31 02:15:20.618731 kubelet[2763]: E1031 02:15:20.617400 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:15:20.620357 kubelet[2763]: E1031 02:15:20.619229 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:15:21.130551 sshd[5759]: Accepted publickey for core from 147.75.109.163 port 43590 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:21.134596 sshd[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:21.142062 systemd-logind[1484]: New session 21 of user core. Oct 31 02:15:21.157436 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 02:15:22.692923 sshd[5759]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:22.703669 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Oct 31 02:15:22.704758 systemd[1]: sshd@18-10.230.61.6:22-147.75.109.163:43590.service: Deactivated successfully. Oct 31 02:15:22.709596 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 02:15:22.711586 systemd-logind[1484]: Removed session 21. Oct 31 02:15:22.849538 systemd[1]: Started sshd@19-10.230.61.6:22-147.75.109.163:37722.service - OpenSSH per-connection server daemon (147.75.109.163:37722). 
Oct 31 02:15:23.613816 kubelet[2763]: E1031 02:15:23.613712 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:15:23.779637 sshd[5777]: Accepted publickey for core from 147.75.109.163 port 37722 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:23.782385 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:23.790959 systemd-logind[1484]: New session 22 of user core. Oct 31 02:15:23.798426 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 02:15:24.944866 sshd[5777]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:24.951920 systemd[1]: sshd@19-10.230.61.6:22-147.75.109.163:37722.service: Deactivated successfully. Oct 31 02:15:24.956539 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 02:15:24.959034 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. Oct 31 02:15:24.962255 systemd-logind[1484]: Removed session 22. Oct 31 02:15:25.120544 systemd[1]: Started sshd@20-10.230.61.6:22-147.75.109.163:37732.service - OpenSSH per-connection server daemon (147.75.109.163:37732). 
Oct 31 02:15:26.061336 sshd[5788]: Accepted publickey for core from 147.75.109.163 port 37732 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:26.063830 sshd[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:26.071777 systemd-logind[1484]: New session 23 of user core. Oct 31 02:15:26.078377 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 02:15:26.808533 sshd[5788]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:26.814920 systemd[1]: sshd@20-10.230.61.6:22-147.75.109.163:37732.service: Deactivated successfully. Oct 31 02:15:26.818619 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 02:15:26.820558 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Oct 31 02:15:26.822732 systemd-logind[1484]: Removed session 23. Oct 31 02:15:27.614011 kubelet[2763]: E1031 02:15:27.613818 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:15:28.616603 kubelet[2763]: E1031 02:15:28.616212 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:15:30.618754 kubelet[2763]: E1031 02:15:30.618668 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190" Oct 31 02:15:32.027182 systemd[1]: Started sshd@21-10.230.61.6:22-147.75.109.163:38006.service - OpenSSH per-connection server daemon (147.75.109.163:38006). 
Oct 31 02:15:32.626100 kubelet[2763]: E1031 02:15:32.625852 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da" Oct 31 02:15:32.989697 sshd[5803]: Accepted publickey for core from 147.75.109.163 port 38006 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:32.994595 sshd[5803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:33.005950 systemd-logind[1484]: New session 24 of user core. Oct 31 02:15:33.013391 systemd[1]: Started session-24.scope - Session 24 of User core. 
Oct 31 02:15:33.614768 kubelet[2763]: E1031 02:15:33.614703 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-xk5jv" podUID="8c099e8c-e833-4a6d-9d15-b2b6ba86bb9d" Oct 31 02:15:33.847942 sshd[5803]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:33.855902 systemd[1]: sshd@21-10.230.61.6:22-147.75.109.163:38006.service: Deactivated successfully. Oct 31 02:15:33.861846 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 02:15:33.866000 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Oct 31 02:15:33.869573 systemd-logind[1484]: Removed session 24. 
Oct 31 02:15:37.628519 containerd[1506]: time="2025-10-31T02:15:37.628281922Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 02:15:37.959220 containerd[1506]: time="2025-10-31T02:15:37.958335560Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:15:37.962194 containerd[1506]: time="2025-10-31T02:15:37.960864994Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 02:15:37.962194 containerd[1506]: time="2025-10-31T02:15:37.961041807Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 02:15:37.962357 kubelet[2763]: E1031 02:15:37.961881 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:15:37.962357 kubelet[2763]: E1031 02:15:37.962115 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 02:15:37.974339 kubelet[2763]: E1031 02:15:37.974201 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-69bkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-b84756f78-vnktk_calico-system(5c5691c8-bb57-4400-82c8-d0c76d156189): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 02:15:37.975723 kubelet[2763]: E1031 02:15:37.975638 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-b84756f78-vnktk" podUID="5c5691c8-bb57-4400-82c8-d0c76d156189" Oct 31 02:15:39.016363 systemd[1]: Started sshd@22-10.230.61.6:22-147.75.109.163:38018.service - OpenSSH per-connection server daemon (147.75.109.163:38018). 
Oct 31 02:15:39.620535 containerd[1506]: time="2025-10-31T02:15:39.620404369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 02:15:39.985208 sshd[5824]: Accepted publickey for core from 147.75.109.163 port 38018 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 02:15:39.986292 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 02:15:39.998817 systemd-logind[1484]: New session 25 of user core. Oct 31 02:15:40.008393 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 02:15:40.030740 containerd[1506]: time="2025-10-31T02:15:40.030664997Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:15:40.032289 containerd[1506]: time="2025-10-31T02:15:40.032227752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 02:15:40.032431 containerd[1506]: time="2025-10-31T02:15:40.032385443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 02:15:40.034204 kubelet[2763]: E1031 02:15:40.032637 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:15:40.034204 kubelet[2763]: E1031 02:15:40.032756 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 02:15:40.034204 kubelet[2763]: E1031 02:15:40.033116 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-59lsh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-c48557b4b-ts64b_calico-apiserver(e8cd4f39-3f1e-47f1-8de2-399f0cec4257): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 02:15:40.036531 kubelet[2763]: E1031 02:15:40.034680 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-c48557b4b-ts64b" podUID="e8cd4f39-3f1e-47f1-8de2-399f0cec4257" Oct 31 02:15:40.643496 containerd[1506]: time="2025-10-31T02:15:40.642414557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 02:15:40.974639 containerd[1506]: 
time="2025-10-31T02:15:40.974442345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:15:40.976203 containerd[1506]: time="2025-10-31T02:15:40.976084618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 02:15:40.976272 containerd[1506]: time="2025-10-31T02:15:40.976185081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 02:15:40.977466 kubelet[2763]: E1031 02:15:40.977377 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:15:40.977606 kubelet[2763]: E1031 02:15:40.977486 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 02:15:40.979200 kubelet[2763]: E1031 02:15:40.978197 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 02:15:40.981506 containerd[1506]: time="2025-10-31T02:15:40.981465305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 02:15:41.031090 sshd[5824]: pam_unix(sshd:session): session closed for user core Oct 31 02:15:41.037506 systemd[1]: sshd@22-10.230.61.6:22-147.75.109.163:38018.service: Deactivated successfully. Oct 31 02:15:41.042638 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 02:15:41.045079 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit. Oct 31 02:15:41.048193 systemd-logind[1484]: Removed session 25. Oct 31 02:15:41.302543 containerd[1506]: time="2025-10-31T02:15:41.302219153Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 02:15:41.303991 containerd[1506]: time="2025-10-31T02:15:41.303779757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 02:15:41.303991 containerd[1506]: time="2025-10-31T02:15:41.303917277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 02:15:41.304644 kubelet[2763]: E1031 02:15:41.304394 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:15:41.304644 kubelet[2763]: E1031 
02:15:41.304473 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 02:15:41.305468 kubelet[2763]: E1031 02:15:41.304713 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2p6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rsz7n_calico-system(1aba93ae-9569-4e3f-92f8-b96678002f38): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 02:15:41.314192 kubelet[2763]: E1031 02:15:41.313958 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rsz7n" podUID="1aba93ae-9569-4e3f-92f8-b96678002f38" Oct 31 02:15:41.635280 containerd[1506]: time="2025-10-31T02:15:41.634844836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 02:15:41.985330 containerd[1506]: time="2025-10-31T02:15:41.984290885Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 
02:15:41.988251 containerd[1506]: time="2025-10-31T02:15:41.986987303Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 02:15:41.988251 containerd[1506]: time="2025-10-31T02:15:41.987112434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 02:15:41.988410 kubelet[2763]: E1031 02:15:41.987742 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:15:41.988410 kubelet[2763]: E1031 02:15:41.987851 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 02:15:41.989369 kubelet[2763]: E1031 02:15:41.988205 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wbvnq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5f5wx_calico-system(383b1d33-d54b-4a00-801a-8a36f78ff190): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 02:15:41.991047 kubelet[2763]: E1031 02:15:41.990925 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5f5wx" podUID="383b1d33-d54b-4a00-801a-8a36f78ff190"
Oct 31 02:15:43.624231 containerd[1506]: time="2025-10-31T02:15:43.623809121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 31 02:15:44.001951 containerd[1506]: time="2025-10-31T02:15:44.001740703Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 02:15:44.007337 containerd[1506]: time="2025-10-31T02:15:44.007271135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 31 02:15:44.007671 containerd[1506]: time="2025-10-31T02:15:44.007300427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 31 02:15:44.007783 kubelet[2763]: E1031 02:15:44.007695 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 02:15:44.010406 kubelet[2763]: E1031 02:15:44.007826 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 02:15:44.010406 kubelet[2763]: E1031 02:15:44.008065 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef444e7681884ff38a072ba2825613b9,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 02:15:44.011667 containerd[1506]: time="2025-10-31T02:15:44.011632423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 02:15:44.364610 containerd[1506]: time="2025-10-31T02:15:44.364528872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Oct 31 02:15:44.377575 containerd[1506]: time="2025-10-31T02:15:44.377489189Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 02:15:44.377763 containerd[1506]: time="2025-10-31T02:15:44.377704047Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 31 02:15:44.378052 kubelet[2763]: E1031 02:15:44.377969 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 02:15:44.378245 kubelet[2763]: E1031 02:15:44.378076 2763 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 02:15:44.378447 kubelet[2763]: E1031 02:15:44.378363 2763 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hpp4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-75c756744f-85x8s_calico-system(ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 02:15:44.380741 kubelet[2763]: E1031 02:15:44.380663 2763 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-75c756744f-85x8s" podUID="ef5a12a1-5de2-4b02-a15d-c02d3ef6c7da"
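The entries above repeat the same root cause many times over: containerd gets HTTP 404 (NotFound) from ghcr.io for the calico image tags, and kubelet then surfaces ErrImagePull for each affected container. When triaging a log like this it helps to deduplicate the failing image references before looking anywhere else. A minimal sketch (the regex and function name are illustrative, not part of kubelet or containerd; it keys on the literal "failed to pull and unpack image" text seen in the entries above and tolerates the varying levels of quote-escaping):

```python
import re

# A run of backslashes/quotes precedes the image reference because kubelet
# re-escapes the containerd error text; [\\"]+ absorbs any escaping depth.
PULL_FAIL_RE = re.compile(r'failed to pull and unpack image [\\"]+([^"\\]+)')

def failed_pulls(log_text: str) -> set[str]:
    """Return the deduplicated set of image references that failed to pull."""
    return set(PULL_FAIL_RE.findall(log_text))
```

Running this over the journal section above would reduce the wall of errors to three references (goldmane, whisker, whisker-backend, all at v3.30.4), which makes it clear the problem is a missing tag in the registry rather than anything node-specific.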