Oct 31 01:43:35.024365 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Oct 30 22:59:39 -00 2025 Oct 31 01:43:35.024399 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 01:43:35.024420 kernel: BIOS-provided physical RAM map: Oct 31 01:43:35.024434 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Oct 31 01:43:35.024444 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Oct 31 01:43:35.024453 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Oct 31 01:43:35.024472 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Oct 31 01:43:35.024482 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Oct 31 01:43:35.024491 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Oct 31 01:43:35.024501 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Oct 31 01:43:35.024511 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Oct 31 01:43:35.024520 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Oct 31 01:43:35.024541 kernel: NX (Execute Disable) protection: active Oct 31 01:43:35.024552 kernel: APIC: Static calls initialized Oct 31 01:43:35.024571 kernel: SMBIOS 2.8 present. Oct 31 01:43:35.024586 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Oct 31 01:43:35.024598 kernel: Hypervisor detected: KVM Oct 31 01:43:35.024613 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 31 01:43:35.024624 kernel: kvm-clock: using sched offset of 5107038788 cycles Oct 31 01:43:35.024636 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 31 01:43:35.024647 kernel: tsc: Detected 2799.998 MHz processor Oct 31 01:43:35.024658 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 31 01:43:35.024671 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 31 01:43:35.024682 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Oct 31 01:43:35.024693 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Oct 31 01:43:35.024704 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 31 01:43:35.024719 kernel: Using GB pages for direct mapping Oct 31 01:43:35.024731 kernel: ACPI: Early table checksum verification disabled Oct 31 01:43:35.024741 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Oct 31 01:43:35.024752 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024763 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024774 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024785 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Oct 31 01:43:35.024795 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024806 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Oct 31 01:43:35.024821 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024833 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 31 01:43:35.024844 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Oct 31 01:43:35.024854 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Oct 31 01:43:35.024866 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Oct 31 01:43:35.024890 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Oct 31 01:43:35.024901 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Oct 31 01:43:35.024929 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Oct 31 01:43:35.024944 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Oct 31 01:43:35.024955 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 31 01:43:35.024972 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 31 01:43:35.024989 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Oct 31 01:43:35.025000 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Oct 31 01:43:35.025011 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Oct 31 01:43:35.025023 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Oct 31 01:43:35.025052 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Oct 31 01:43:35.025064 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Oct 31 01:43:35.025075 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Oct 31 01:43:35.025086 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Oct 31 01:43:35.025097 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Oct 31 01:43:35.025108 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Oct 31 01:43:35.025119 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Oct 31 01:43:35.025130 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Oct 31 01:43:35.025147 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Oct 31 01:43:35.025164 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Oct 31 01:43:35.025176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 31 01:43:35.025188 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Oct 31 01:43:35.025199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Oct 31 01:43:35.025211 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Oct 31 01:43:35.025222 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Oct 31 01:43:35.025234 kernel: Zone ranges: Oct 31 01:43:35.025245 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 31 01:43:35.025257 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Oct 31 01:43:35.025273 kernel: Normal empty Oct 31 01:43:35.025284 kernel: Movable zone start for each node Oct 31 01:43:35.025296 kernel: Early memory node ranges Oct 31 01:43:35.025307 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Oct 31 01:43:35.025318 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Oct 31 01:43:35.025329 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Oct 31 01:43:35.025341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 31 01:43:35.025352 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Oct 31 01:43:35.025369 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Oct 31 01:43:35.025381 kernel: ACPI: PM-Timer IO Port: 0x608 Oct 31 01:43:35.025398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 31 01:43:35.025410 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Oct 31 01:43:35.025421 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Oct 31 01:43:35.025432 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 31 01:43:35.025444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 31 01:43:35.025455 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 31 01:43:35.025467 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 31 01:43:35.025480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 31 01:43:35.025492 kernel: TSC deadline timer available Oct 31 01:43:35.025508 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Oct 31 01:43:35.025519 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Oct 31 01:43:35.025531 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Oct 31 01:43:35.025542 kernel: Booting paravirtualized kernel on KVM Oct 31 01:43:35.025554 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 31 01:43:35.025565 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Oct 31 01:43:35.025577 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u262144 Oct 31 01:43:35.025588 kernel: pcpu-alloc: s196712 r8192 d32664 u262144 alloc=1*2097152 Oct 31 01:43:35.025599 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Oct 31 01:43:35.025616 kernel: kvm-guest: PV spinlocks enabled Oct 31 01:43:35.025628 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 31 01:43:35.025640 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 01:43:35.025652 kernel: random: crng init done Oct 31 01:43:35.025667 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 31 01:43:35.025679 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 31 01:43:35.025690 kernel: Fallback order for Node 0: 0 Oct 31 01:43:35.025701 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Oct 31 01:43:35.025717 kernel: Policy zone: DMA32 Oct 31 01:43:35.025734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 31 01:43:35.025746 kernel: software IO TLB: area num 16. Oct 31 01:43:35.025758 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 194824K reserved, 0K cma-reserved) Oct 31 01:43:35.025778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Oct 31 01:43:35.025791 kernel: Kernel/User page tables isolation: enabled Oct 31 01:43:35.025802 kernel: ftrace: allocating 37980 entries in 149 pages Oct 31 01:43:35.025813 kernel: ftrace: allocated 149 pages with 4 groups Oct 31 01:43:35.025825 kernel: Dynamic Preempt: voluntary Oct 31 01:43:35.025842 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 31 01:43:35.025854 kernel: rcu: RCU event tracing is enabled. Oct 31 01:43:35.025866 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Oct 31 01:43:35.025877 kernel: Trampoline variant of Tasks RCU enabled. 
Oct 31 01:43:35.025889 kernel: Rude variant of Tasks RCU enabled. Oct 31 01:43:35.025912 kernel: Tracing variant of Tasks RCU enabled. Oct 31 01:43:35.025954 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 31 01:43:35.025967 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Oct 31 01:43:35.025979 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Oct 31 01:43:35.025998 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 31 01:43:35.026009 kernel: Console: colour VGA+ 80x25 Oct 31 01:43:35.026021 kernel: printk: console [tty0] enabled Oct 31 01:43:35.026049 kernel: printk: console [ttyS0] enabled Oct 31 01:43:35.026062 kernel: ACPI: Core revision 20230628 Oct 31 01:43:35.026074 kernel: APIC: Switch to symmetric I/O mode setup Oct 31 01:43:35.026085 kernel: x2apic enabled Oct 31 01:43:35.026097 kernel: APIC: Switched APIC routing to: physical x2apic Oct 31 01:43:35.026115 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 31 01:43:35.026133 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Oct 31 01:43:35.026146 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Oct 31 01:43:35.026158 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Oct 31 01:43:35.026170 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Oct 31 01:43:35.026181 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 31 01:43:35.026193 kernel: Spectre V2 : Mitigation: Retpolines Oct 31 01:43:35.026205 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Oct 31 01:43:35.026217 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Oct 31 01:43:35.026234 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 31 01:43:35.026246 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Oct 31 01:43:35.026258 kernel: MDS: Mitigation: Clear CPU buffers Oct 31 01:43:35.026270 kernel: MMIO Stale Data: Unknown: No mitigations Oct 31 01:43:35.026282 kernel: SRBDS: Unknown: Dependent on hypervisor status Oct 31 01:43:35.026294 kernel: active return thunk: its_return_thunk Oct 31 01:43:35.026305 kernel: ITS: Mitigation: Aligned branch/return thunks Oct 31 01:43:35.026318 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 31 01:43:35.026330 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 31 01:43:35.026342 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 31 01:43:35.026354 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 31 01:43:35.026370 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 31 01:43:35.026383 kernel: Freeing SMP alternatives memory: 32K Oct 31 01:43:35.026399 kernel: pid_max: default: 32768 minimum: 301 Oct 31 01:43:35.026412 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 31 01:43:35.026424 kernel: landlock: Up and running. Oct 31 01:43:35.026436 kernel: SELinux: Initializing. 
Oct 31 01:43:35.026448 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 01:43:35.026459 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Oct 31 01:43:35.026472 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Oct 31 01:43:35.026484 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Oct 31 01:43:35.026496 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Oct 31 01:43:35.026513 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Oct 31 01:43:35.026530 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Oct 31 01:43:35.026542 kernel: signal: max sigframe size: 1776 Oct 31 01:43:35.026554 kernel: rcu: Hierarchical SRCU implementation. Oct 31 01:43:35.026567 kernel: rcu: Max phase no-delay instances is 400. Oct 31 01:43:35.026579 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 31 01:43:35.026591 kernel: smp: Bringing up secondary CPUs ... Oct 31 01:43:35.026603 kernel: smpboot: x86: Booting SMP configuration: Oct 31 01:43:35.026615 kernel: .... node #0, CPUs: #1 Oct 31 01:43:35.026632 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Oct 31 01:43:35.026644 kernel: smp: Brought up 1 node, 2 CPUs Oct 31 01:43:35.026656 kernel: smpboot: Max logical packages: 16 Oct 31 01:43:35.026668 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Oct 31 01:43:35.026680 kernel: devtmpfs: initialized Oct 31 01:43:35.026692 kernel: x86/mm: Memory block size: 128MB Oct 31 01:43:35.026704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 31 01:43:35.026716 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Oct 31 01:43:35.026728 kernel: pinctrl core: initialized pinctrl subsystem Oct 31 01:43:35.026745 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 31 01:43:35.026757 kernel: audit: initializing netlink subsys (disabled) Oct 31 01:43:35.026769 kernel: audit: type=2000 audit(1761875013.499:1): state=initialized audit_enabled=0 res=1 Oct 31 01:43:35.026781 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 31 01:43:35.026793 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 31 01:43:35.026805 kernel: cpuidle: using governor menu Oct 31 01:43:35.026817 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 31 01:43:35.026829 kernel: dca service started, version 1.12.1 Oct 31 01:43:35.026841 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Oct 31 01:43:35.026857 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Oct 31 01:43:35.026870 kernel: PCI: Using configuration type 1 for base access Oct 31 01:43:35.026882 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Oct 31 01:43:35.026894 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 31 01:43:35.026906 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Oct 31 01:43:35.026945 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 31 01:43:35.026959 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Oct 31 01:43:35.026971 kernel: ACPI: Added _OSI(Module Device) Oct 31 01:43:35.026983 kernel: ACPI: Added _OSI(Processor Device) Oct 31 01:43:35.027000 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 31 01:43:35.027024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 31 01:43:35.027046 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Oct 31 01:43:35.027058 kernel: ACPI: Interpreter enabled Oct 31 01:43:35.027070 kernel: ACPI: PM: (supports S0 S5) Oct 31 01:43:35.027082 kernel: ACPI: Using IOAPIC for interrupt routing Oct 31 01:43:35.027095 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 31 01:43:35.027107 kernel: PCI: Using E820 reservations for host bridge windows Oct 31 01:43:35.027119 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Oct 31 01:43:35.027137 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 31 01:43:35.027435 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 31 01:43:35.027648 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 31 01:43:35.027856 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 31 01:43:35.027873 kernel: PCI host bridge to bus 0000:00 Oct 31 01:43:35.028124 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 31 01:43:35.028286 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 31 01:43:35.028454 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 31 01:43:35.028618 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Oct 31 01:43:35.028787 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Oct 31 01:43:35.029003 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Oct 31 01:43:35.029172 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 31 01:43:35.029395 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Oct 31 01:43:35.029606 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Oct 31 01:43:35.029781 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Oct 31 01:43:35.029990 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Oct 31 01:43:35.030178 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Oct 31 01:43:35.030350 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Oct 31 01:43:35.030543 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.030763 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Oct 31 01:43:35.032109 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.032291 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Oct 31 01:43:35.032489 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.032685 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Oct 31 01:43:35.032875 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.033125 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Oct 31 01:43:35.033346 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.035359 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Oct 31 01:43:35.035598 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.035775 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Oct 31 01:43:35.036075 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.036253 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Oct 31 01:43:35.036452 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Oct 31 01:43:35.036639 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Oct 31 01:43:35.036836 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Oct 31 01:43:35.038199 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Oct 31 01:43:35.038395 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Oct 31 01:43:35.038585 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Oct 31 01:43:35.038786 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Oct 31 01:43:35.039018 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Oct 31 01:43:35.039207 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Oct 31 01:43:35.039380 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Oct 31 01:43:35.039551 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Oct 31 01:43:35.039753 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Oct 31 01:43:35.041995 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Oct 31 01:43:35.042223 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Oct 31 01:43:35.042410 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Oct 31 01:43:35.042609 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Oct 31 01:43:35.042809 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Oct 31 01:43:35.044085 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Oct 31 01:43:35.044282 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Oct 31 01:43:35.044543 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Oct 31 01:43:35.044728 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Oct 31 01:43:35.044937 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 31 01:43:35.045670 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 31 01:43:35.045911 kernel: pci_bus 0000:02: extended config space not accessible Oct 31 01:43:35.047188 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Oct 31 01:43:35.047390 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Oct 31 01:43:35.047586 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 31 01:43:35.047763 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 31 01:43:35.049039 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Oct 31 01:43:35.049228 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Oct 31 01:43:35.049404 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 31 01:43:35.049573 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 31 01:43:35.049743 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 31 01:43:35.050979 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Oct 31 01:43:35.051197 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Oct 31 01:43:35.051377 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 31 01:43:35.051550 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 31 01:43:35.051720 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 31 01:43:35.051891 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 31 01:43:35.055023 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 31 01:43:35.055223 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 31 01:43:35.055407 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 31 01:43:35.055578 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 31 01:43:35.055745 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 31 01:43:35.055969 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 31 01:43:35.056189 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 31 01:43:35.056357 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 31 01:43:35.056527 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 31 01:43:35.056730 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 31 01:43:35.056902 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 31 01:43:35.061068 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 31 01:43:35.061252 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 31 01:43:35.061430 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 31 01:43:35.061449 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 31 01:43:35.061462 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 31 01:43:35.061475 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 31 01:43:35.061498 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 31 01:43:35.061510 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Oct 31 01:43:35.061531 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Oct 31 01:43:35.061543 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Oct 31 01:43:35.061556 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Oct 31 01:43:35.061568 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Oct 31 01:43:35.061581 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Oct 31 01:43:35.061593 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Oct 31 01:43:35.061605 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Oct 31 01:43:35.061618 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Oct 31 01:43:35.061630 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Oct 31 01:43:35.061647 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Oct 31 01:43:35.061659 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Oct 31 01:43:35.061672 kernel: iommu: Default domain type: Translated Oct 31 01:43:35.061684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 31 01:43:35.061696 kernel: PCI: Using ACPI for IRQ routing Oct 31 01:43:35.061708 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 31 01:43:35.061721 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Oct 31 01:43:35.061733 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Oct 31 01:43:35.061929 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Oct 31 01:43:35.062120 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Oct 31 01:43:35.062288 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Oct 31 01:43:35.062307 kernel: vgaarb: loaded Oct 31 01:43:35.062319 kernel: clocksource: Switched to clocksource kvm-clock Oct 31 01:43:35.062332 kernel: VFS: Disk quotas dquot_6.6.0 Oct 31 01:43:35.062344 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 31 01:43:35.062357 kernel: pnp: PnP ACPI init Oct 31 01:43:35.062545 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Oct 31 01:43:35.062572 kernel: pnp: PnP ACPI: found 5 devices Oct 31 01:43:35.062585 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 31 01:43:35.062598 kernel: NET: Registered PF_INET protocol family Oct 31 01:43:35.062610 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 31 01:43:35.062623 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Oct 31 01:43:35.062635 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 31 01:43:35.062647 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 31 01:43:35.062659 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Oct 31 01:43:35.062677 kernel: TCP: Hash tables configured (established 16384 bind 16384) Oct 31 01:43:35.062689 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:43:35.062701 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Oct 31 01:43:35.062714 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 31 01:43:35.062726 kernel: NET: Registered PF_XDP protocol family Oct 31 01:43:35.062910 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Oct 31 01:43:35.065228 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Oct 31 01:43:35.065443 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Oct 31 01:43:35.065647 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Oct 31 01:43:35.065827 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Oct 31 01:43:35.067006 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Oct 31 01:43:35.067198 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Oct 31 01:43:35.067369 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Oct 31 01:43:35.067539 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Oct 31 01:43:35.067715 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Oct 31 01:43:35.067884 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Oct 31 01:43:35.069905 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Oct 31 01:43:35.070126 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Oct 31 01:43:35.070297 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Oct 31 01:43:35.070467 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Oct 31 01:43:35.070645 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Oct 31 01:43:35.070839 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Oct 31 01:43:35.071092 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Oct 31 01:43:35.071266 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Oct 31 01:43:35.071455 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Oct 31 01:43:35.071637 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Oct 31 01:43:35.071824 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Oct 31 01:43:35.074017 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Oct 31 01:43:35.074204 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Oct 31 01:43:35.074399 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Oct 31 01:43:35.074696 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 31 01:43:35.075046 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Oct 31 01:43:35.075250 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Oct 31 01:43:35.076190 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Oct 31 01:43:35.076388 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 31 01:43:35.076568 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Oct 31 01:43:35.076751 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Oct 31 01:43:35.076971 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Oct 31 01:43:35.077172 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 31 01:43:35.077344 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Oct 31 01:43:35.077511 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Oct 31 01:43:35.077678 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Oct 31 01:43:35.077845 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 31 01:43:35.078064 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Oct 31 01:43:35.078245 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Oct 31 01:43:35.078424 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Oct 31 01:43:35.078595 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 31 01:43:35.078771 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Oct 31 01:43:35.079011 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Oct 31 01:43:35.079193 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Oct 31 01:43:35.079369 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 31 01:43:35.079538 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Oct 31 01:43:35.079706 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Oct 31 01:43:35.079872 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Oct 31 01:43:35.080083 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 31 01:43:35.080245 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 31 01:43:35.080399 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 31 01:43:35.080552 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 31 01:43:35.080714 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Oct 31 01:43:35.080876 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Oct 31 01:43:35.081096 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Oct 31 01:43:35.081308 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Oct 31 01:43:35.081478 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Oct 31 01:43:35.081638 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Oct 31 
01:43:35.081810 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Oct 31 01:43:35.082015 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Oct 31 01:43:35.082198 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Oct 31 01:43:35.082370 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Oct 31 01:43:35.082588 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Oct 31 01:43:35.082762 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Oct 31 01:43:35.082959 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Oct 31 01:43:35.083164 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Oct 31 01:43:35.083344 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Oct 31 01:43:35.083515 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Oct 31 01:43:35.083715 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Oct 31 01:43:35.083881 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Oct 31 01:43:35.084099 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Oct 31 01:43:35.084284 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Oct 31 01:43:35.084476 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Oct 31 01:43:35.084646 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Oct 31 01:43:35.084895 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Oct 31 01:43:35.085138 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Oct 31 01:43:35.085312 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Oct 31 01:43:35.085507 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Oct 31 01:43:35.085681 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Oct 31 01:43:35.085853 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Oct 31 01:43:35.085887 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Oct 31 01:43:35.085901 kernel: PCI: CLS 0 bytes, default 64 Oct 31 01:43:35.085914 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 31 01:43:35.085927 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Oct 31 01:43:35.085940 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 31 01:43:35.086000 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Oct 31 01:43:35.086013 kernel: Initialise system trusted keyrings Oct 31 01:43:35.086035 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Oct 31 01:43:35.086056 kernel: Key type asymmetric registered Oct 31 01:43:35.086069 kernel: Asymmetric key parser 'x509' registered Oct 31 01:43:35.086082 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Oct 31 01:43:35.086095 kernel: io scheduler mq-deadline registered Oct 31 01:43:35.086108 kernel: io scheduler kyber registered Oct 31 01:43:35.086121 kernel: io scheduler bfq registered Oct 31 01:43:35.086292 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Oct 31 01:43:35.086472 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Oct 31 01:43:35.086645 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.086854 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Oct 31 01:43:35.087084 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Oct 
31 01:43:35.087254 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.087430 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Oct 31 01:43:35.087637 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Oct 31 01:43:35.087806 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.087996 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Oct 31 01:43:35.088186 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Oct 31 01:43:35.088362 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.088542 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Oct 31 01:43:35.088731 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Oct 31 01:43:35.088910 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.089156 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Oct 31 01:43:35.089324 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Oct 31 01:43:35.089491 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.089658 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Oct 31 01:43:35.089829 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Oct 31 01:43:35.090024 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.090212 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Oct 31 01:43:35.090380 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Oct 31 01:43:35.090561 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Oct 31 01:43:35.090581 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 31 01:43:35.090595 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Oct 31 01:43:35.090609 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Oct 31 01:43:35.090629 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 31 01:43:35.090642 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 31 01:43:35.090657 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 31 01:43:35.090670 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 31 01:43:35.090683 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 31 01:43:35.090886 kernel: rtc_cmos 00:03: RTC can wake from S4 Oct 31 01:43:35.090907 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 31 01:43:35.091105 kernel: rtc_cmos 00:03: registered as rtc0 Oct 31 01:43:35.091275 kernel: rtc_cmos 00:03: setting system clock to 2025-10-31T01:43:34 UTC (1761875014) Oct 31 01:43:35.091436 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Oct 31 01:43:35.091455 kernel: intel_pstate: CPU model not supported Oct 31 01:43:35.091468 kernel: NET: Registered PF_INET6 protocol family Oct 31 01:43:35.091481 kernel: Segment Routing with IPv6 Oct 31 01:43:35.091494 kernel: In-situ OAM (IOAM) with IPv6 Oct 31 01:43:35.091507 kernel: NET: Registered 
PF_PACKET protocol family Oct 31 01:43:35.091519 kernel: Key type dns_resolver registered Oct 31 01:43:35.091532 kernel: IPI shorthand broadcast: enabled Oct 31 01:43:35.091552 kernel: sched_clock: Marking stable (1459005114, 218410140)->(1908975347, -231560093) Oct 31 01:43:35.091565 kernel: registered taskstats version 1 Oct 31 01:43:35.091577 kernel: Loading compiled-in X.509 certificates Oct 31 01:43:35.091591 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 3640cadef2ce00a652278ae302be325ebb54a228' Oct 31 01:43:35.091603 kernel: Key type .fscrypt registered Oct 31 01:43:35.091616 kernel: Key type fscrypt-provisioning registered Oct 31 01:43:35.091628 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 31 01:43:35.091641 kernel: ima: Allocated hash algorithm: sha1 Oct 31 01:43:35.091653 kernel: ima: No architecture policies found Oct 31 01:43:35.091670 kernel: clk: Disabling unused clocks Oct 31 01:43:35.091683 kernel: Freeing unused kernel image (initmem) memory: 42880K Oct 31 01:43:35.091696 kernel: Write protecting the kernel read-only data: 36864k Oct 31 01:43:35.091709 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Oct 31 01:43:35.091722 kernel: Run /init as init process Oct 31 01:43:35.091735 kernel: with arguments: Oct 31 01:43:35.091748 kernel: /init Oct 31 01:43:35.091761 kernel: with environment: Oct 31 01:43:35.091773 kernel: HOME=/ Oct 31 01:43:35.091785 kernel: TERM=linux Oct 31 01:43:35.091806 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 01:43:35.091822 systemd[1]: Detected virtualization kvm. Oct 31 01:43:35.091836 systemd[1]: Detected architecture x86-64. Oct 31 01:43:35.091849 systemd[1]: Running in initrd. Oct 31 01:43:35.091862 systemd[1]: No hostname configured, using default hostname. Oct 31 01:43:35.091875 systemd[1]: Hostname set to . Oct 31 01:43:35.091889 systemd[1]: Initializing machine ID from VM UUID. Oct 31 01:43:35.091911 systemd[1]: Queued start job for default target initrd.target. Oct 31 01:43:35.091965 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:43:35.091980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:43:35.091995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 01:43:35.092009 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 01:43:35.092022 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 01:43:35.092050 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 01:43:35.092073 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 31 01:43:35.092087 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 31 01:43:35.092101 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:43:35.092115 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Oct 31 01:43:35.092128 systemd[1]: Reached target paths.target - Path Units. Oct 31 01:43:35.092142 systemd[1]: Reached target slices.target - Slice Units. Oct 31 01:43:35.092155 systemd[1]: Reached target swap.target - Swaps. Oct 31 01:43:35.092169 systemd[1]: Reached target timers.target - Timer Units. Oct 31 01:43:35.092187 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 01:43:35.092201 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 01:43:35.092215 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 01:43:35.092228 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Oct 31 01:43:35.092242 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:43:35.092255 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 01:43:35.092269 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:43:35.092282 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 01:43:35.092313 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 01:43:35.092326 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 01:43:35.092340 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 01:43:35.092353 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 01:43:35.092378 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 01:43:35.092391 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 01:43:35.092403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:43:35.092467 systemd-journald[202]: Collecting audit messages is disabled. Oct 31 01:43:35.092513 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 01:43:35.092526 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 01:43:35.092538 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 01:43:35.092569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 31 01:43:35.092582 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 01:43:35.092594 kernel: Bridge firewalling registered Oct 31 01:43:35.092607 systemd-journald[202]: Journal started Oct 31 01:43:35.092635 systemd-journald[202]: Runtime Journal (/run/log/journal/cb2ae7e7c8d74ae7a8391d028744279d) is 4.7M, max 38.0M, 33.2M free. Oct 31 01:43:35.038668 systemd-modules-load[203]: Inserted module 'overlay' Oct 31 01:43:35.081355 systemd-modules-load[203]: Inserted module 'br_netfilter' Oct 31 01:43:35.148964 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 01:43:35.149665 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 01:43:35.150759 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:43:35.159202 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 01:43:35.163154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 01:43:35.176628 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Oct 31 01:43:35.180038 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 01:43:35.183168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 01:43:35.196369 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 01:43:35.209629 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:43:35.210864 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:43:35.219192 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 01:43:35.224171 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 01:43:35.226322 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:43:35.243382 dracut-cmdline[234]: dracut-dracut-053 Oct 31 01:43:35.248967 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=950876ad7bc3e9634b7585a81697da4ef03ac6558969e5c002165369dd7c7885 Oct 31 01:43:35.278180 systemd-resolved[236]: Positive Trust Anchors: Oct 31 01:43:35.279307 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:43:35.280287 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 01:43:35.287243 systemd-resolved[236]: Defaulting to hostname 'linux'. Oct 31 01:43:35.289693 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 01:43:35.291411 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:43:35.343974 kernel: SCSI subsystem initialized Oct 31 01:43:35.355951 kernel: Loading iSCSI transport class v2.0-870. Oct 31 01:43:35.368959 kernel: iscsi: registered transport (tcp) Oct 31 01:43:35.395457 kernel: iscsi: registered transport (qla4xxx) Oct 31 01:43:35.395536 kernel: QLogic iSCSI HBA Driver Oct 31 01:43:35.454714 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 01:43:35.462166 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 01:43:35.506701 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Oct 31 01:43:35.506838 kernel: device-mapper: uevent: version 1.0.3 Oct 31 01:43:35.507604 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 31 01:43:35.559043 kernel: raid6: sse2x4 gen() 7564 MB/s Oct 31 01:43:35.575985 kernel: raid6: sse2x2 gen() 5289 MB/s Oct 31 01:43:35.594573 kernel: raid6: sse2x1 gen() 5370 MB/s Oct 31 01:43:35.594705 kernel: raid6: using algorithm sse2x4 gen() 7564 MB/s Oct 31 01:43:35.613727 kernel: raid6: .... xor() 7969 MB/s, rmw enabled Oct 31 01:43:35.613840 kernel: raid6: using ssse3x2 recovery algorithm Oct 31 01:43:35.639071 kernel: xor: automatically using best checksumming function avx Oct 31 01:43:35.840119 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 01:43:35.857630 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 01:43:35.867233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:43:35.887408 systemd-udevd[420]: Using default interface naming scheme 'v255'. Oct 31 01:43:35.894159 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:43:35.902136 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 01:43:35.930537 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Oct 31 01:43:35.972174 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 01:43:35.980147 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 01:43:36.096051 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:43:36.105219 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 01:43:36.140182 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 01:43:36.143944 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 01:43:36.145452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:43:36.147756 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 01:43:36.159097 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 01:43:36.184060 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 01:43:36.225074 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Oct 31 01:43:36.239393 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Oct 31 01:43:36.239695 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 01:43:36.253983 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 01:43:36.254052 kernel: GPT:17805311 != 125829119 Oct 31 01:43:36.255092 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 01:43:36.257134 kernel: GPT:17805311 != 125829119 Oct 31 01:43:36.257173 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 31 01:43:36.259690 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:43:36.276974 kernel: AVX version of gcm_enc/dec engaged. Oct 31 01:43:36.278436 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 01:43:36.278613 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:43:36.283350 kernel: AES CTR mode by8 optimization enabled Oct 31 01:43:36.283134 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Oct 31 01:43:36.285918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 01:43:36.286146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:43:36.296028 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:43:36.307973 kernel: ACPI: bus type USB registered Oct 31 01:43:36.308046 kernel: usbcore: registered new interface driver usbfs Oct 31 01:43:36.308678 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:43:36.313561 kernel: usbcore: registered new interface driver hub Oct 31 01:43:36.313594 kernel: usbcore: registered new device driver usb Oct 31 01:43:36.357107 kernel: BTRFS: device fsid 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (478) Oct 31 01:43:36.359951 kernel: libata version 3.00 loaded. Oct 31 01:43:36.376965 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 01:43:36.380945 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475) Oct 31 01:43:36.384080 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 01:43:36.386137 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Oct 31 01:43:36.386396 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 01:43:36.389974 kernel: scsi host0: ahci Oct 31 01:43:36.390941 kernel: scsi host1: ahci Oct 31 01:43:36.391208 kernel: scsi host2: ahci Oct 31 01:43:36.391937 kernel: scsi host3: ahci Oct 31 01:43:36.395097 kernel: scsi host4: ahci Oct 31 01:43:36.397078 kernel: scsi host5: ahci Oct 31 01:43:36.400079 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Oct 31 01:43:36.400111 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Oct 31 01:43:36.400128 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Oct 31 01:43:36.400145 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Oct 31 01:43:36.400161 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Oct 31 01:43:36.400177 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Oct 31 01:43:36.408669 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 01:43:36.485365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:43:36.493234 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 01:43:36.499428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 01:43:36.500271 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 31 01:43:36.507798 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 01:43:36.519189 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 01:43:36.523529 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 01:43:36.528403 disk-uuid[563]: Primary Header is updated. Oct 31 01:43:36.528403 disk-uuid[563]: Secondary Entries is updated. Oct 31 01:43:36.528403 disk-uuid[563]: Secondary Header is updated. 
Oct 31 01:43:36.535933 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:43:36.545416 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:43:36.553174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:43:36.711950 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.719663 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.719725 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.719757 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.719776 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.719792 kernel: ata3: SATA link down (SStatus 0 SControl 300) Oct 31 01:43:36.730953 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 01:43:36.736956 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Oct 31 01:43:36.739945 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Oct 31 01:43:36.744032 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Oct 31 01:43:36.744293 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Oct 31 01:43:36.744504 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Oct 31 01:43:36.746161 kernel: hub 1-0:1.0: USB hub found Oct 31 01:43:36.747186 kernel: hub 1-0:1.0: 4 ports detected Oct 31 01:43:36.751530 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Oct 31 01:43:36.751828 kernel: hub 2-0:1.0: USB hub found Oct 31 01:43:36.752102 kernel: hub 2-0:1.0: 4 ports detected Oct 31 01:43:36.988013 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Oct 31 01:43:37.130252 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 31 01:43:37.134842 kernel: usbcore: registered new interface driver usbhid Oct 31 01:43:37.134949 kernel: usbhid: USB HID core driver Oct 31 01:43:37.143171 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Oct 31 01:43:37.143221 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Oct 31 01:43:37.551267 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 01:43:37.552424 disk-uuid[564]: The operation has completed successfully. Oct 31 01:43:37.602414 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 01:43:37.602585 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 01:43:37.624135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 31 01:43:37.637150 sh[587]: Success Oct 31 01:43:37.654136 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Oct 31 01:43:37.713438 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 31 01:43:37.722122 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 31 01:43:37.728699 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
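verity-setup above prepares /dev/mapper/usr, and device-mapper reports sha256 hashing via the "sha256-avx" implementation. Conceptually, dm-verity hashes every data block and checks it against a precomputed hash tree at read time; the sketch below shows only the leaf-level hashing step, with an assumed 4096-byte block size, simplified salt handling, and an illustrative image path:

```python
# Simplified illustration of dm-verity's per-block hashing: each data block
# is hashed with sha256 and later compared against a precomputed hash tree.
# Only the leaf level is shown; block size and salt handling are assumptions.
import hashlib

BLOCK_SIZE = 4096  # assumed; real setups record this in the verity superblock

def leaf_hashes(path, salt=b""):
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            block = block.ljust(BLOCK_SIZE, b"\0")  # short final block padded
            yield hashlib.sha256(salt + block).hexdigest()

# Example: print the first few leaf hashes of some image file.
for i, digest in zip(range(3), leaf_hashes("/path/to/usr.img")):
    print(i, digest)
```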
Oct 31 01:43:37.754994 kernel: BTRFS info (device dm-0): first mount of filesystem 1021cdf2-f4a0-46ed-8fe0-b31d3115a6e0 Oct 31 01:43:37.755061 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:43:37.755080 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 31 01:43:37.758661 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 01:43:37.758699 kernel: BTRFS info (device dm-0): using free space tree Oct 31 01:43:37.770207 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 31 01:43:37.771254 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 01:43:37.777120 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 01:43:37.781146 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 01:43:37.794952 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 01:43:37.795025 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:43:37.795046 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:43:37.803937 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 01:43:37.815391 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 31 01:43:37.818668 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 01:43:37.825083 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 01:43:37.833157 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 31 01:43:38.046618 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 01:43:38.064435 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 01:43:38.140627 systemd-networkd[770]: lo: Link UP Oct 31 01:43:38.140640 systemd-networkd[770]: lo: Gained carrier Oct 31 01:43:38.147578 systemd-networkd[770]: Enumeration completed Oct 31 01:43:38.147709 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 01:43:38.150163 systemd[1]: Reached target network.target - Network. Oct 31 01:43:38.150867 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 01:43:38.150877 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 01:43:38.154561 systemd-networkd[770]: eth0: Link UP Oct 31 01:43:38.154575 systemd-networkd[770]: eth0: Gained carrier Oct 31 01:43:38.154587 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
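parse-ip-for-networkd above turns network settings from the kernel command line into systemd-networkd units before networkd enumerates eth0. As a rough illustration of the first half of that job, the sketch below splits a dracut-style ip= argument out of /proc/cmdline (field order per the documented ip=client:server:gateway:netmask:hostname:interface:autoconf syntax); on a DHCP-only boot like this one there is no such argument and the sketch prints nothing:

```python
# Rough sketch of the "write networkd units from cmdline" idea: find
# dracut-style ip= arguments on /proc/cmdline and split their fields.
# Generating the actual .network files is out of scope here.
with open("/proc/cmdline") as f:
    args = f.read().split()

for arg in args:
    if not arg.startswith("ip="):
        continue
    fields = arg[len("ip="):].split(":")
    names = ["client_ip", "server_ip", "gateway", "netmask",
             "hostname", "interface", "autoconf"]
    print(dict(zip(names, fields)))
```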
Oct 31 01:43:38.203031 systemd-networkd[770]: eth0: DHCPv4 address 10.230.44.66/30, gateway 10.230.44.65 acquired from 10.230.44.65 Oct 31 01:43:38.209346 ignition[675]: Ignition 2.19.0 Oct 31 01:43:38.210325 ignition[675]: Stage: fetch-offline Oct 31 01:43:38.211035 ignition[675]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:38.211066 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:38.211236 ignition[675]: parsed url from cmdline: "" Oct 31 01:43:38.211244 ignition[675]: no config URL provided Oct 31 01:43:38.213337 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 01:43:38.211253 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:43:38.211269 ignition[675]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:43:38.211278 ignition[675]: failed to fetch config: resource requires networking Oct 31 01:43:38.211547 ignition[675]: Ignition finished successfully Oct 31 01:43:38.228272 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Oct 31 01:43:38.259884 ignition[778]: Ignition 2.19.0 Oct 31 01:43:38.259908 ignition[778]: Stage: fetch Oct 31 01:43:38.261769 ignition[778]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:38.261796 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:38.261997 ignition[778]: parsed url from cmdline: "" Oct 31 01:43:38.262004 ignition[778]: no config URL provided Oct 31 01:43:38.262014 ignition[778]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 01:43:38.262031 ignition[778]: no config at "/usr/lib/ignition/user.ign" Oct 31 01:43:38.262222 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Oct 31 01:43:38.262248 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Oct 31 01:43:38.262296 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Oct 31 01:43:38.281164 ignition[778]: GET result: OK Oct 31 01:43:38.282069 ignition[778]: parsing config with SHA512: 3f34f6480eb7a43f4aa0311a109268f08eacf74f05e39f152a5092619d98c97295a007df8742c0fa42656950574a3f89d4291a7916960e6582d74f4445bbb993 Oct 31 01:43:38.289885 unknown[778]: fetched base config from "system" Oct 31 01:43:38.289903 unknown[778]: fetched base config from "system" Oct 31 01:43:38.290613 ignition[778]: fetch: fetch complete Oct 31 01:43:38.289915 unknown[778]: fetched user config from "openstack" Oct 31 01:43:38.290622 ignition[778]: fetch: fetch passed Oct 31 01:43:38.292568 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Oct 31 01:43:38.290708 ignition[778]: Ignition finished successfully Oct 31 01:43:38.300242 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 31 01:43:38.323211 ignition[784]: Ignition 2.19.0 Oct 31 01:43:38.323231 ignition[784]: Stage: kargs Oct 31 01:43:38.323462 ignition[784]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:38.323482 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:38.326447 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 01:43:38.325067 ignition[784]: kargs: kargs passed Oct 31 01:43:38.325135 ignition[784]: Ignition finished successfully Oct 31 01:43:38.335160 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
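The fetch stage above first waits for a config drive and then falls back to the OpenStack metadata service on the link-local address. A minimal sketch of that HTTP fallback, with an illustrative timeout and error handling:

```python
# Minimal illustration of the fetch stage's HTTP fallback: GET the OpenStack
# user_data from the link-local metadata service, as in the log above.
import urllib.request
import urllib.error

URL = "http://169.254.169.254/openstack/latest/user_data"

try:
    with urllib.request.urlopen(URL, timeout=10) as resp:
        user_data = resp.read()
    print(f"GET result: OK ({len(user_data)} bytes)")
except urllib.error.URLError as err:
    print("metadata service not reachable:", err)
```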
Oct 31 01:43:38.353163 ignition[792]: Ignition 2.19.0 Oct 31 01:43:38.353183 ignition[792]: Stage: disks Oct 31 01:43:38.353457 ignition[792]: no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:38.356371 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 01:43:38.353477 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:38.354818 ignition[792]: disks: disks passed Oct 31 01:43:38.358173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 01:43:38.354884 ignition[792]: Ignition finished successfully Oct 31 01:43:38.359729 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 01:43:38.361149 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 01:43:38.362601 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 01:43:38.363886 systemd[1]: Reached target basic.target - Basic System. Oct 31 01:43:38.372177 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 01:43:38.393007 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Oct 31 01:43:38.396901 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 01:43:38.405058 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 31 01:43:38.532963 kernel: EXT4-fs (vda9): mounted filesystem 044ea9d4-3e15-48f6-be3f-240ec74f6b62 r/w with ordered data mode. Quota mode: none. Oct 31 01:43:38.534102 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 01:43:38.535371 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 01:43:38.545075 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 01:43:38.549128 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 01:43:38.550441 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 01:43:38.552131 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Oct 31 01:43:38.557543 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 01:43:38.568119 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808) Oct 31 01:43:38.568153 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 01:43:38.568178 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:43:38.568197 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:43:38.557607 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 01:43:38.570580 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 01:43:38.574937 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 01:43:38.578265 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 01:43:38.583616 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
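systemd-fsck and the mount above locate the root filesystem through its label (ROOT), which is stored in the ext4 superblock. A sketch that reads the label directly, assuming the classic ext superblock layout (superblock at byte 1024, magic at offset 56, 16-byte volume name at offset 120) and an illustrative device path:

```python
# Read an ext2/3/4 volume label straight from the superblock; this is the
# value the /dev/disk/by-label/ROOT lookup ultimately resolves.  Offsets
# follow the classic ext superblock layout; the device path is illustrative.
import struct

DEV = "/dev/vda9"  # hypothetical; the log shows the root fs on vda9

with open(DEV, "rb") as f:
    f.seek(1024)   # superblock starts 1024 bytes into the device
    sb = f.read(1024)

magic = struct.unpack_from("<H", sb, 56)[0]
label = sb[120:136].split(b"\0", 1)[0].decode("ascii", "replace")

print(f"magic: {magic:#06x} (0xef53 expected)")
print("volume label:", label or "<none>")
```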
Oct 31 01:43:38.683340 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 01:43:38.693510 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Oct 31 01:43:38.700986 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 01:43:38.709656 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 01:43:38.817844 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 01:43:38.826049 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 01:43:38.829132 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 01:43:38.837166 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 01:43:38.839487 kernel: BTRFS info (device vda6): last unmount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 01:43:38.939358 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 01:43:38.947237 ignition[925]: INFO : Ignition 2.19.0 Oct 31 01:43:38.948990 ignition[925]: INFO : Stage: mount Oct 31 01:43:38.948990 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:38.948990 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:38.952044 ignition[925]: INFO : mount: mount passed Oct 31 01:43:38.952044 ignition[925]: INFO : Ignition finished successfully Oct 31 01:43:38.952379 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 01:43:39.512317 systemd-networkd[770]: eth0: Gained IPv6LL Oct 31 01:43:41.023052 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8b10:24:19ff:fee6:2c42/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8b10:24:19ff:fee6:2c42/64 assigned by NDisc. Oct 31 01:43:41.023067 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 31 01:43:45.757761 coreos-metadata[810]: Oct 31 01:43:45.757 WARN failed to locate config-drive, using the metadata service API instead Oct 31 01:43:45.781490 coreos-metadata[810]: Oct 31 01:43:45.781 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 31 01:43:45.797328 coreos-metadata[810]: Oct 31 01:43:45.797 INFO Fetch successful Oct 31 01:43:45.798657 coreos-metadata[810]: Oct 31 01:43:45.797 INFO wrote hostname srv-n5tpq.gb1.brightbox.com to /sysroot/etc/hostname Oct 31 01:43:45.801000 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Oct 31 01:43:45.801221 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Oct 31 01:43:45.810120 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 01:43:45.840247 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 01:43:45.857959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941) Oct 31 01:43:45.869209 kernel: BTRFS info (device vda6): first mount of filesystem 1a1fe00d-a5e5-45c6-a30a-fcc91f19f9ea Oct 31 01:43:45.869310 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 01:43:45.869332 kernel: BTRFS info (device vda6): using free space tree Oct 31 01:43:45.874960 kernel: BTRFS info (device vda6): auto enabling async discard Oct 31 01:43:45.878187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
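The coreos-metadata lines above show the hostname agent giving up on the config drive, fetching the hostname from the metadata API, and writing it into /sysroot/etc/hostname. A compact sketch of those two steps (the output path matches the log; the timeout is illustrative and not the agent's exact value):

```python
# Sketch of the hostname step logged above: fetch the instance hostname from
# the metadata service and write it where the host will read it on boot.
import urllib.request

URL = "http://169.254.169.254/latest/meta-data/hostname"
OUT = "/sysroot/etc/hostname"  # as in the log; adjust outside the initrd

with urllib.request.urlopen(URL, timeout=10) as resp:
    hostname = resp.read().decode().strip()

with open(OUT, "w") as f:
    f.write(hostname + "\n")

print("wrote hostname", hostname, "to", OUT)
```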
Oct 31 01:43:45.916511 ignition[959]: INFO : Ignition 2.19.0 Oct 31 01:43:45.916511 ignition[959]: INFO : Stage: files Oct 31 01:43:45.916511 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:45.916511 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:45.916511 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Oct 31 01:43:45.921522 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 31 01:43:45.921522 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 31 01:43:45.923698 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 31 01:43:45.923698 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 31 01:43:45.925872 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 31 01:43:45.925027 unknown[959]: wrote ssh authorized keys file for user: core Oct 31 01:43:45.927894 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 01:43:45.927894 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Oct 31 01:43:46.125006 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 31 01:43:46.372216 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Oct 31 01:43:46.372216 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:43:46.374691 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:43:46.387753 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Oct 31 01:43:46.771068 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Oct 31 01:43:49.235746 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Oct 31 01:43:49.235746 ignition[959]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Oct 31 01:43:49.240933 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 31 01:43:49.240933 ignition[959]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 31 01:43:49.240933 ignition[959]: INFO : files: files passed Oct 31 01:43:49.240933 ignition[959]: INFO : Ignition finished successfully Oct 31 01:43:49.240481 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 31 01:43:49.252226 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 31 01:43:49.258129 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 31 01:43:49.266504 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 31 01:43:49.266677 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 31 01:43:49.286864 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:43:49.286864 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:43:49.289915 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 31 01:43:49.290253 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 01:43:49.292419 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 31 01:43:49.299117 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 31 01:43:49.334162 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 31 01:43:49.334349 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Oct 31 01:43:49.336506 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Oct 31 01:43:49.337639 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 31 01:43:49.339355 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 31 01:43:49.346150 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 31 01:43:49.363852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 01:43:49.372124 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 31 01:43:49.386397 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:43:49.388152 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:43:49.389163 systemd[1]: Stopped target timers.target - Timer Units. Oct 31 01:43:49.390598 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 31 01:43:49.390781 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 31 01:43:49.392741 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 31 01:43:49.393606 systemd[1]: Stopped target basic.target - Basic System. Oct 31 01:43:49.395099 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 31 01:43:49.396468 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 01:43:49.397999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 31 01:43:49.399487 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 31 01:43:49.400977 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 01:43:49.402615 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 31 01:43:49.404064 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 31 01:43:49.405558 systemd[1]: Stopped target swap.target - Swaps. Oct 31 01:43:49.406942 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 31 01:43:49.407167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 31 01:43:49.409068 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 31 01:43:49.410631 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:43:49.411990 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 31 01:43:49.412388 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:43:49.413539 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 31 01:43:49.413739 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 31 01:43:49.415528 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 31 01:43:49.415740 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 31 01:43:49.417620 systemd[1]: ignition-files.service: Deactivated successfully. Oct 31 01:43:49.417794 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 31 01:43:49.429795 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 31 01:43:49.430512 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 31 01:43:49.430786 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 01:43:49.434151 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 31 01:43:49.435639 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Oct 31 01:43:49.435895 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:43:49.438318 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 31 01:43:49.438486 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 01:43:49.450389 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 31 01:43:49.450533 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 31 01:43:49.464882 ignition[1012]: INFO : Ignition 2.19.0 Oct 31 01:43:49.464882 ignition[1012]: INFO : Stage: umount Oct 31 01:43:49.464882 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 01:43:49.464882 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Oct 31 01:43:49.464882 ignition[1012]: INFO : umount: umount passed Oct 31 01:43:49.464882 ignition[1012]: INFO : Ignition finished successfully Oct 31 01:43:49.466315 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 31 01:43:49.466556 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 31 01:43:49.468664 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 31 01:43:49.468765 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 31 01:43:49.471668 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 31 01:43:49.471774 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 31 01:43:49.473007 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 31 01:43:49.473076 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Oct 31 01:43:49.474592 systemd[1]: Stopped target network.target - Network. Oct 31 01:43:49.479295 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 31 01:43:49.479397 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 01:43:49.480742 systemd[1]: Stopped target paths.target - Path Units. Oct 31 01:43:49.482050 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 31 01:43:49.482404 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:43:49.484310 systemd[1]: Stopped target slices.target - Slice Units. Oct 31 01:43:49.484978 systemd[1]: Stopped target sockets.target - Socket Units. Oct 31 01:43:49.485643 systemd[1]: iscsid.socket: Deactivated successfully. Oct 31 01:43:49.485740 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 01:43:49.488055 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 31 01:43:49.488122 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 01:43:49.489311 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 31 01:43:49.489400 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 31 01:43:49.491976 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 31 01:43:49.492074 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 31 01:43:49.494198 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 31 01:43:49.496260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 31 01:43:49.499293 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 31 01:43:49.500112 systemd-networkd[770]: eth0: DHCPv6 lease lost Oct 31 01:43:49.501121 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 31 01:43:49.501273 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Oct 31 01:43:49.502373 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 31 01:43:49.502559 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 31 01:43:49.504710 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 31 01:43:49.505244 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:43:49.506302 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 31 01:43:49.506382 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 31 01:43:49.514039 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 31 01:43:49.514799 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 31 01:43:49.514876 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 01:43:49.517229 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:43:49.521853 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 31 01:43:49.522199 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 31 01:43:49.525654 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 31 01:43:49.526338 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:43:49.534658 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 31 01:43:49.534864 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 31 01:43:49.536279 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 31 01:43:49.536334 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:43:49.538474 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 31 01:43:49.538550 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 31 01:43:49.541745 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 31 01:43:49.541816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 31 01:43:49.543111 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 31 01:43:49.543204 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 01:43:49.551089 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 31 01:43:49.551952 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 31 01:43:49.552046 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 31 01:43:49.556060 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 31 01:43:49.556131 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 31 01:43:49.557423 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 31 01:43:49.557486 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:43:49.560799 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 31 01:43:49.560865 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:43:49.561639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 01:43:49.561725 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:43:49.564698 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 31 01:43:49.564843 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Oct 31 01:43:49.567397 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 31 01:43:49.567546 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 31 01:43:49.569650 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 31 01:43:49.578128 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 31 01:43:49.589315 systemd[1]: Switching root. Oct 31 01:43:49.627566 systemd-journald[202]: Journal stopped Oct 31 01:43:51.195218 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Oct 31 01:43:51.195431 kernel: SELinux: policy capability network_peer_controls=1 Oct 31 01:43:51.195475 kernel: SELinux: policy capability open_perms=1 Oct 31 01:43:51.195504 kernel: SELinux: policy capability extended_socket_class=1 Oct 31 01:43:51.195530 kernel: SELinux: policy capability always_check_network=0 Oct 31 01:43:51.195555 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 31 01:43:51.195583 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 31 01:43:51.195602 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 31 01:43:51.195638 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 31 01:43:51.195678 kernel: audit: type=1403 audit(1761875029.866:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 31 01:43:51.195735 systemd[1]: Successfully loaded SELinux policy in 50.132ms. Oct 31 01:43:51.195789 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.503ms. Oct 31 01:43:51.195811 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Oct 31 01:43:51.195831 systemd[1]: Detected virtualization kvm. Oct 31 01:43:51.195850 systemd[1]: Detected architecture x86-64. Oct 31 01:43:51.195869 systemd[1]: Detected first boot. Oct 31 01:43:51.195897 systemd[1]: Hostname set to . Oct 31 01:43:51.195942 systemd[1]: Initializing machine ID from VM UUID. Oct 31 01:43:51.195965 zram_generator::config[1055]: No configuration found. Oct 31 01:43:51.195995 systemd[1]: Populated /etc with preset unit settings. Oct 31 01:43:51.196030 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 31 01:43:51.196051 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 31 01:43:51.196071 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 31 01:43:51.196100 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 31 01:43:51.196122 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Oct 31 01:43:51.196143 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 31 01:43:51.196184 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 31 01:43:51.196206 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 31 01:43:51.196225 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 31 01:43:51.196259 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 31 01:43:51.196281 systemd[1]: Created slice user.slice - User and Session Slice. 
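"Initializing machine ID from VM UUID" above means the first-boot machine ID is derived from the hypervisor-provided DMI product UUID rather than generated randomly. A sketch of that derivation, assuming the usual sysfs location (readable by root) and a simple dash-strip-and-lowercase normalization; the logic systemd actually applies handles more cases than this:

```python
# Sketch of deriving a first-boot machine ID from the VM's DMI product UUID,
# as the "Initializing machine ID from VM UUID" message describes.  The sysfs
# path is standard on Linux; the normalization here is an approximation.
with open("/sys/class/dmi/id/product_uuid") as f:
    product_uuid = f.read().strip()

machine_id = product_uuid.replace("-", "").lower()
print("product_uuid:", product_uuid)
print("candidate machine-id:", machine_id)
```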
Oct 31 01:43:51.196300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 01:43:51.196320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 01:43:51.196359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 31 01:43:51.196390 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 31 01:43:51.196423 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 31 01:43:51.196441 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 01:43:51.196459 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Oct 31 01:43:51.196478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 01:43:51.196496 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 31 01:43:51.196515 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 31 01:43:51.196544 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 31 01:43:51.196575 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 31 01:43:51.196593 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 01:43:51.196618 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 01:43:51.196669 systemd[1]: Reached target slices.target - Slice Units. Oct 31 01:43:51.196691 systemd[1]: Reached target swap.target - Swaps. Oct 31 01:43:51.196711 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 31 01:43:51.196739 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 31 01:43:51.196768 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 31 01:43:51.196806 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 01:43:51.196828 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 01:43:51.196847 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 31 01:43:51.196866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 31 01:43:51.196886 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 31 01:43:51.196945 systemd[1]: Mounting media.mount - External Media Directory... Oct 31 01:43:51.196989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:51.197012 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 31 01:43:51.197031 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 31 01:43:51.197051 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Oct 31 01:43:51.197081 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 31 01:43:51.197117 systemd[1]: Reached target machines.target - Containers. Oct 31 01:43:51.197138 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 31 01:43:51.197176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 31 01:43:51.197220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 01:43:51.197242 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 31 01:43:51.197282 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 01:43:51.197303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 01:43:51.197341 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 01:43:51.197361 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 31 01:43:51.197379 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 01:43:51.197398 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 31 01:43:51.197430 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 31 01:43:51.197458 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 31 01:43:51.197478 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 31 01:43:51.197496 systemd[1]: Stopped systemd-fsck-usr.service. Oct 31 01:43:51.197514 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 01:43:51.197546 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 01:43:51.197567 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 01:43:51.197586 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 31 01:43:51.197611 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 31 01:43:51.197675 systemd[1]: verity-setup.service: Deactivated successfully. Oct 31 01:43:51.197698 systemd[1]: Stopped verity-setup.service. Oct 31 01:43:51.197718 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:51.197750 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 31 01:43:51.197771 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 31 01:43:51.197790 systemd[1]: Mounted media.mount - External Media Directory. Oct 31 01:43:51.197809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 31 01:43:51.197847 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 31 01:43:51.197869 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 31 01:43:51.197888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 01:43:51.197907 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 31 01:43:51.197940 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 31 01:43:51.197960 kernel: loop: module loaded Oct 31 01:43:51.197990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:43:51.198026 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 01:43:51.198047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:43:51.198068 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 01:43:51.198101 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:43:51.198122 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Oct 31 01:43:51.198154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 01:43:51.198238 systemd-journald[1147]: Collecting audit messages is disabled. Oct 31 01:43:51.198311 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 01:43:51.198333 kernel: fuse: init (API version 7.39) Oct 31 01:43:51.198351 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 31 01:43:51.198389 systemd-journald[1147]: Journal started Oct 31 01:43:51.198444 systemd-journald[1147]: Runtime Journal (/run/log/journal/cb2ae7e7c8d74ae7a8391d028744279d) is 4.7M, max 38.0M, 33.2M free. Oct 31 01:43:50.757213 systemd[1]: Queued start job for default target multi-user.target. Oct 31 01:43:50.776366 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 31 01:43:50.777102 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 31 01:43:51.200940 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 01:43:51.213346 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 31 01:43:51.213717 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 31 01:43:51.226138 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 31 01:43:51.287716 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 01:43:51.291682 kernel: ACPI: bus type drm_connector registered Oct 31 01:43:51.303017 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 31 01:43:51.307276 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 31 01:43:51.308286 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 31 01:43:51.308481 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 01:43:51.310950 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Oct 31 01:43:51.319278 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 31 01:43:51.326079 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 31 01:43:51.328219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:43:51.343200 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 31 01:43:51.346663 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 31 01:43:51.347711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:43:51.350725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 31 01:43:51.352754 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 01:43:51.354257 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 01:43:51.366092 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 31 01:43:51.374104 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 31 01:43:51.378230 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 31 01:43:51.378755 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 01:43:51.380338 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 31 01:43:51.382049 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 31 01:43:51.383579 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 31 01:43:51.412364 systemd-journald[1147]: Time spent on flushing to /var/log/journal/cb2ae7e7c8d74ae7a8391d028744279d is 56.064ms for 1136 entries. Oct 31 01:43:51.412364 systemd-journald[1147]: System Journal (/var/log/journal/cb2ae7e7c8d74ae7a8391d028744279d) is 8.0M, max 584.8M, 576.8M free. Oct 31 01:43:51.498191 systemd-journald[1147]: Received client request to flush runtime journal. Oct 31 01:43:51.498262 kernel: loop0: detected capacity change from 0 to 142488 Oct 31 01:43:51.457651 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 31 01:43:51.461801 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 31 01:43:51.472155 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Oct 31 01:43:51.474002 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 01:43:51.500408 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 31 01:43:51.536523 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 01:43:51.537903 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 01:43:51.549808 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 31 01:43:51.562134 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 31 01:43:51.562959 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Oct 31 01:43:51.584500 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 31 01:43:51.595990 kernel: loop1: detected capacity change from 0 to 219144 Oct 31 01:43:51.596666 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 01:43:51.603873 udevadm[1203]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 31 01:43:51.663831 kernel: loop2: detected capacity change from 0 to 140768 Oct 31 01:43:51.661244 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Oct 31 01:43:51.661264 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. Oct 31 01:43:51.676988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 01:43:52.151240 kernel: loop3: detected capacity change from 0 to 8 Oct 31 01:43:52.200975 kernel: loop4: detected capacity change from 0 to 142488 Oct 31 01:43:52.236859 kernel: loop5: detected capacity change from 0 to 219144 Oct 31 01:43:52.305612 kernel: loop6: detected capacity change from 0 to 140768 Oct 31 01:43:52.328830 kernel: loop7: detected capacity change from 0 to 8 Oct 31 01:43:52.339455 (sd-merge)[1214]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Oct 31 01:43:52.345957 (sd-merge)[1214]: Merged extensions into '/usr'. Oct 31 01:43:52.360699 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Oct 31 01:43:52.360744 systemd[1]: Reloading... 
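The (sd-merge) lines above show systemd-sysext discovering the containerd, docker, kubernetes, and OEM extension images and merging them into /usr. The sketch below only lists candidate images from the commonly documented search directories (the directory set is an assumption); the real merge goes on to mount an overlay over /usr:

```python
# List candidate sysext images the way the "(sd-merge) Using extensions ..."
# step discovers them.  The search directories are taken from the commonly
# documented defaults; the actual merge builds an overlay over /usr.
import os

SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    if not os.path.isdir(d):
        continue
    for name in sorted(os.listdir(d)):
        print(os.path.join(d, name))
```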
Oct 31 01:43:52.434943 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 31 01:43:52.485984 zram_generator::config[1240]: No configuration found. Oct 31 01:43:52.685049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:43:52.750194 systemd[1]: Reloading finished in 388 ms. Oct 31 01:43:52.783838 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 31 01:43:52.785382 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 31 01:43:52.799278 systemd[1]: Starting ensure-sysext.service... Oct 31 01:43:52.805191 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 01:43:52.822233 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Oct 31 01:43:52.822261 systemd[1]: Reloading... Oct 31 01:43:52.874584 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 31 01:43:52.875784 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 31 01:43:52.878558 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 31 01:43:52.879169 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Oct 31 01:43:52.879433 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Oct 31 01:43:52.885461 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 01:43:52.885656 systemd-tmpfiles[1297]: Skipping /boot Oct 31 01:43:52.909874 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Oct 31 01:43:52.910653 systemd-tmpfiles[1297]: Skipping /boot Oct 31 01:43:52.968448 zram_generator::config[1329]: No configuration found. Oct 31 01:43:53.118206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:43:53.182096 systemd[1]: Reloading finished in 359 ms. Oct 31 01:43:53.201785 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 31 01:43:53.216034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 01:43:53.241946 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 01:43:53.247142 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 31 01:43:53.250119 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 31 01:43:53.256146 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 31 01:43:53.261013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 01:43:53.269155 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 31 01:43:53.278838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.279780 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Oct 31 01:43:53.284304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 31 01:43:53.296533 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 31 01:43:53.300841 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 31 01:43:53.301734 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:43:53.301892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.306180 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.306450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 01:43:53.306680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:43:53.316937 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 31 01:43:53.317857 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.325463 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.325883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 31 01:43:53.333264 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 31 01:43:53.334978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 31 01:43:53.335159 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 31 01:43:53.336821 systemd[1]: Finished ensure-sysext.service. Oct 31 01:43:53.338428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 31 01:43:53.338678 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 31 01:43:53.343444 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 31 01:43:53.353337 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 31 01:43:53.361147 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 31 01:43:53.384531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 31 01:43:53.384885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 31 01:43:53.394416 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 31 01:43:53.394898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 31 01:43:53.397914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 31 01:43:53.401004 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 31 01:43:53.412063 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 31 01:43:53.413321 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Oct 31 01:43:53.413556 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 31 01:43:53.424370 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Oct 31 01:43:53.439602 augenrules[1417]: No rules Oct 31 01:43:53.438731 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 31 01:43:53.443411 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 31 01:43:53.444551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 31 01:43:53.446965 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 31 01:43:53.463291 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 31 01:43:53.474034 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 01:43:53.485161 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 01:43:53.567871 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 31 01:43:53.568789 systemd[1]: Reached target time-set.target - System Time Set. Oct 31 01:43:53.654582 systemd-resolved[1387]: Positive Trust Anchors: Oct 31 01:43:53.654611 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 01:43:53.654654 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 01:43:53.656263 systemd-networkd[1438]: lo: Link UP Oct 31 01:43:53.656699 systemd-networkd[1438]: lo: Gained carrier Oct 31 01:43:53.659051 systemd-networkd[1438]: Enumeration completed Oct 31 01:43:53.659276 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 01:43:53.680814 systemd-resolved[1387]: Using system hostname 'srv-n5tpq.gb1.brightbox.com'. Oct 31 01:43:53.681138 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 31 01:43:53.692532 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 01:43:53.694098 systemd[1]: Reached target network.target - Network. Oct 31 01:43:53.694745 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 01:43:53.725949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1431) Oct 31 01:43:53.862194 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 31 01:43:53.862208 systemd-networkd[1438]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 01:43:53.864438 systemd-networkd[1438]: eth0: Link UP Oct 31 01:43:53.864449 systemd-networkd[1438]: eth0: Gained carrier Oct 31 01:43:53.864468 systemd-networkd[1438]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
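[editor's note] systemd-resolved lists its negative trust anchors above: zones (private-use reverse zones, home.arpa, local names, etc.) for which DNSSEC validation is not attempted. A toy way to see the effect is a per-label suffix match against that set; this is an illustration only, and the anchor set below is abbreviated from the full list in the log:

#!/usr/bin/env python3
"""Toy negative-trust-anchor check: names under a listed zone skip DNSSEC
validation. Simple label-suffix matching, not systemd-resolved's logic."""

NEGATIVE_TRUST_ANCHORS = {  # abbreviated from the log above
    "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa", "d.f.ip6.arpa",
    "ipv4only.arpa", "resolver.arpa", "corp", "home", "internal",
    "intranet", "lan", "local", "private", "test",
}

def under_negative_anchor(name: str) -> bool:
    labels = name.rstrip(".").lower().split(".")
    # Check the name itself and every parent zone against the anchor set.
    return any(".".join(labels[i:]) in NEGATIVE_TRUST_ANCHORS for i in range(len(labels)))

if __name__ == "__main__":
    for name in ("printer.lan", "10.0.0.10.in-addr.arpa", "example.com"):
        print(name, "->", "validation skipped" if under_negative_anchor(name) else "validated")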
Oct 31 01:43:53.865178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 01:43:53.874171 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 31 01:43:53.880048 systemd-networkd[1438]: eth0: DHCPv4 address 10.230.44.66/30, gateway 10.230.44.65 acquired from 10.230.44.65 Oct 31 01:43:53.881417 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Oct 31 01:43:53.899590 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 31 01:43:53.919689 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 31 01:43:53.951944 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 31 01:43:53.968945 kernel: ACPI: button: Power Button [PWRF] Oct 31 01:43:53.973943 kernel: mousedev: PS/2 mouse device common for all mice Oct 31 01:43:54.010981 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 31 01:43:54.017951 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Oct 31 01:43:54.018345 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 31 01:43:54.024957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 31 01:43:54.136407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 01:43:54.312593 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 31 01:43:54.362794 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 01:43:54.370286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 31 01:43:54.392602 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:43:54.427361 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 31 01:43:54.430315 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 01:43:54.431085 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 01:43:54.432032 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 31 01:43:54.432864 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 31 01:43:54.434076 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 31 01:43:54.435208 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 31 01:43:54.435977 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 31 01:43:54.436726 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 31 01:43:54.436792 systemd[1]: Reached target paths.target - Path Units. Oct 31 01:43:54.437437 systemd[1]: Reached target timers.target - Timer Units. Oct 31 01:43:54.441037 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 31 01:43:54.444912 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 31 01:43:54.456909 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 31 01:43:54.460063 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... 
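[editor's note] The DHCPv4 lease above is 10.230.44.66/30 with gateway 10.230.44.65, i.e. a point-to-point-sized subnet: a /30 holds four addresses, only two of them usable hosts. A quick check with the standard library:

#!/usr/bin/env python3
"""Quick check of the /30 lease logged above using the ipaddress module."""
import ipaddress

iface = ipaddress.ip_interface("10.230.44.66/30")
net = iface.network

print("network:  ", net)                                  # 10.230.44.64/30
print("usable:   ", [str(h) for h in net.hosts()])        # ['10.230.44.65', '10.230.44.66']
print("broadcast:", net.broadcast_address)                # 10.230.44.67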
Oct 31 01:43:54.461522 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 31 01:43:54.462358 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 01:43:54.463035 systemd[1]: Reached target basic.target - Basic System. Oct 31 01:43:54.463721 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 31 01:43:54.463781 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 31 01:43:54.467100 systemd[1]: Starting containerd.service - containerd container runtime... Oct 31 01:43:54.477196 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Oct 31 01:43:54.477861 lvm[1474]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 31 01:43:54.482095 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 31 01:43:54.485999 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 31 01:43:54.494176 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 31 01:43:54.496771 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 31 01:43:54.498826 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 31 01:43:54.504078 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 31 01:43:54.513849 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 31 01:43:54.525227 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 31 01:43:54.538172 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 31 01:43:54.540212 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 31 01:43:54.545561 jq[1478]: false Oct 31 01:43:54.542717 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 31 01:43:54.544857 systemd[1]: Starting update-engine.service - Update Engine... Oct 31 01:43:54.554579 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 31 01:43:54.560454 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 31 01:43:54.564377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 31 01:43:54.573730 jq[1489]: true Oct 31 01:43:54.565998 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 31 01:43:54.570616 dbus-daemon[1477]: [system] SELinux support is enabled Oct 31 01:43:54.572297 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 31 01:43:54.576582 dbus-daemon[1477]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1438 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 31 01:43:54.604489 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Oct 31 01:43:54.604562 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 31 01:43:54.611860 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 31 01:43:54.617305 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 31 01:43:54.617347 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 31 01:43:54.631878 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 31 01:43:54.632167 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 31 01:43:54.635687 jq[1495]: true Oct 31 01:43:54.638876 update_engine[1487]: I20251031 01:43:54.637116 1487 main.cc:92] Flatcar Update Engine starting Oct 31 01:43:54.641606 systemd[1]: motdgen.service: Deactivated successfully. Oct 31 01:43:54.641859 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 31 01:43:54.653998 update_engine[1487]: I20251031 01:43:54.651212 1487 update_check_scheduler.cc:74] Next update check in 9m35s Oct 31 01:43:54.655774 systemd[1]: Started update-engine.service - Update Engine. Oct 31 01:43:54.657968 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 31 01:43:54.669199 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Oct 31 01:43:54.678150 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 31 01:43:54.690288 extend-filesystems[1479]: Found loop4 Oct 31 01:43:54.690288 extend-filesystems[1479]: Found loop5 Oct 31 01:43:54.696113 tar[1507]: linux-amd64/LICENSE Oct 31 01:43:54.696113 tar[1507]: linux-amd64/helm Oct 31 01:43:54.698457 extend-filesystems[1479]: Found loop6 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found loop7 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda1 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda2 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda3 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found usr Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda4 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda6 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda7 Oct 31 01:43:54.699226 extend-filesystems[1479]: Found vda9 Oct 31 01:43:54.699226 extend-filesystems[1479]: Checking size of /dev/vda9 Oct 31 01:43:54.915457 extend-filesystems[1479]: Resized partition /dev/vda9 Oct 31 01:43:54.936608 extend-filesystems[1533]: resize2fs 1.47.1 (20-May-2024) Oct 31 01:43:54.952961 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Oct 31 01:43:54.960959 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1431) Oct 31 01:43:55.011768 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Oct 31 01:43:55.012350 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 31 01:43:55.013279 systemd-logind[1485]: New seat seat0. Oct 31 01:43:55.017687 systemd[1]: Started systemd-logind.service - User Login Management. 
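[editor's note] The extend-filesystems "Found loop4 ... Found vda9" lines above come from the service enumerating block devices before deciding what to grow. The real unit is a Flatcar script; the sketch below only reproduces the enumeration step by reading /proc/partitions:

#!/usr/bin/env python3
"""Approximate the device scan behind the "Found ..." lines: list block devices
and partitions from /proc/partitions (sizes there are in 1 KiB blocks)."""

def list_block_devices():
    devices = []
    with open("/proc/partitions") as f:
        for line in f.read().splitlines()[2:]:  # skip the header and blank line
            fields = line.split()
            if len(fields) == 4 and fields[0].isdigit():
                _major, _minor, blocks_1k, name = fields
                devices.append((name, int(blocks_1k)))
    return devices

if __name__ == "__main__":
    for name, blocks_1k in list_block_devices():
        print(f"Found {name} ({blocks_1k / (1024 * 1024):.1f} GiB)")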
Oct 31 01:43:55.036436 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Oct 31 01:43:55.044461 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 31 01:43:55.128268 systemd[1]: Starting sshkeys.service... Oct 31 01:43:55.130222 systemd-networkd[1438]: eth0: Gained IPv6LL Oct 31 01:43:55.142669 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Oct 31 01:43:55.161384 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 31 01:43:55.164881 systemd[1]: Reached target network-online.target - Network is Online. Oct 31 01:43:55.178481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:43:55.186255 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 31 01:43:55.236130 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Oct 31 01:43:55.251503 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Oct 31 01:43:55.339798 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 01:43:55.357562 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 31 01:43:55.358766 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Oct 31 01:43:55.362271 dbus-daemon[1477]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1514 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 31 01:43:55.374685 systemd[1]: Starting polkit.service - Authorization Manager... Oct 31 01:43:55.398291 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 31 01:43:55.408953 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Oct 31 01:43:55.425079 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 31 01:43:55.457096 extend-filesystems[1533]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 31 01:43:55.457096 extend-filesystems[1533]: old_desc_blocks = 1, new_desc_blocks = 8 Oct 31 01:43:55.457096 extend-filesystems[1533]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Oct 31 01:43:55.471031 extend-filesystems[1479]: Resized filesystem in /dev/vda9 Oct 31 01:43:55.466574 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 31 01:43:55.466894 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 31 01:43:55.480516 polkitd[1556]: Started polkitd version 121 Oct 31 01:43:55.625274 polkitd[1556]: Loading rules from directory /etc/polkit-1/rules.d Oct 31 01:43:55.625413 polkitd[1556]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 31 01:43:55.626330 polkitd[1556]: Finished loading, compiling and executing 2 rules Oct 31 01:43:55.630327 dbus-daemon[1477]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 31 01:43:55.630777 polkitd[1556]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 31 01:43:55.630622 systemd[1]: Started polkit.service - Authorization Manager. Oct 31 01:43:55.763448 systemd-hostnamed[1514]: Hostname set to (static) Oct 31 01:43:55.775831 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. 
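[editor's note] The online resize above grows EXT4 on /dev/vda9 from 1617920 to 15121403 blocks at a 4 KiB block size. Converting the block counts makes the before/after sizes concrete (this only does arithmetic, it does not touch the filesystem):

#!/usr/bin/env python3
"""Sanity check on the resize2fs/kernel messages above: block counts -> GiB."""

BLOCK_SIZE = 4096        # "(4k) blocks" per the messages above
OLD_BLOCKS = 1_617_920
NEW_BLOCKS = 15_121_403

def gib(blocks: int) -> float:
    return blocks * BLOCK_SIZE / 2**30

print(f"before:   {gib(OLD_BLOCKS):.2f} GiB")
print(f"after:    {gib(NEW_BLOCKS):.2f} GiB")
print(f"grown by: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")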
Oct 31 01:43:55.786778 systemd-networkd[1438]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8b10:24:19ff:fee6:2c42/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8b10:24:19ff:fee6:2c42/64 assigned by NDisc. Oct 31 01:43:55.786790 systemd-networkd[1438]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Oct 31 01:43:56.005877 containerd[1499]: time="2025-10-31T01:43:56.005732682Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Oct 31 01:43:56.155058 containerd[1499]: time="2025-10-31T01:43:56.154103960Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.158456 containerd[1499]: time="2025-10-31T01:43:56.158410644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:43:56.158550 containerd[1499]: time="2025-10-31T01:43:56.158454400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 31 01:43:56.158550 containerd[1499]: time="2025-10-31T01:43:56.158483380Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 31 01:43:56.159057 containerd[1499]: time="2025-10-31T01:43:56.159029366Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 31 01:43:56.159111 containerd[1499]: time="2025-10-31T01:43:56.159068634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.159251 containerd[1499]: time="2025-10-31T01:43:56.159222456Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:43:56.159489 containerd[1499]: time="2025-10-31T01:43:56.159254798Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.159824 containerd[1499]: time="2025-10-31T01:43:56.159777524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:43:56.159875 containerd[1499]: time="2025-10-31T01:43:56.159826222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161088 containerd[1499]: time="2025-10-31T01:43:56.159851387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161088 containerd[1499]: time="2025-10-31T01:43:56.160140685Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161088 containerd[1499]: time="2025-10-31T01:43:56.160321030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161419 containerd[1499]: time="2025-10-31T01:43:56.161390845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161622 containerd[1499]: time="2025-10-31T01:43:56.161590448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 31 01:43:56.161671 containerd[1499]: time="2025-10-31T01:43:56.161639268Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 31 01:43:56.164934 containerd[1499]: time="2025-10-31T01:43:56.162599119Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 31 01:43:56.164934 containerd[1499]: time="2025-10-31T01:43:56.162792104Z" level=info msg="metadata content store policy set" policy=shared Oct 31 01:43:56.169050 containerd[1499]: time="2025-10-31T01:43:56.168852784Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 31 01:43:56.169050 containerd[1499]: time="2025-10-31T01:43:56.168962801Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 31 01:43:56.169187 containerd[1499]: time="2025-10-31T01:43:56.168992446Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 31 01:43:56.169187 containerd[1499]: time="2025-10-31T01:43:56.169091901Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 31 01:43:56.169187 containerd[1499]: time="2025-10-31T01:43:56.169117858Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 31 01:43:56.169349 containerd[1499]: time="2025-10-31T01:43:56.169321142Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 31 01:43:56.172363 containerd[1499]: time="2025-10-31T01:43:56.172326765Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 31 01:43:56.172598 containerd[1499]: time="2025-10-31T01:43:56.172562571Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 31 01:43:56.172649 containerd[1499]: time="2025-10-31T01:43:56.172603165Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 31 01:43:56.172649 containerd[1499]: time="2025-10-31T01:43:56.172631420Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 31 01:43:56.172744 containerd[1499]: time="2025-10-31T01:43:56.172661343Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172744 containerd[1499]: time="2025-10-31T01:43:56.172688596Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172744 containerd[1499]: time="2025-10-31T01:43:56.172713010Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Oct 31 01:43:56.172744 containerd[1499]: time="2025-10-31T01:43:56.172735635Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172889 containerd[1499]: time="2025-10-31T01:43:56.172761753Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172889 containerd[1499]: time="2025-10-31T01:43:56.172787508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172889 containerd[1499]: time="2025-10-31T01:43:56.172811239Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172889 containerd[1499]: time="2025-10-31T01:43:56.172833928Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 31 01:43:56.172889 containerd[1499]: time="2025-10-31T01:43:56.172873193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173053 containerd[1499]: time="2025-10-31T01:43:56.172901341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173053 containerd[1499]: time="2025-10-31T01:43:56.172946369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173053 containerd[1499]: time="2025-10-31T01:43:56.172973264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173053 containerd[1499]: time="2025-10-31T01:43:56.172997350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173053 containerd[1499]: time="2025-10-31T01:43:56.173023199Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173070012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173100303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173126148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173155194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173177091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173202271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173230 containerd[1499]: time="2025-10-31T01:43:56.173225838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173459 containerd[1499]: time="2025-10-31T01:43:56.173252906Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Oct 31 01:43:56.173459 containerd[1499]: time="2025-10-31T01:43:56.173307626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173459 containerd[1499]: time="2025-10-31T01:43:56.173335891Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173459 containerd[1499]: time="2025-10-31T01:43:56.173357679Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 31 01:43:56.173459 containerd[1499]: time="2025-10-31T01:43:56.173447770Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 31 01:43:56.173629 containerd[1499]: time="2025-10-31T01:43:56.173484748Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 31 01:43:56.173629 containerd[1499]: time="2025-10-31T01:43:56.173509715Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 31 01:43:56.173629 containerd[1499]: time="2025-10-31T01:43:56.173545130Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 31 01:43:56.173629 containerd[1499]: time="2025-10-31T01:43:56.173568433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.173629 containerd[1499]: time="2025-10-31T01:43:56.173599807Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 31 01:43:56.173780 containerd[1499]: time="2025-10-31T01:43:56.173631281Z" level=info msg="NRI interface is disabled by configuration." Oct 31 01:43:56.173780 containerd[1499]: time="2025-10-31T01:43:56.173655526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 31 01:43:56.181774 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.179060523Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.179197847Z" level=info msg="Connect containerd service" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.179264208Z" level=info msg="using legacy CRI server" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.179282128Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.179410117Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.180440710Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 01:43:56.185065 containerd[1499]: 
time="2025-10-31T01:43:56.181132368Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181213724Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181308676Z" level=info msg="Start subscribing containerd event" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181375415Z" level=info msg="Start recovering state" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181479076Z" level=info msg="Start event monitor" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181509183Z" level=info msg="Start snapshots syncer" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181548863Z" level=info msg="Start cni network conf syncer for default" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.181564799Z" level=info msg="Start streaming server" Oct 31 01:43:56.185065 containerd[1499]: time="2025-10-31T01:43:56.183002253Z" level=info msg="containerd successfully booted in 0.180000s" Oct 31 01:43:56.404097 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 31 01:43:56.473643 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 31 01:43:56.510343 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 31 01:43:56.528787 systemd[1]: Started sshd@0-10.230.44.66:22-147.75.109.163:54070.service - OpenSSH per-connection server daemon (147.75.109.163:54070). Oct 31 01:43:56.533169 systemd[1]: issuegen.service: Deactivated successfully. Oct 31 01:43:56.534664 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 31 01:43:56.566554 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 31 01:43:56.619243 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 31 01:43:56.631351 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 31 01:43:56.644484 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 31 01:43:56.646858 systemd[1]: Reached target getty.target - Login Prompts. Oct 31 01:43:56.999000 tar[1507]: linux-amd64/README.md Oct 31 01:43:57.149665 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 01:43:57.433387 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Oct 31 01:43:57.529576 sshd[1591]: Accepted publickey for core from 147.75.109.163 port 54070 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:43:57.530309 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:43:57.548740 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 01:43:57.571073 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 01:43:57.582454 systemd-logind[1485]: New session 1 of user core. Oct 31 01:43:57.697573 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 01:43:57.710226 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 01:43:57.715135 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 31 01:43:57.729200 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:43:57.732551 (systemd)[1607]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 01:43:57.885785 systemd[1607]: Queued start job for default target default.target. Oct 31 01:43:57.894345 systemd[1607]: Created slice app.slice - User Application Slice. Oct 31 01:43:57.894385 systemd[1607]: Reached target paths.target - Paths. Oct 31 01:43:57.894408 systemd[1607]: Reached target timers.target - Timers. Oct 31 01:43:57.897099 systemd[1607]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 01:43:57.925751 systemd[1607]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 01:43:57.926625 systemd[1607]: Reached target sockets.target - Sockets. Oct 31 01:43:57.926651 systemd[1607]: Reached target basic.target - Basic System. Oct 31 01:43:57.926728 systemd[1607]: Reached target default.target - Main User Target. Oct 31 01:43:57.926811 systemd[1607]: Startup finished in 183ms. Oct 31 01:43:57.927190 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 31 01:43:57.938945 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 01:43:58.480700 kubelet[1608]: E1031 01:43:58.480221 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:43:58.483349 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:43:58.483632 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:43:58.484275 systemd[1]: kubelet.service: Consumed 1.858s CPU time. Oct 31 01:43:58.590050 systemd[1]: Started sshd@1-10.230.44.66:22-147.75.109.163:54076.service - OpenSSH per-connection server daemon (147.75.109.163:54076). Oct 31 01:43:59.521555 sshd[1628]: Accepted publickey for core from 147.75.109.163 port 54076 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:43:59.524402 sshd[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:43:59.531979 systemd-logind[1485]: New session 2 of user core. Oct 31 01:43:59.543332 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 01:44:00.152336 sshd[1628]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:00.156354 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Oct 31 01:44:00.157122 systemd[1]: sshd@1-10.230.44.66:22-147.75.109.163:54076.service: Deactivated successfully. Oct 31 01:44:00.159726 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 01:44:00.162074 systemd-logind[1485]: Removed session 2. Oct 31 01:44:00.312390 systemd[1]: Started sshd@2-10.230.44.66:22-147.75.109.163:57602.service - OpenSSH per-connection server daemon (147.75.109.163:57602). Oct 31 01:44:01.201780 sshd[1635]: Accepted publickey for core from 147.75.109.163 port 57602 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:01.203781 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:01.210207 systemd-logind[1485]: New session 3 of user core. 
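[editor's note] The kubelet exit above is the usual pre-bootstrap loop: the unit starts, finds no /var/lib/kubelet/config.yaml (presumably because nothing has bootstrapped the node and written it yet), fails, and systemd schedules a restart. A minimal sketch of that precondition check only:

#!/usr/bin/env python3
"""Report whether the kubelet config file the error above refers to exists."""
import os
import sys

KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"

def main() -> int:
    if not os.path.exists(KUBELET_CONFIG):
        print(f"failed to load Kubelet config file {KUBELET_CONFIG}: "
              "no such file or directory (expected until the node is bootstrapped)")
        return 1
    print(f"{KUBELET_CONFIG} present ({os.path.getsize(KUBELET_CONFIG)} bytes)")
    return 0

if __name__ == "__main__":
    sys.exit(main())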
Oct 31 01:44:01.221296 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 31 01:44:01.714095 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 01:44:01.716535 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Oct 31 01:44:01.722734 systemd-logind[1485]: New session 4 of user core. Oct 31 01:44:01.730267 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 01:44:01.734336 systemd-logind[1485]: New session 5 of user core. Oct 31 01:44:01.739159 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 01:44:01.830160 sshd[1635]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:01.834382 systemd[1]: sshd@2-10.230.44.66:22-147.75.109.163:57602.service: Deactivated successfully. Oct 31 01:44:01.836625 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 01:44:01.838529 systemd-logind[1485]: Session 3 logged out. Waiting for processes to exit. Oct 31 01:44:01.839981 systemd-logind[1485]: Removed session 3. Oct 31 01:44:01.890650 coreos-metadata[1476]: Oct 31 01:44:01.890 WARN failed to locate config-drive, using the metadata service API instead Oct 31 01:44:01.915420 coreos-metadata[1476]: Oct 31 01:44:01.915 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Oct 31 01:44:01.924441 coreos-metadata[1476]: Oct 31 01:44:01.924 INFO Fetch failed with 404: resource not found Oct 31 01:44:01.924441 coreos-metadata[1476]: Oct 31 01:44:01.924 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Oct 31 01:44:01.925078 coreos-metadata[1476]: Oct 31 01:44:01.924 INFO Fetch successful Oct 31 01:44:01.925225 coreos-metadata[1476]: Oct 31 01:44:01.925 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Oct 31 01:44:01.943206 coreos-metadata[1476]: Oct 31 01:44:01.943 INFO Fetch successful Oct 31 01:44:01.943528 coreos-metadata[1476]: Oct 31 01:44:01.943 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Oct 31 01:44:01.959248 coreos-metadata[1476]: Oct 31 01:44:01.959 INFO Fetch successful Oct 31 01:44:01.959611 coreos-metadata[1476]: Oct 31 01:44:01.959 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Oct 31 01:44:01.976448 coreos-metadata[1476]: Oct 31 01:44:01.976 INFO Fetch successful Oct 31 01:44:01.977209 coreos-metadata[1476]: Oct 31 01:44:01.977 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Oct 31 01:44:01.994410 coreos-metadata[1476]: Oct 31 01:44:01.994 INFO Fetch successful Oct 31 01:44:02.025100 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Oct 31 01:44:02.026024 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
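[editor's note] Once the config-drive lookup fails, coreos-metadata above falls back to plain HTTP GETs against the link-local metadata service. The endpoints below are taken from the log; the retry behaviour is an assumption for illustration, and the real agent is a separate binary, not this script:

#!/usr/bin/env python3
"""Sketch of the metadata lookups seen above: GET each key from the
169.254.169.254 metadata service with a small retry loop."""
import time
import urllib.error
import urllib.request
from typing import Optional

BASE = "http://169.254.169.254/latest/meta-data"
KEYS = ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4")

def fetch(url: str, attempts: int = 3, delay: float = 1.0) -> Optional[str]:
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.read().decode().strip()
        except (urllib.error.URLError, OSError) as exc:
            print(f"attempt #{attempt} for {url} failed: {exc}")
            time.sleep(delay)
    return None

if __name__ == "__main__":
    for key in KEYS:
        value = fetch(f"{BASE}/{key}")
        print(f"{key}: {value if value is not None else '<unavailable>'}")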
Oct 31 01:44:02.514267 coreos-metadata[1544]: Oct 31 01:44:02.514 WARN failed to locate config-drive, using the metadata service API instead Oct 31 01:44:02.537105 coreos-metadata[1544]: Oct 31 01:44:02.537 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Oct 31 01:44:02.566415 coreos-metadata[1544]: Oct 31 01:44:02.566 INFO Fetch successful Oct 31 01:44:02.566627 coreos-metadata[1544]: Oct 31 01:44:02.566 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Oct 31 01:44:02.598415 coreos-metadata[1544]: Oct 31 01:44:02.598 INFO Fetch successful Oct 31 01:44:02.600340 unknown[1544]: wrote ssh authorized keys file for user: core Oct 31 01:44:02.628039 update-ssh-keys[1674]: Updated "/home/core/.ssh/authorized_keys" Oct 31 01:44:02.628829 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Oct 31 01:44:02.631578 systemd[1]: Finished sshkeys.service. Oct 31 01:44:02.634453 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 01:44:02.637166 systemd[1]: Startup finished in 1.629s (kernel) + 15.116s (initrd) + 12.819s (userspace) = 29.566s. Oct 31 01:44:08.502351 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 01:44:08.513436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:08.819300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:08.839439 (kubelet)[1686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:44:08.916312 kubelet[1686]: E1031 01:44:08.916214 1686 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:44:08.920712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:44:08.921127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:44:11.992673 systemd[1]: Started sshd@3-10.230.44.66:22-147.75.109.163:38330.service - OpenSSH per-connection server daemon (147.75.109.163:38330). Oct 31 01:44:12.906192 sshd[1694]: Accepted publickey for core from 147.75.109.163 port 38330 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:12.908507 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:12.917263 systemd-logind[1485]: New session 6 of user core. Oct 31 01:44:12.926277 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 31 01:44:13.535033 sshd[1694]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:13.539805 systemd[1]: sshd@3-10.230.44.66:22-147.75.109.163:38330.service: Deactivated successfully. Oct 31 01:44:13.541749 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 01:44:13.542588 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Oct 31 01:44:13.543900 systemd-logind[1485]: Removed session 6. Oct 31 01:44:13.698343 systemd[1]: Started sshd@4-10.230.44.66:22-147.75.109.163:38332.service - OpenSSH per-connection server daemon (147.75.109.163:38332). 
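[editor's note] The "Startup finished in 1.629s (kernel) + 15.116s (initrd) + 12.819s (userspace) = 29.566s" line above does not re-add exactly from the printed components, because each phase is rounded to milliseconds while the total comes from the raw timestamps. A quick arithmetic check of the discrepancy:

#!/usr/bin/env python3
"""Re-add the rounded startup phases from the log and compare with the total."""

phases = {"kernel": 1.629, "initrd": 15.116, "userspace": 12.819}
reported_total = 29.566

naive_sum = sum(phases.values())
print(f"sum of rounded phases: {naive_sum:.3f}s")
print(f"reported total:        {reported_total:.3f}s")
print(f"difference:            {abs(reported_total - naive_sum) * 1000:.0f} ms (rounding)")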
Oct 31 01:44:14.590649 sshd[1701]: Accepted publickey for core from 147.75.109.163 port 38332 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:14.592676 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:14.598534 systemd-logind[1485]: New session 7 of user core. Oct 31 01:44:14.610773 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 01:44:15.207404 sshd[1701]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:15.212858 systemd[1]: sshd@4-10.230.44.66:22-147.75.109.163:38332.service: Deactivated successfully. Oct 31 01:44:15.215542 systemd[1]: session-7.scope: Deactivated successfully. Oct 31 01:44:15.216603 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Oct 31 01:44:15.218446 systemd-logind[1485]: Removed session 7. Oct 31 01:44:15.374820 systemd[1]: Started sshd@5-10.230.44.66:22-147.75.109.163:38348.service - OpenSSH per-connection server daemon (147.75.109.163:38348). Oct 31 01:44:16.278335 sshd[1708]: Accepted publickey for core from 147.75.109.163 port 38348 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:16.280622 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:16.287259 systemd-logind[1485]: New session 8 of user core. Oct 31 01:44:16.295230 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 01:44:16.912450 sshd[1708]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:16.917316 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Oct 31 01:44:16.918628 systemd[1]: sshd@5-10.230.44.66:22-147.75.109.163:38348.service: Deactivated successfully. Oct 31 01:44:16.920845 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 01:44:16.922207 systemd-logind[1485]: Removed session 8. Oct 31 01:44:17.068231 systemd[1]: Started sshd@6-10.230.44.66:22-147.75.109.163:38352.service - OpenSSH per-connection server daemon (147.75.109.163:38352). Oct 31 01:44:17.978123 sshd[1715]: Accepted publickey for core from 147.75.109.163 port 38352 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:17.980289 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:17.988670 systemd-logind[1485]: New session 9 of user core. Oct 31 01:44:17.995197 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 01:44:18.488182 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 01:44:18.488655 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:44:18.506684 sudo[1718]: pam_unix(sudo:session): session closed for user root Oct 31 01:44:18.653492 sshd[1715]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:18.659212 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Oct 31 01:44:18.660142 systemd[1]: sshd@6-10.230.44.66:22-147.75.109.163:38352.service: Deactivated successfully. Oct 31 01:44:18.662433 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 01:44:18.663878 systemd-logind[1485]: Removed session 9. Oct 31 01:44:18.812260 systemd[1]: Started sshd@7-10.230.44.66:22-147.75.109.163:38354.service - OpenSSH per-connection server daemon (147.75.109.163:38354). Oct 31 01:44:19.002111 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 31 01:44:19.009220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:19.180654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:19.194465 (kubelet)[1733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:44:19.293620 kubelet[1733]: E1031 01:44:19.293520 1733 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:44:19.296504 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:44:19.296770 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 01:44:19.727187 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 38354 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:19.729252 sshd[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:19.737058 systemd-logind[1485]: New session 10 of user core. Oct 31 01:44:19.746191 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 01:44:20.210603 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 01:44:20.211125 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:44:20.216786 sudo[1742]: pam_unix(sudo:session): session closed for user root Oct 31 01:44:20.224914 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 31 01:44:20.225370 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:44:20.243297 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Oct 31 01:44:20.246911 auditctl[1745]: No rules Oct 31 01:44:20.247448 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 01:44:20.247749 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Oct 31 01:44:20.255851 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Oct 31 01:44:20.288448 augenrules[1763]: No rules Oct 31 01:44:20.290267 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Oct 31 01:44:20.291744 sudo[1741]: pam_unix(sudo:session): session closed for user root Oct 31 01:44:20.438082 sshd[1723]: pam_unix(sshd:session): session closed for user core Oct 31 01:44:20.441770 systemd[1]: sshd@7-10.230.44.66:22-147.75.109.163:38354.service: Deactivated successfully. Oct 31 01:44:20.444113 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 01:44:20.446263 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Oct 31 01:44:20.447907 systemd-logind[1485]: Removed session 10. Oct 31 01:44:20.592753 systemd[1]: Started sshd@8-10.230.44.66:22-147.75.109.163:48104.service - OpenSSH per-connection server daemon (147.75.109.163:48104). 
Oct 31 01:44:21.496893 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 48104 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:44:21.498872 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:44:21.504483 systemd-logind[1485]: New session 11 of user core. Oct 31 01:44:21.524287 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 31 01:44:21.978617 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 01:44:21.979667 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 01:44:22.707307 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 01:44:22.710756 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 01:44:23.447983 dockerd[1790]: time="2025-10-31T01:44:23.447788066Z" level=info msg="Starting up" Oct 31 01:44:23.711809 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2736521964-merged.mount: Deactivated successfully. Oct 31 01:44:23.738953 dockerd[1790]: time="2025-10-31T01:44:23.738668948Z" level=info msg="Loading containers: start." Oct 31 01:44:23.886036 kernel: Initializing XFRM netlink socket Oct 31 01:44:23.930754 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Oct 31 01:44:24.000144 systemd-networkd[1438]: docker0: Link UP Oct 31 01:44:24.026170 dockerd[1790]: time="2025-10-31T01:44:24.026117382Z" level=info msg="Loading containers: done." Oct 31 01:44:24.045907 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck490599407-merged.mount: Deactivated successfully. Oct 31 01:44:24.048164 dockerd[1790]: time="2025-10-31T01:44:24.048099163Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 01:44:24.048293 dockerd[1790]: time="2025-10-31T01:44:24.048268234Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Oct 31 01:44:24.048482 dockerd[1790]: time="2025-10-31T01:44:24.048445816Z" level=info msg="Daemon has completed initialization" Oct 31 01:44:24.102224 dockerd[1790]: time="2025-10-31T01:44:24.102137825Z" level=info msg="API listen on /run/docker.sock" Oct 31 01:44:24.102747 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 01:44:24.730598 systemd-timesyncd[1407]: Contacted time server [2a01:7e00::f03c:94ff:fe5e:ce98]:123 (2.flatcar.pool.ntp.org). Oct 31 01:44:24.730788 systemd-timesyncd[1407]: Initial clock synchronization to Fri 2025-10-31 01:44:24.730111 UTC. Oct 31 01:44:24.731725 systemd-resolved[1387]: Clock change detected. Flushing caches. Oct 31 01:44:25.735628 containerd[1499]: time="2025-10-31T01:44:25.730917464Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 31 01:44:26.431126 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 31 01:44:26.806192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468148971.mount: Deactivated successfully. 
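[editor's note] The dockerd warning above about degraded overlay2 diff performance is keyed off a kernel build option, CONFIG_OVERLAY_FS_REDIRECT_DIR. As a sketch, the option can be looked up wherever the kernel config happens to be exposed; depending on the image none of these files may exist, in which case the script just says so:

#!/usr/bin/env python3
"""Look up CONFIG_OVERLAY_FS_REDIRECT_DIR in the exposed kernel config, if any."""
import gzip
import os
from typing import Optional

OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"

def read_kernel_config() -> Optional[str]:
    release = os.uname().release
    candidates = (
        "/proc/config.gz",
        f"/boot/config-{release}",
        f"/lib/modules/{release}/build/.config",
    )
    for path in candidates:
        try:
            if path.endswith(".gz"):
                return gzip.open(path, "rt").read()
            return open(path).read()
        except OSError:
            continue
    return None

if __name__ == "__main__":
    config = read_kernel_config()
    if config is None:
        print("kernel config not exposed on this image")
    else:
        hits = [line for line in config.splitlines()
                if line.startswith(OPTION + "=") or line == f"# {OPTION} is not set"]
        print("\n".join(hits) or f"{OPTION} not mentioned")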
Oct 31 01:44:28.879993 containerd[1499]: time="2025-10-31T01:44:28.879923936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:28.881371 containerd[1499]: time="2025-10-31T01:44:28.881314367Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Oct 31 01:44:28.882303 containerd[1499]: time="2025-10-31T01:44:28.882265069Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:28.886346 containerd[1499]: time="2025-10-31T01:44:28.886307946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:28.888220 containerd[1499]: time="2025-10-31T01:44:28.888180155Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 3.157135509s" Oct 31 01:44:28.888308 containerd[1499]: time="2025-10-31T01:44:28.888248888Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 31 01:44:28.890108 containerd[1499]: time="2025-10-31T01:44:28.890069560Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 31 01:44:30.103480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 31 01:44:30.113906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:30.510521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:30.512276 (kubelet)[2005]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:44:30.626333 kubelet[2005]: E1031 01:44:30.626130 2005 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:44:30.629636 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:44:30.629899 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
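[editor's note] The pull above reports the kube-apiserver image at 27061991 bytes, fetched in 3.157135509s. A back-of-the-envelope throughput figure follows; how containerd accounts that size (compressed vs. unpacked) is not stated in the log, so treat it as rough:

#!/usr/bin/env python3
"""Effective pull rate for the kube-apiserver image, using the numbers logged above."""

size_bytes = 27_061_991
duration_s = 3.157135509

print(f"~{size_bytes / duration_s / 2**20:.1f} MiB/s effective pull rate")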
Oct 31 01:44:31.544392 containerd[1499]: time="2025-10-31T01:44:31.544266358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:31.546159 containerd[1499]: time="2025-10-31T01:44:31.545973604Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765" Oct 31 01:44:31.548614 containerd[1499]: time="2025-10-31T01:44:31.546821044Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:31.557325 containerd[1499]: time="2025-10-31T01:44:31.557273419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:31.559924 containerd[1499]: time="2025-10-31T01:44:31.559880507Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.66975889s" Oct 31 01:44:31.560018 containerd[1499]: time="2025-10-31T01:44:31.559970815Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 31 01:44:31.562294 containerd[1499]: time="2025-10-31T01:44:31.562237390Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 31 01:44:33.446937 containerd[1499]: time="2025-10-31T01:44:33.446877784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:33.450936 containerd[1499]: time="2025-10-31T01:44:33.450527894Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Oct 31 01:44:33.451988 containerd[1499]: time="2025-10-31T01:44:33.451954296Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:33.457603 containerd[1499]: time="2025-10-31T01:44:33.456039232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:33.457868 containerd[1499]: time="2025-10-31T01:44:33.457833714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.895258466s" Oct 31 01:44:33.457999 containerd[1499]: time="2025-10-31T01:44:33.457972070Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 31 01:44:33.458872 
containerd[1499]: time="2025-10-31T01:44:33.458830109Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 31 01:44:35.509822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount330074237.mount: Deactivated successfully. Oct 31 01:44:36.211440 containerd[1499]: time="2025-10-31T01:44:36.211367008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:36.212616 containerd[1499]: time="2025-10-31T01:44:36.211790451Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Oct 31 01:44:36.213200 containerd[1499]: time="2025-10-31T01:44:36.213156242Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:36.216334 containerd[1499]: time="2025-10-31T01:44:36.216221521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:36.217956 containerd[1499]: time="2025-10-31T01:44:36.217453623Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.758577589s" Oct 31 01:44:36.217956 containerd[1499]: time="2025-10-31T01:44:36.217498963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 31 01:44:36.218339 containerd[1499]: time="2025-10-31T01:44:36.218305930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 31 01:44:37.104194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount353407507.mount: Deactivated successfully. 
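Each completed pull in this stretch is logged with both the image size in bytes and the wall-clock duration (kube-apiserver: 27061991 bytes in 3.157135509s; kube-proxy: 25963718 bytes in 2.758577589s), which is enough to estimate the effective transfer rate. A minimal sketch, written against the exact "Pulled image ... size \"N\" in D" format these entries use, with the journal's backslash-escaped quotes kept as-is; the sample is abbreviated from the kube-apiserver entry above.

    # Sketch: estimate pull throughput from the containerd "Pulled image" entries
    # above; the size/duration fields are matched exactly as the journal shows
    # them, including the backslash-escaped quotes.
    import re

    PULLED = re.compile(r'size \\"(\d+)\\" in ([\d.]+)(ms|s)\b')

    def throughput_mb_per_s(entry):
        size, value, unit = PULLED.search(entry).groups()
        seconds = float(value) / 1000.0 if unit == "ms" else float(value)
        return int(size) / seconds / 1e6

    sample = ('Pulled image \\"registry.k8s.io/kube-apiserver:v1.34.1\\" ... '
              'size \\"27061991\\" in 3.157135509s')
    print(f"{throughput_mb_per_s(sample):.1f} MB/s")   # ~8.6 MB/s for the entry above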
Oct 31 01:44:38.929889 containerd[1499]: time="2025-10-31T01:44:38.929760520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:38.932426 containerd[1499]: time="2025-10-31T01:44:38.932085058Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Oct 31 01:44:38.933224 containerd[1499]: time="2025-10-31T01:44:38.933152872Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:38.937897 containerd[1499]: time="2025-10-31T01:44:38.937836808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:38.939827 containerd[1499]: time="2025-10-31T01:44:38.939780417Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.721432118s" Oct 31 01:44:38.939934 containerd[1499]: time="2025-10-31T01:44:38.939836343Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 31 01:44:38.941779 containerd[1499]: time="2025-10-31T01:44:38.941485815Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 31 01:44:39.698221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2870531892.mount: Deactivated successfully. 
Oct 31 01:44:39.705712 containerd[1499]: time="2025-10-31T01:44:39.704441553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:39.705712 containerd[1499]: time="2025-10-31T01:44:39.705661990Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Oct 31 01:44:39.706077 containerd[1499]: time="2025-10-31T01:44:39.706042388Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:39.709154 containerd[1499]: time="2025-10-31T01:44:39.709114304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:39.710568 containerd[1499]: time="2025-10-31T01:44:39.710521772Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 768.951968ms" Oct 31 01:44:39.710795 containerd[1499]: time="2025-10-31T01:44:39.710753960Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 31 01:44:39.712347 containerd[1499]: time="2025-10-31T01:44:39.712302635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 31 01:44:40.654659 update_engine[1487]: I20251031 01:44:40.654457 1487 update_attempter.cc:509] Updating boot flags... Oct 31 01:44:40.666974 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Oct 31 01:44:40.676870 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:40.757650 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2127) Oct 31 01:44:40.987626 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2130) Oct 31 01:44:41.217818 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:41.220916 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 01:44:41.332168 kubelet[2139]: E1031 01:44:41.332093 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 01:44:41.334295 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 01:44:41.334608 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Oct 31 01:44:45.501708 containerd[1499]: time="2025-10-31T01:44:45.500605365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:45.503476 containerd[1499]: time="2025-10-31T01:44:45.503119934Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Oct 31 01:44:45.505595 containerd[1499]: time="2025-10-31T01:44:45.504971017Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:45.512352 containerd[1499]: time="2025-10-31T01:44:45.512292546Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 5.799666988s" Oct 31 01:44:45.512635 containerd[1499]: time="2025-10-31T01:44:45.512520202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:44:45.513440 containerd[1499]: time="2025-10-31T01:44:45.513404191Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 31 01:44:50.648279 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:50.655960 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:50.700251 systemd[1]: Reloading requested from client PID 2179 ('systemctl') (unit session-11.scope)... Oct 31 01:44:50.700604 systemd[1]: Reloading... Oct 31 01:44:50.822611 zram_generator::config[2214]: No configuration found. Oct 31 01:44:51.041429 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:44:51.149005 systemd[1]: Reloading finished in 447 ms. Oct 31 01:44:51.212702 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 01:44:51.212853 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 01:44:51.213243 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:51.223142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:51.447265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:51.458066 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 01:44:51.528463 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:44:51.528463 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
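The restarted kubelet above warns that --pod-infra-container-image and --volume-plugin-dir are deprecated (the latter should move into the file referenced by --config). A small sketch for auditing a kubelet command line against exactly those two flags; the sample invocation below is hypothetical, since this log never prints the real one, with the plugin directory borrowed from the Flexvolume message that follows.

    # Sketch: list kubelet arguments that the warnings above call deprecated.
    # The flag names come from the logged warnings; the sample command line is
    # hypothetical, as the actual invocation is not printed in this log.
    DEPRECATED = {"--pod-infra-container-image", "--volume-plugin-dir"}

    def deprecated_flags(cmdline):
        """Return the arguments whose flag name appears in DEPRECATED."""
        return [arg for arg in cmdline if arg.split("=", 1)[0] in DEPRECATED]

    sample = [
        "kubelet",
        "--config=/var/lib/kubelet/config.yaml",
        "--volume-plugin-dir=/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        "--pod-infra-container-image=registry.k8s.io/pause:3.10.1",  # hypothetical value
    ]
    print(deprecated_flags(sample))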
Oct 31 01:44:51.529039 kubelet[2284]: I1031 01:44:51.528537 2284 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:44:52.268823 kubelet[2284]: I1031 01:44:52.268767 2284 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 01:44:52.268823 kubelet[2284]: I1031 01:44:52.268808 2284 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:44:52.269130 kubelet[2284]: I1031 01:44:52.268885 2284 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 01:44:52.269130 kubelet[2284]: I1031 01:44:52.268915 2284 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 01:44:52.269326 kubelet[2284]: I1031 01:44:52.269295 2284 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 01:44:52.290416 kubelet[2284]: I1031 01:44:52.288685 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:44:52.293491 kubelet[2284]: E1031 01:44:52.292860 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.44.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 01:44:52.302889 kubelet[2284]: E1031 01:44:52.302810 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:44:52.302987 kubelet[2284]: I1031 01:44:52.302928 2284 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 31 01:44:52.318617 kubelet[2284]: I1031 01:44:52.318226 2284 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 01:44:52.319655 kubelet[2284]: I1031 01:44:52.319614 2284 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:44:52.321303 kubelet[2284]: I1031 01:44:52.319733 2284 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n5tpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 01:44:52.321703 kubelet[2284]: I1031 01:44:52.321681 2284 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:44:52.322313 kubelet[2284]: I1031 01:44:52.321814 2284 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 01:44:52.322313 kubelet[2284]: I1031 01:44:52.322013 2284 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 01:44:52.324599 kubelet[2284]: I1031 01:44:52.324428 2284 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:44:52.326727 kubelet[2284]: I1031 01:44:52.326189 2284 kubelet.go:475] "Attempting to sync node with API server" Oct 31 01:44:52.326727 kubelet[2284]: I1031 01:44:52.326244 2284 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:44:52.326727 kubelet[2284]: I1031 01:44:52.326314 2284 kubelet.go:387] "Adding apiserver pod source" Oct 31 01:44:52.326727 kubelet[2284]: I1031 01:44:52.326372 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:44:52.329744 kubelet[2284]: E1031 01:44:52.329708 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.44.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n5tpq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 01:44:52.330413 kubelet[2284]: E1031 01:44:52.330383 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.230.44.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 01:44:52.331230 kubelet[2284]: I1031 01:44:52.331068 2284 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 01:44:52.333082 kubelet[2284]: I1031 01:44:52.333056 2284 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 01:44:52.333345 kubelet[2284]: I1031 01:44:52.333179 2284 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 01:44:52.337632 kubelet[2284]: W1031 01:44:52.336687 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 01:44:52.343603 kubelet[2284]: I1031 01:44:52.343557 2284 server.go:1262] "Started kubelet" Oct 31 01:44:52.344030 kubelet[2284]: I1031 01:44:52.343989 2284 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:44:52.348085 kubelet[2284]: I1031 01:44:52.348061 2284 server.go:310] "Adding debug handlers to kubelet server" Oct 31 01:44:52.349563 kubelet[2284]: I1031 01:44:52.349524 2284 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:44:52.349656 kubelet[2284]: I1031 01:44:52.349602 2284 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 01:44:52.350106 kubelet[2284]: I1031 01:44:52.350079 2284 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:44:52.352235 kubelet[2284]: E1031 01:44:52.351022 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.44.66:6443/api/v1/namespaces/default/events\": dial tcp 10.230.44.66:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-n5tpq.gb1.brightbox.com.1873700fc9be2607 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-n5tpq.gb1.brightbox.com,UID:srv-n5tpq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-n5tpq.gb1.brightbox.com,},FirstTimestamp:2025-10-31 01:44:52.343498247 +0000 UTC m=+0.879645424,LastTimestamp:2025-10-31 01:44:52.343498247 +0000 UTC m=+0.879645424,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-n5tpq.gb1.brightbox.com,}" Oct 31 01:44:52.354602 kubelet[2284]: I1031 01:44:52.353888 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:44:52.354602 kubelet[2284]: I1031 01:44:52.354248 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:44:52.366673 kubelet[2284]: E1031 01:44:52.366253 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" Oct 31 01:44:52.366673 kubelet[2284]: I1031 01:44:52.366334 2284 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 
31 01:44:52.367537 kubelet[2284]: I1031 01:44:52.367083 2284 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 01:44:52.367537 kubelet[2284]: I1031 01:44:52.367200 2284 reconciler.go:29] "Reconciler: start to sync state" Oct 31 01:44:52.370983 kubelet[2284]: E1031 01:44:52.370947 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.44.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 01:44:52.372463 kubelet[2284]: I1031 01:44:52.372420 2284 factory.go:223] Registration of the systemd container factory successfully Oct 31 01:44:52.372770 kubelet[2284]: I1031 01:44:52.372743 2284 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:44:52.375618 kubelet[2284]: E1031 01:44:52.374834 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n5tpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.66:6443: connect: connection refused" interval="200ms" Oct 31 01:44:52.375618 kubelet[2284]: E1031 01:44:52.375169 2284 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:44:52.375618 kubelet[2284]: I1031 01:44:52.375615 2284 factory.go:223] Registration of the containerd container factory successfully Oct 31 01:44:52.396613 kubelet[2284]: I1031 01:44:52.396521 2284 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 01:44:52.398200 kubelet[2284]: I1031 01:44:52.398158 2284 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 01:44:52.398303 kubelet[2284]: I1031 01:44:52.398205 2284 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 01:44:52.398303 kubelet[2284]: I1031 01:44:52.398269 2284 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 01:44:52.398407 kubelet[2284]: E1031 01:44:52.398340 2284 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:44:52.409764 kubelet[2284]: E1031 01:44:52.409725 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.44.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 01:44:52.423090 kubelet[2284]: I1031 01:44:52.422705 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:44:52.423090 kubelet[2284]: I1031 01:44:52.422740 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:44:52.423090 kubelet[2284]: I1031 01:44:52.422771 2284 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:44:52.433306 kubelet[2284]: I1031 01:44:52.433275 2284 policy_none.go:49] "None policy: Start" Oct 31 01:44:52.433529 kubelet[2284]: I1031 01:44:52.433503 2284 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 01:44:52.433826 kubelet[2284]: I1031 01:44:52.433704 2284 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 01:44:52.435243 kubelet[2284]: I1031 01:44:52.435224 2284 policy_none.go:47] "Start" Oct 31 01:44:52.442347 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 01:44:52.466606 kubelet[2284]: E1031 01:44:52.466517 2284 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" Oct 31 01:44:52.467083 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 01:44:52.471440 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 31 01:44:52.482390 kubelet[2284]: E1031 01:44:52.482046 2284 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 01:44:52.482390 kubelet[2284]: I1031 01:44:52.482377 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:44:52.483013 kubelet[2284]: I1031 01:44:52.482402 2284 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:44:52.483013 kubelet[2284]: I1031 01:44:52.482834 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:44:52.485363 kubelet[2284]: E1031 01:44:52.484853 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 01:44:52.485625 kubelet[2284]: E1031 01:44:52.485591 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-n5tpq.gb1.brightbox.com\" not found" Oct 31 01:44:52.521773 systemd[1]: Created slice kubepods-burstable-podbc8044976732589488b269ca52b70897.slice - libcontainer container kubepods-burstable-podbc8044976732589488b269ca52b70897.slice. 
Oct 31 01:44:52.537115 kubelet[2284]: E1031 01:44:52.537042 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.542526 systemd[1]: Created slice kubepods-burstable-pod9bf5f65bdf0b1c939c98b7c5240b448a.slice - libcontainer container kubepods-burstable-pod9bf5f65bdf0b1c939c98b7c5240b448a.slice. Oct 31 01:44:52.545271 kubelet[2284]: E1031 01:44:52.545247 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.548454 systemd[1]: Created slice kubepods-burstable-pode66832db63e893b00338c2bc0ae01429.slice - libcontainer container kubepods-burstable-pode66832db63e893b00338c2bc0ae01429.slice. Oct 31 01:44:52.551293 kubelet[2284]: E1031 01:44:52.551260 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.568691 kubelet[2284]: I1031 01:44:52.568642 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-ca-certs\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569262 kubelet[2284]: I1031 01:44:52.569041 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-flexvolume-dir\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569262 kubelet[2284]: I1031 01:44:52.569112 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-k8s-certs\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569262 kubelet[2284]: I1031 01:44:52.569145 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e66832db63e893b00338c2bc0ae01429-kubeconfig\") pod \"kube-scheduler-srv-n5tpq.gb1.brightbox.com\" (UID: \"e66832db63e893b00338c2bc0ae01429\") " pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569262 kubelet[2284]: I1031 01:44:52.569204 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-ca-certs\") pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569704 kubelet[2284]: I1031 01:44:52.569348 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-k8s-certs\") 
pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569704 kubelet[2284]: I1031 01:44:52.569398 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569704 kubelet[2284]: I1031 01:44:52.569448 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-kubeconfig\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.569704 kubelet[2284]: I1031 01:44:52.569476 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.576215 kubelet[2284]: E1031 01:44:52.576150 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n5tpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.66:6443: connect: connection refused" interval="400ms" Oct 31 01:44:52.586650 kubelet[2284]: I1031 01:44:52.586210 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.586650 kubelet[2284]: E1031 01:44:52.586611 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.66:6443/api/v1/nodes\": dial tcp 10.230.44.66:6443: connect: connection refused" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.790123 kubelet[2284]: I1031 01:44:52.789986 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.790668 kubelet[2284]: E1031 01:44:52.790418 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.66:6443/api/v1/nodes\": dial tcp 10.230.44.66:6443: connect: connection refused" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:52.842116 containerd[1499]: time="2025-10-31T01:44:52.842033895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n5tpq.gb1.brightbox.com,Uid:bc8044976732589488b269ca52b70897,Namespace:kube-system,Attempt:0,}" Oct 31 01:44:52.852553 containerd[1499]: time="2025-10-31T01:44:52.852486890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-n5tpq.gb1.brightbox.com,Uid:9bf5f65bdf0b1c939c98b7c5240b448a,Namespace:kube-system,Attempt:0,}" Oct 31 01:44:52.854818 containerd[1499]: time="2025-10-31T01:44:52.854774578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n5tpq.gb1.brightbox.com,Uid:e66832db63e893b00338c2bc0ae01429,Namespace:kube-system,Attempt:0,}" Oct 31 01:44:52.977805 kubelet[2284]: E1031 
01:44:52.977721 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n5tpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.66:6443: connect: connection refused" interval="800ms" Oct 31 01:44:53.194844 kubelet[2284]: I1031 01:44:53.194364 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:53.195055 kubelet[2284]: E1031 01:44:53.194934 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.66:6443/api/v1/nodes\": dial tcp 10.230.44.66:6443: connect: connection refused" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:53.259355 kubelet[2284]: E1031 01:44:53.259255 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.44.66:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 01:44:53.577960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252900217.mount: Deactivated successfully. Oct 31 01:44:53.580216 kubelet[2284]: E1031 01:44:53.580153 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.44.66:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 01:44:53.588917 kubelet[2284]: E1031 01:44:53.588881 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.44.66:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-n5tpq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 01:44:53.590875 containerd[1499]: time="2025-10-31T01:44:53.590807605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:44:53.592171 containerd[1499]: time="2025-10-31T01:44:53.592117994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Oct 31 01:44:53.592742 containerd[1499]: time="2025-10-31T01:44:53.592696775Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:44:53.595123 containerd[1499]: time="2025-10-31T01:44:53.595021661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 01:44:53.596291 containerd[1499]: time="2025-10-31T01:44:53.596171082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 31 01:44:53.598152 containerd[1499]: time="2025-10-31T01:44:53.597455551Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:44:53.604383 containerd[1499]: time="2025-10-31T01:44:53.604350495Z" level=info 
msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:44:53.605805 containerd[1499]: time="2025-10-31T01:44:53.605760464Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 750.901901ms" Oct 31 01:44:53.609571 containerd[1499]: time="2025-10-31T01:44:53.609527057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 767.348591ms" Oct 31 01:44:53.616607 containerd[1499]: time="2025-10-31T01:44:53.614803753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 01:44:53.616607 containerd[1499]: time="2025-10-31T01:44:53.616329697Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 763.71755ms" Oct 31 01:44:53.949798 kubelet[2284]: E1031 01:44:53.887072 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.44.66:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-n5tpq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.44.66:6443: connect: connection refused" interval="1.6s" Oct 31 01:44:53.949798 kubelet[2284]: E1031 01:44:53.949453 2284 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.44.66:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 01:44:54.002480 kubelet[2284]: I1031 01:44:54.002428 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:54.003524 kubelet[2284]: E1031 01:44:54.003472 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.44.66:6443/api/v1/nodes\": dial tcp 10.230.44.66:6443: connect: connection refused" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:54.012418 containerd[1499]: time="2025-10-31T01:44:54.012230454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:44:54.012418 containerd[1499]: time="2025-10-31T01:44:54.012318801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:44:54.012418 containerd[1499]: time="2025-10-31T01:44:54.012342658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.013335 containerd[1499]: time="2025-10-31T01:44:54.012514197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.015549 containerd[1499]: time="2025-10-31T01:44:54.015024240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:44:54.015549 containerd[1499]: time="2025-10-31T01:44:54.015268815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:44:54.015549 containerd[1499]: time="2025-10-31T01:44:54.015314156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.015549 containerd[1499]: time="2025-10-31T01:44:54.015444895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.024849 containerd[1499]: time="2025-10-31T01:44:54.024715696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:44:54.025041 containerd[1499]: time="2025-10-31T01:44:54.024802775Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:44:54.025041 containerd[1499]: time="2025-10-31T01:44:54.024828993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.025041 containerd[1499]: time="2025-10-31T01:44:54.024959309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:44:54.071793 systemd[1]: Started cri-containerd-40b18f44c7dba0836ff735df809fc83c424417212eb304e982e3ab8c8d453cdb.scope - libcontainer container 40b18f44c7dba0836ff735df809fc83c424417212eb304e982e3ab8c8d453cdb. Oct 31 01:44:54.078204 systemd[1]: Started cri-containerd-02841629bb0fdd6028286a8a209bfd4d23e0f0780ad0e346d236c9734a6aeba7.scope - libcontainer container 02841629bb0fdd6028286a8a209bfd4d23e0f0780ad0e346d236c9734a6aeba7. Oct 31 01:44:54.097789 systemd[1]: Started cri-containerd-01ea58aa96691d3aa08e11fb41582cfd3fcd6000e220d16694f83a402cddbb5f.scope - libcontainer container 01ea58aa96691d3aa08e11fb41582cfd3fcd6000e220d16694f83a402cddbb5f. 
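Everything the kubelet has tried against https://10.230.44.66:6443 so far has ended in "connection refused", because the control-plane static pods whose sandboxes were just started above are not serving yet. The lease controller's "Failed to ensure lease exists, will retry" entries make the retry policy visible: the logged interval doubles each time (200ms, 400ms, 800ms, 1.6s). A tiny sketch that pulls those intervals out of journal text and checks the doubling; the sample lines are abbreviated copies of the entries above, and any cap on the backoff is not observable from this log.

    # Sketch: extract the retry intervals from the "Failed to ensure lease exists,
    # will retry" entries above and confirm each one doubles the previous value.
    # The sample lines are abbreviated copies of the journal entries in this log.
    import re

    INTERVAL = re.compile(r'"Failed to ensure lease exists, will retry".*interval="([\d.]+(?:ms|s))"')

    def to_seconds(text):
        return float(text[:-2]) / 1000.0 if text.endswith("ms") else float(text[:-1])

    journal = """
    controller.go:145] "Failed to ensure lease exists, will retry" err="connection refused" interval="200ms"
    controller.go:145] "Failed to ensure lease exists, will retry" err="connection refused" interval="400ms"
    controller.go:145] "Failed to ensure lease exists, will retry" err="connection refused" interval="800ms"
    controller.go:145] "Failed to ensure lease exists, will retry" err="connection refused" interval="1.6s"
    """

    intervals = [to_seconds(v) for v in INTERVAL.findall(journal)]
    print(intervals)                                                  # [0.2, 0.4, 0.8, 1.6]
    print(all(b == 2 * a for a, b in zip(intervals, intervals[1:])))  # True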
Oct 31 01:44:54.223767 containerd[1499]: time="2025-10-31T01:44:54.223632871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-n5tpq.gb1.brightbox.com,Uid:9bf5f65bdf0b1c939c98b7c5240b448a,Namespace:kube-system,Attempt:0,} returns sandbox id \"40b18f44c7dba0836ff735df809fc83c424417212eb304e982e3ab8c8d453cdb\"" Oct 31 01:44:54.231143 containerd[1499]: time="2025-10-31T01:44:54.231080485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-n5tpq.gb1.brightbox.com,Uid:bc8044976732589488b269ca52b70897,Namespace:kube-system,Attempt:0,} returns sandbox id \"01ea58aa96691d3aa08e11fb41582cfd3fcd6000e220d16694f83a402cddbb5f\"" Oct 31 01:44:54.234675 containerd[1499]: time="2025-10-31T01:44:54.234612055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-n5tpq.gb1.brightbox.com,Uid:e66832db63e893b00338c2bc0ae01429,Namespace:kube-system,Attempt:0,} returns sandbox id \"02841629bb0fdd6028286a8a209bfd4d23e0f0780ad0e346d236c9734a6aeba7\"" Oct 31 01:44:54.243087 containerd[1499]: time="2025-10-31T01:44:54.243044404Z" level=info msg="CreateContainer within sandbox \"02841629bb0fdd6028286a8a209bfd4d23e0f0780ad0e346d236c9734a6aeba7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 01:44:54.243982 containerd[1499]: time="2025-10-31T01:44:54.243235023Z" level=info msg="CreateContainer within sandbox \"01ea58aa96691d3aa08e11fb41582cfd3fcd6000e220d16694f83a402cddbb5f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 01:44:54.244535 containerd[1499]: time="2025-10-31T01:44:54.244498234Z" level=info msg="CreateContainer within sandbox \"40b18f44c7dba0836ff735df809fc83c424417212eb304e982e3ab8c8d453cdb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 01:44:54.263754 containerd[1499]: time="2025-10-31T01:44:54.263700583Z" level=info msg="CreateContainer within sandbox \"40b18f44c7dba0836ff735df809fc83c424417212eb304e982e3ab8c8d453cdb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0fe7f4bbc3d7c88a6ba8f386de5e4b286d2eb5fd858cc70014f97afb52a343a\"" Oct 31 01:44:54.265080 containerd[1499]: time="2025-10-31T01:44:54.265038638Z" level=info msg="CreateContainer within sandbox \"02841629bb0fdd6028286a8a209bfd4d23e0f0780ad0e346d236c9734a6aeba7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2eb60904edaf237acff560df150e9a4fe79c821dced8c808404cb1726c6faccb\"" Oct 31 01:44:54.265784 containerd[1499]: time="2025-10-31T01:44:54.265753272Z" level=info msg="StartContainer for \"2eb60904edaf237acff560df150e9a4fe79c821dced8c808404cb1726c6faccb\"" Oct 31 01:44:54.266943 containerd[1499]: time="2025-10-31T01:44:54.266913773Z" level=info msg="StartContainer for \"f0fe7f4bbc3d7c88a6ba8f386de5e4b286d2eb5fd858cc70014f97afb52a343a\"" Oct 31 01:44:54.268926 containerd[1499]: time="2025-10-31T01:44:54.268885041Z" level=info msg="CreateContainer within sandbox \"01ea58aa96691d3aa08e11fb41582cfd3fcd6000e220d16694f83a402cddbb5f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"eaa781e239a5e47c4146a05898a0360b062b37d05cb4978303d280f0ac7481ea\"" Oct 31 01:44:54.273744 containerd[1499]: time="2025-10-31T01:44:54.273667708Z" level=info msg="StartContainer for \"eaa781e239a5e47c4146a05898a0360b062b37d05cb4978303d280f0ac7481ea\"" Oct 31 01:44:54.322798 systemd[1]: Started cri-containerd-2eb60904edaf237acff560df150e9a4fe79c821dced8c808404cb1726c6faccb.scope - libcontainer 
container 2eb60904edaf237acff560df150e9a4fe79c821dced8c808404cb1726c6faccb. Oct 31 01:44:54.343224 systemd[1]: Started cri-containerd-f0fe7f4bbc3d7c88a6ba8f386de5e4b286d2eb5fd858cc70014f97afb52a343a.scope - libcontainer container f0fe7f4bbc3d7c88a6ba8f386de5e4b286d2eb5fd858cc70014f97afb52a343a. Oct 31 01:44:54.362786 systemd[1]: Started cri-containerd-eaa781e239a5e47c4146a05898a0360b062b37d05cb4978303d280f0ac7481ea.scope - libcontainer container eaa781e239a5e47c4146a05898a0360b062b37d05cb4978303d280f0ac7481ea. Oct 31 01:44:54.514984 kubelet[2284]: E1031 01:44:54.490661 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.44.66:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.44.66:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 01:44:54.549993 containerd[1499]: time="2025-10-31T01:44:54.548726638Z" level=info msg="StartContainer for \"eaa781e239a5e47c4146a05898a0360b062b37d05cb4978303d280f0ac7481ea\" returns successfully" Oct 31 01:44:54.549993 containerd[1499]: time="2025-10-31T01:44:54.549192370Z" level=info msg="StartContainer for \"2eb60904edaf237acff560df150e9a4fe79c821dced8c808404cb1726c6faccb\" returns successfully" Oct 31 01:44:54.549993 containerd[1499]: time="2025-10-31T01:44:54.549485778Z" level=info msg="StartContainer for \"f0fe7f4bbc3d7c88a6ba8f386de5e4b286d2eb5fd858cc70014f97afb52a343a\" returns successfully" Oct 31 01:44:55.445247 kubelet[2284]: E1031 01:44:55.445190 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:55.448814 kubelet[2284]: E1031 01:44:55.445938 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:55.452022 kubelet[2284]: E1031 01:44:55.451810 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:55.608212 kubelet[2284]: I1031 01:44:55.606947 2284 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:56.456622 kubelet[2284]: E1031 01:44:56.456566 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:56.457631 kubelet[2284]: E1031 01:44:56.457136 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:56.457631 kubelet[2284]: E1031 01:44:56.457536 2284 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:56.980239 kubelet[2284]: E1031 01:44:56.980195 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-n5tpq.gb1.brightbox.com\" not found" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.056504 kubelet[2284]: I1031 01:44:57.056462 2284 kubelet_node_status.go:78] 
"Successfully registered node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.056775 kubelet[2284]: E1031 01:44:57.056518 2284 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"srv-n5tpq.gb1.brightbox.com\": node \"srv-n5tpq.gb1.brightbox.com\" not found" Oct 31 01:44:57.074864 kubelet[2284]: I1031 01:44:57.074825 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.135213 kubelet[2284]: E1031 01:44:57.135143 2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.135213 kubelet[2284]: I1031 01:44:57.135190 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.138612 kubelet[2284]: E1031 01:44:57.138468 2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.138612 kubelet[2284]: I1031 01:44:57.138527 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.140673 kubelet[2284]: E1031 01:44:57.140615 2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n5tpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.333303 kubelet[2284]: I1031 01:44:57.332801 2284 apiserver.go:52] "Watching apiserver" Oct 31 01:44:57.368296 kubelet[2284]: I1031 01:44:57.368213 2284 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 01:44:57.454609 kubelet[2284]: I1031 01:44:57.454382 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.454609 kubelet[2284]: I1031 01:44:57.454477 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.457217 kubelet[2284]: E1031 01:44:57.456859 2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:57.458678 kubelet[2284]: E1031 01:44:57.458292 2284 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n5tpq.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:58.456567 kubelet[2284]: I1031 01:44:58.456261 2284 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:44:58.472181 kubelet[2284]: I1031 01:44:58.472148 2284 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 31 01:44:59.453696 systemd[1]: Reloading requested from client PID 2570 ('systemctl') (unit 
session-11.scope)... Oct 31 01:44:59.453719 systemd[1]: Reloading... Oct 31 01:44:59.570759 zram_generator::config[2615]: No configuration found. Oct 31 01:44:59.749616 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 31 01:44:59.875988 systemd[1]: Reloading finished in 420 ms. Oct 31 01:44:59.938962 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:44:59.950118 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 01:44:59.950534 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:44:59.950630 systemd[1]: kubelet.service: Consumed 1.438s CPU time, 122.0M memory peak, 0B memory swap peak. Oct 31 01:44:59.957987 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 01:45:00.272087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 01:45:00.291131 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 01:45:00.382568 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 01:45:00.383383 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 31 01:45:00.384887 kubelet[2672]: I1031 01:45:00.384013 2672 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 01:45:00.400008 kubelet[2672]: I1031 01:45:00.399952 2672 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 01:45:00.400639 kubelet[2672]: I1031 01:45:00.400142 2672 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 01:45:00.402339 kubelet[2672]: I1031 01:45:00.400749 2672 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 01:45:00.402339 kubelet[2672]: I1031 01:45:00.400776 2672 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 01:45:00.402339 kubelet[2672]: I1031 01:45:00.401168 2672 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 01:45:00.406250 kubelet[2672]: I1031 01:45:00.406220 2672 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 01:45:00.423800 kubelet[2672]: I1031 01:45:00.423758 2672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 01:45:00.437616 kubelet[2672]: E1031 01:45:00.437064 2672 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 31 01:45:00.437845 kubelet[2672]: I1031 01:45:00.437673 2672 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Oct 31 01:45:00.448623 kubelet[2672]: I1031 01:45:00.448199 2672 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 01:45:00.450277 kubelet[2672]: I1031 01:45:00.449740 2672 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 01:45:00.450277 kubelet[2672]: I1031 01:45:00.449797 2672 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-n5tpq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 01:45:00.450277 kubelet[2672]: I1031 01:45:00.450027 2672 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 01:45:00.450277 kubelet[2672]: I1031 01:45:00.450041 2672 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 01:45:00.450680 kubelet[2672]: I1031 01:45:00.450073 2672 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 01:45:00.455393 kubelet[2672]: I1031 01:45:00.454732 2672 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:45:00.456513 kubelet[2672]: I1031 01:45:00.455909 2672 kubelet.go:475] "Attempting to sync node with API server" Oct 31 01:45:00.456513 kubelet[2672]: I1031 01:45:00.455948 2672 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 01:45:00.459312 kubelet[2672]: I1031 01:45:00.458309 2672 kubelet.go:387] "Adding apiserver pod source" Oct 31 01:45:00.459312 kubelet[2672]: I1031 01:45:00.458359 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 01:45:00.473151 kubelet[2672]: I1031 01:45:00.473111 2672 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Oct 31 01:45:00.474087 kubelet[2672]: I1031 01:45:00.474063 2672 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 01:45:00.475354 kubelet[2672]: I1031 01:45:00.474205 2672 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 
01:45:00.486276 kubelet[2672]: I1031 01:45:00.486246 2672 server.go:1262] "Started kubelet" Oct 31 01:45:00.491233 kubelet[2672]: I1031 01:45:00.490450 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 01:45:00.512327 kubelet[2672]: I1031 01:45:00.511128 2672 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 01:45:00.528643 kubelet[2672]: I1031 01:45:00.527806 2672 server.go:310] "Adding debug handlers to kubelet server" Oct 31 01:45:00.530393 kubelet[2672]: I1031 01:45:00.517302 2672 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 01:45:00.534643 kubelet[2672]: I1031 01:45:00.534007 2672 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 01:45:00.534643 kubelet[2672]: I1031 01:45:00.534565 2672 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 01:45:00.537120 kubelet[2672]: I1031 01:45:00.535854 2672 reconciler.go:29] "Reconciler: start to sync state" Oct 31 01:45:00.542388 kubelet[2672]: I1031 01:45:00.511305 2672 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 01:45:00.542694 kubelet[2672]: I1031 01:45:00.542648 2672 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 01:45:00.544629 kubelet[2672]: I1031 01:45:00.544567 2672 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 01:45:00.545412 kubelet[2672]: I1031 01:45:00.545346 2672 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 01:45:00.553645 kubelet[2672]: I1031 01:45:00.553592 2672 factory.go:223] Registration of the containerd container factory successfully Oct 31 01:45:00.553645 kubelet[2672]: I1031 01:45:00.553621 2672 factory.go:223] Registration of the systemd container factory successfully Oct 31 01:45:00.555600 kubelet[2672]: E1031 01:45:00.555198 2672 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 01:45:00.571990 kubelet[2672]: I1031 01:45:00.571906 2672 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 01:45:00.575688 kubelet[2672]: I1031 01:45:00.575660 2672 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 01:45:00.575813 kubelet[2672]: I1031 01:45:00.575716 2672 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 01:45:00.577596 kubelet[2672]: I1031 01:45:00.576293 2672 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 01:45:00.577596 kubelet[2672]: E1031 01:45:00.576373 2672 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 01:45:00.660351 kubelet[2672]: I1031 01:45:00.660294 2672 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 01:45:00.660351 kubelet[2672]: I1031 01:45:00.660336 2672 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 01:45:00.660351 kubelet[2672]: I1031 01:45:00.660365 2672 state_mem.go:36] "Initialized new in-memory state store" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.660911 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.660936 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.661003 2672 policy_none.go:49] "None policy: Start" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.661016 2672 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.661034 2672 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.661174 2672 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 31 01:45:00.662420 kubelet[2672]: I1031 01:45:00.661192 2672 policy_none.go:47] "Start" Oct 31 01:45:00.674447 kubelet[2672]: E1031 01:45:00.673632 2672 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 01:45:00.674447 kubelet[2672]: I1031 01:45:00.673909 2672 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 01:45:00.674447 kubelet[2672]: I1031 01:45:00.673927 2672 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 01:45:00.674447 kubelet[2672]: I1031 01:45:00.674377 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 01:45:00.676374 kubelet[2672]: E1031 01:45:00.676345 2672 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 31 01:45:00.679293 kubelet[2672]: I1031 01:45:00.678598 2672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.691608 kubelet[2672]: I1031 01:45:00.687471 2672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.693299 kubelet[2672]: I1031 01:45:00.693262 2672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.714376 kubelet[2672]: I1031 01:45:00.714323 2672 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 31 01:45:00.714796 kubelet[2672]: I1031 01:45:00.714770 2672 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 31 01:45:00.716547 kubelet[2672]: I1031 01:45:00.716286 2672 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 31 01:45:00.716547 kubelet[2672]: E1031 01:45:00.716354 2672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-n5tpq.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.812249 kubelet[2672]: I1031 01:45:00.811378 2672 kubelet_node_status.go:75] "Attempting to register node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.825655 kubelet[2672]: I1031 01:45:00.823664 2672 kubelet_node_status.go:124] "Node was previously registered" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.825655 kubelet[2672]: I1031 01:45:00.823786 2672 kubelet_node_status.go:78] "Successfully registered node" node="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841450 kubelet[2672]: I1031 01:45:00.840966 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-k8s-certs\") pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841450 kubelet[2672]: I1031 01:45:00.841046 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-ca-certs\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841450 kubelet[2672]: I1031 01:45:00.841097 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-flexvolume-dir\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841450 kubelet[2672]: I1031 01:45:00.841134 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-k8s-certs\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841450 kubelet[2672]: I1031 01:45:00.841165 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-kubeconfig\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841928 kubelet[2672]: I1031 01:45:00.841196 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9bf5f65bdf0b1c939c98b7c5240b448a-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-n5tpq.gb1.brightbox.com\" (UID: \"9bf5f65bdf0b1c939c98b7c5240b448a\") " pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841928 kubelet[2672]: I1031 01:45:00.841230 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-ca-certs\") pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841928 kubelet[2672]: I1031 01:45:00.841260 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bc8044976732589488b269ca52b70897-usr-share-ca-certificates\") pod \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" (UID: \"bc8044976732589488b269ca52b70897\") " pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:00.841928 kubelet[2672]: I1031 01:45:00.841293 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e66832db63e893b00338c2bc0ae01429-kubeconfig\") pod \"kube-scheduler-srv-n5tpq.gb1.brightbox.com\" (UID: \"e66832db63e893b00338c2bc0ae01429\") " pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:01.469117 kubelet[2672]: I1031 01:45:01.469029 2672 apiserver.go:52] "Watching apiserver" Oct 31 01:45:01.536346 kubelet[2672]: I1031 01:45:01.536299 2672 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 01:45:01.615142 kubelet[2672]: I1031 01:45:01.614931 2672 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:01.645695 kubelet[2672]: I1031 01:45:01.644719 2672 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Oct 31 01:45:01.645695 kubelet[2672]: E1031 01:45:01.644811 2672 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-n5tpq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:01.687610 kubelet[2672]: I1031 01:45:01.686809 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-srv-n5tpq.gb1.brightbox.com" podStartSLOduration=3.686771732 podStartE2EDuration="3.686771732s" podCreationTimestamp="2025-10-31 01:44:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:45:01.666633152 +0000 UTC m=+1.365400923" watchObservedRunningTime="2025-10-31 01:45:01.686771732 +0000 UTC m=+1.385539499" Oct 31 01:45:01.707253 kubelet[2672]: I1031 01:45:01.705778 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-n5tpq.gb1.brightbox.com" podStartSLOduration=1.705760332 podStartE2EDuration="1.705760332s" podCreationTimestamp="2025-10-31 01:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:45:01.704867521 +0000 UTC m=+1.403635294" watchObservedRunningTime="2025-10-31 01:45:01.705760332 +0000 UTC m=+1.404528099" Oct 31 01:45:01.707253 kubelet[2672]: I1031 01:45:01.705937 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-n5tpq.gb1.brightbox.com" podStartSLOduration=1.705930295 podStartE2EDuration="1.705930295s" podCreationTimestamp="2025-10-31 01:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:45:01.68702928 +0000 UTC m=+1.385797067" watchObservedRunningTime="2025-10-31 01:45:01.705930295 +0000 UTC m=+1.404698065" Oct 31 01:45:05.369593 kubelet[2672]: I1031 01:45:05.369426 2672 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 31 01:45:05.371133 containerd[1499]: time="2025-10-31T01:45:05.370895959Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 31 01:45:05.371945 kubelet[2672]: I1031 01:45:05.371236 2672 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 31 01:45:06.424366 systemd[1]: Created slice kubepods-besteffort-pod00e75741_4a21_4a3d_9427_31231b0ea8d4.slice - libcontainer container kubepods-besteffort-pod00e75741_4a21_4a3d_9427_31231b0ea8d4.slice. 
Oct 31 01:45:06.483271 kubelet[2672]: I1031 01:45:06.482977 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00e75741-4a21-4a3d-9427-31231b0ea8d4-kube-proxy\") pod \"kube-proxy-z9z6z\" (UID: \"00e75741-4a21-4a3d-9427-31231b0ea8d4\") " pod="kube-system/kube-proxy-z9z6z" Oct 31 01:45:06.483271 kubelet[2672]: I1031 01:45:06.483072 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00e75741-4a21-4a3d-9427-31231b0ea8d4-xtables-lock\") pod \"kube-proxy-z9z6z\" (UID: \"00e75741-4a21-4a3d-9427-31231b0ea8d4\") " pod="kube-system/kube-proxy-z9z6z" Oct 31 01:45:06.483271 kubelet[2672]: I1031 01:45:06.483106 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00e75741-4a21-4a3d-9427-31231b0ea8d4-lib-modules\") pod \"kube-proxy-z9z6z\" (UID: \"00e75741-4a21-4a3d-9427-31231b0ea8d4\") " pod="kube-system/kube-proxy-z9z6z" Oct 31 01:45:06.483271 kubelet[2672]: I1031 01:45:06.483136 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gptg7\" (UniqueName: \"kubernetes.io/projected/00e75741-4a21-4a3d-9427-31231b0ea8d4-kube-api-access-gptg7\") pod \"kube-proxy-z9z6z\" (UID: \"00e75741-4a21-4a3d-9427-31231b0ea8d4\") " pod="kube-system/kube-proxy-z9z6z" Oct 31 01:45:06.548999 kubelet[2672]: E1031 01:45:06.547882 2672 status_manager.go:1018] "Failed to get status for pod" err="pods \"tigera-operator-65cdcdfd6d-hlc8h\" is forbidden: User \"system:node:srv-n5tpq.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-n5tpq.gb1.brightbox.com' and this object" podUID="1f8dbb90-03a2-4d86-be8c-9baf8f27263c" pod="tigera-operator/tigera-operator-65cdcdfd6d-hlc8h" Oct 31 01:45:06.548999 kubelet[2672]: E1031 01:45:06.548064 2672 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:srv-n5tpq.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-n5tpq.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kubernetes-services-endpoint\"" type="*v1.ConfigMap" Oct 31 01:45:06.548999 kubelet[2672]: E1031 01:45:06.548194 2672 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-n5tpq.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'srv-n5tpq.gb1.brightbox.com' and this object" logger="UnhandledError" reflector="object-\"tigera-operator\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Oct 31 01:45:06.554273 systemd[1]: Created slice kubepods-besteffort-pod1f8dbb90_03a2_4d86_be8c_9baf8f27263c.slice - libcontainer container kubepods-besteffort-pod1f8dbb90_03a2_4d86_be8c_9baf8f27263c.slice. 
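The "no relationship found between node ... and this object" denials above come from the node authorizer: a kubelet may only read objects the authorizer can link to a pod already bound to its node, and the tigera-operator pod had not yet been bound when these list/watch calls went out, so the errors stop once the binding propagates (the sandbox for that pod is created successfully a couple of seconds later). As a hypothetical way to replay the same authorization question from an admin context, a client-go sketch using a SubjectAccessReview; the kubeconfig path is an assumption, while the user, namespace, resource and pod name are the ones appearing in the log entries:

package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed admin kubeconfig location; not taken from the log.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Ask the API server: may this node's kubelet identity get that pod?
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User:   "system:node:srv-n5tpq.gb1.brightbox.com",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "tigera-operator",
				Verb:      "get",
				Resource:  "pods",
				Name:      "tigera-operator-65cdcdfd6d-hlc8h",
			},
		},
	}
	res, err := client.AuthorizationV1().SubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", res.Status.Allowed, res.Status.Reason)
}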
Oct 31 01:45:06.584265 kubelet[2672]: I1031 01:45:06.583687 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nxxt\" (UniqueName: \"kubernetes.io/projected/1f8dbb90-03a2-4d86-be8c-9baf8f27263c-kube-api-access-5nxxt\") pod \"tigera-operator-65cdcdfd6d-hlc8h\" (UID: \"1f8dbb90-03a2-4d86-be8c-9baf8f27263c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hlc8h" Oct 31 01:45:06.584265 kubelet[2672]: I1031 01:45:06.583829 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1f8dbb90-03a2-4d86-be8c-9baf8f27263c-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-hlc8h\" (UID: \"1f8dbb90-03a2-4d86-be8c-9baf8f27263c\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-hlc8h" Oct 31 01:45:06.743609 containerd[1499]: time="2025-10-31T01:45:06.742820342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9z6z,Uid:00e75741-4a21-4a3d-9427-31231b0ea8d4,Namespace:kube-system,Attempt:0,}" Oct 31 01:45:06.792490 containerd[1499]: time="2025-10-31T01:45:06.791690791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:06.792490 containerd[1499]: time="2025-10-31T01:45:06.791801671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:06.792490 containerd[1499]: time="2025-10-31T01:45:06.791888574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:06.792490 containerd[1499]: time="2025-10-31T01:45:06.792112713Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:06.837876 systemd[1]: Started cri-containerd-787b3afa2f9bdee553f94127bb1f41a10b0219d80e4003dec72fd2f57c828e98.scope - libcontainer container 787b3afa2f9bdee553f94127bb1f41a10b0219d80e4003dec72fd2f57c828e98. Oct 31 01:45:06.887362 containerd[1499]: time="2025-10-31T01:45:06.887079053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z9z6z,Uid:00e75741-4a21-4a3d-9427-31231b0ea8d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"787b3afa2f9bdee553f94127bb1f41a10b0219d80e4003dec72fd2f57c828e98\"" Oct 31 01:45:06.898002 containerd[1499]: time="2025-10-31T01:45:06.897943137Z" level=info msg="CreateContainer within sandbox \"787b3afa2f9bdee553f94127bb1f41a10b0219d80e4003dec72fd2f57c828e98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 31 01:45:06.921803 containerd[1499]: time="2025-10-31T01:45:06.921735128Z" level=info msg="CreateContainer within sandbox \"787b3afa2f9bdee553f94127bb1f41a10b0219d80e4003dec72fd2f57c828e98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f7653277ab0caa4322b14f58362a3f50db689444252f32042960bebd3499e44\"" Oct 31 01:45:06.923679 containerd[1499]: time="2025-10-31T01:45:06.923406094Z" level=info msg="StartContainer for \"6f7653277ab0caa4322b14f58362a3f50db689444252f32042960bebd3499e44\"" Oct 31 01:45:06.971920 systemd[1]: Started cri-containerd-6f7653277ab0caa4322b14f58362a3f50db689444252f32042960bebd3499e44.scope - libcontainer container 6f7653277ab0caa4322b14f58362a3f50db689444252f32042960bebd3499e44. 
Oct 31 01:45:07.031043 containerd[1499]: time="2025-10-31T01:45:07.030676009Z" level=info msg="StartContainer for \"6f7653277ab0caa4322b14f58362a3f50db689444252f32042960bebd3499e44\" returns successfully" Oct 31 01:45:07.603719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2344680963.mount: Deactivated successfully. Oct 31 01:45:07.699709 kubelet[2672]: E1031 01:45:07.699662 2672 projected.go:291] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Oct 31 01:45:07.700844 kubelet[2672]: E1031 01:45:07.699820 2672 projected.go:196] Error preparing data for projected volume kube-api-access-5nxxt for pod tigera-operator/tigera-operator-65cdcdfd6d-hlc8h: failed to sync configmap cache: timed out waiting for the condition Oct 31 01:45:07.700844 kubelet[2672]: E1031 01:45:07.699989 2672 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1f8dbb90-03a2-4d86-be8c-9baf8f27263c-kube-api-access-5nxxt podName:1f8dbb90-03a2-4d86-be8c-9baf8f27263c nodeName:}" failed. No retries permitted until 2025-10-31 01:45:08.199955169 +0000 UTC m=+7.898722936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5nxxt" (UniqueName: "kubernetes.io/projected/1f8dbb90-03a2-4d86-be8c-9baf8f27263c-kube-api-access-5nxxt") pod "tigera-operator-65cdcdfd6d-hlc8h" (UID: "1f8dbb90-03a2-4d86-be8c-9baf8f27263c") : failed to sync configmap cache: timed out waiting for the condition Oct 31 01:45:07.712223 kubelet[2672]: I1031 01:45:07.711989 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z9z6z" podStartSLOduration=1.7119669210000001 podStartE2EDuration="1.711966921s" podCreationTimestamp="2025-10-31 01:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:45:07.688888757 +0000 UTC m=+7.387656531" watchObservedRunningTime="2025-10-31 01:45:07.711966921 +0000 UTC m=+7.410734684" Oct 31 01:45:08.364658 containerd[1499]: time="2025-10-31T01:45:08.364560491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hlc8h,Uid:1f8dbb90-03a2-4d86-be8c-9baf8f27263c,Namespace:tigera-operator,Attempt:0,}" Oct 31 01:45:08.407096 containerd[1499]: time="2025-10-31T01:45:08.406758482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:08.407096 containerd[1499]: time="2025-10-31T01:45:08.406904707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:08.408162 containerd[1499]: time="2025-10-31T01:45:08.406928942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:08.408659 containerd[1499]: time="2025-10-31T01:45:08.408491682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:08.451797 systemd[1]: Started cri-containerd-c385629ce324c15a68d7d6fa3e6a8fee61093c0caafc003cded4b01bc2b52ddc.scope - libcontainer container c385629ce324c15a68d7d6fa3e6a8fee61093c0caafc003cded4b01bc2b52ddc. 
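The MountVolume.SetUp failure just above is a knock-on effect of those earlier watch denials (the kube-root-ca.crt configmap cache could not sync in time), and the operation is re-queued rather than failed permanently: the entry shows a durationBeforeRetry of 500ms. A rough sketch of a retry delay of that shape; only the initial 500 ms comes from the log, the doubling factor and the cap are assumptions added for illustration:

package main

import (
	"fmt"
	"time"
)

// Illustrative growing retry delay for a re-queued volume mount operation.
// The 500ms starting point is the durationBeforeRetry printed in the log;
// the factor of 2 and the ~2m cap are assumed, not taken from the log.
func main() {
	delay := 500 * time.Millisecond
	const factor = 2
	maxDelay := 2*time.Minute + 2*time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying MountVolume.SetUp\n", attempt, delay)
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}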
Oct 31 01:45:08.521073 containerd[1499]: time="2025-10-31T01:45:08.521023702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-hlc8h,Uid:1f8dbb90-03a2-4d86-be8c-9baf8f27263c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c385629ce324c15a68d7d6fa3e6a8fee61093c0caafc003cded4b01bc2b52ddc\"" Oct 31 01:45:08.527422 containerd[1499]: time="2025-10-31T01:45:08.526446257Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 31 01:45:11.517643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount27272.mount: Deactivated successfully. Oct 31 01:45:14.422631 containerd[1499]: time="2025-10-31T01:45:14.421703014Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:14.423232 containerd[1499]: time="2025-10-31T01:45:14.422955228Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 31 01:45:14.424334 containerd[1499]: time="2025-10-31T01:45:14.423873199Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:14.427611 containerd[1499]: time="2025-10-31T01:45:14.426902735Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:14.428613 containerd[1499]: time="2025-10-31T01:45:14.428137918Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 5.901631226s" Oct 31 01:45:14.428613 containerd[1499]: time="2025-10-31T01:45:14.428185710Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 31 01:45:14.438666 containerd[1499]: time="2025-10-31T01:45:14.438496875Z" level=info msg="CreateContainer within sandbox \"c385629ce324c15a68d7d6fa3e6a8fee61093c0caafc003cded4b01bc2b52ddc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 31 01:45:14.457989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4199306190.mount: Deactivated successfully. Oct 31 01:45:14.460431 containerd[1499]: time="2025-10-31T01:45:14.460272596Z" level=info msg="CreateContainer within sandbox \"c385629ce324c15a68d7d6fa3e6a8fee61093c0caafc003cded4b01bc2b52ddc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79\"" Oct 31 01:45:14.462823 containerd[1499]: time="2025-10-31T01:45:14.460920346Z" level=info msg="StartContainer for \"326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79\"" Oct 31 01:45:14.509764 systemd[1]: run-containerd-runc-k8s.io-326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79-runc.vD9feQ.mount: Deactivated successfully. Oct 31 01:45:14.525906 systemd[1]: Started cri-containerd-326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79.scope - libcontainer container 326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79. 
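For scale, the pull of quay.io/tigera/operator:v1.38.7 above reports 25,061,691 bytes read over 5.901631226 s, i.e. roughly 4.25 MB/s on average. A one-liner reproducing that arithmetic; whether "bytes read" is exactly the number of bytes transferred is not stated in the log, so treat this as an estimate:

package main

import "fmt"

// Average pull rate for quay.io/tigera/operator:v1.38.7, from the figures
// in the entries above.
func main() {
	const bytesRead = 25061691.0 // "bytes read" reported when the pull stopped
	const seconds = 5.901631226  // pull duration reported alongside the image
	fmt.Printf("~%.2f MB/s average\n", bytesRead/seconds/1e6)
}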
Oct 31 01:45:14.578081 containerd[1499]: time="2025-10-31T01:45:14.578025577Z" level=info msg="StartContainer for \"326b0deba072f3e6977f2430391c9a84961483382bffc4f2f827237fe310cd79\" returns successfully" Oct 31 01:45:14.689279 kubelet[2672]: I1031 01:45:14.686918 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-hlc8h" podStartSLOduration=2.78298346 podStartE2EDuration="8.686900731s" podCreationTimestamp="2025-10-31 01:45:06 +0000 UTC" firstStartedPulling="2025-10-31 01:45:08.525706849 +0000 UTC m=+8.224474611" lastFinishedPulling="2025-10-31 01:45:14.429624124 +0000 UTC m=+14.128391882" observedRunningTime="2025-10-31 01:45:14.686671136 +0000 UTC m=+14.385438922" watchObservedRunningTime="2025-10-31 01:45:14.686900731 +0000 UTC m=+14.385668500" Oct 31 01:45:22.059272 sudo[1774]: pam_unix(sudo:session): session closed for user root Oct 31 01:45:22.214636 sshd[1771]: pam_unix(sshd:session): session closed for user core Oct 31 01:45:22.222660 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Oct 31 01:45:22.223666 systemd[1]: sshd@8-10.230.44.66:22-147.75.109.163:48104.service: Deactivated successfully. Oct 31 01:45:22.229044 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 01:45:22.231009 systemd[1]: session-11.scope: Consumed 8.057s CPU time, 144.4M memory peak, 0B memory swap peak. Oct 31 01:45:22.237215 systemd-logind[1485]: Removed session 11. Oct 31 01:45:28.571185 systemd[1]: Created slice kubepods-besteffort-pod33283318_3804_4919_a1c6_1628757c4a7d.slice - libcontainer container kubepods-besteffort-pod33283318_3804_4919_a1c6_1628757c4a7d.slice. Oct 31 01:45:28.637584 kubelet[2672]: I1031 01:45:28.636982 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-925dc\" (UniqueName: \"kubernetes.io/projected/33283318-3804-4919-a1c6-1628757c4a7d-kube-api-access-925dc\") pod \"calico-typha-655b87cd78-skg5m\" (UID: \"33283318-3804-4919-a1c6-1628757c4a7d\") " pod="calico-system/calico-typha-655b87cd78-skg5m" Oct 31 01:45:28.637584 kubelet[2672]: I1031 01:45:28.637080 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33283318-3804-4919-a1c6-1628757c4a7d-tigera-ca-bundle\") pod \"calico-typha-655b87cd78-skg5m\" (UID: \"33283318-3804-4919-a1c6-1628757c4a7d\") " pod="calico-system/calico-typha-655b87cd78-skg5m" Oct 31 01:45:28.637584 kubelet[2672]: I1031 01:45:28.637112 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33283318-3804-4919-a1c6-1628757c4a7d-typha-certs\") pod \"calico-typha-655b87cd78-skg5m\" (UID: \"33283318-3804-4919-a1c6-1628757c4a7d\") " pod="calico-system/calico-typha-655b87cd78-skg5m" Oct 31 01:45:28.826376 systemd[1]: Created slice kubepods-besteffort-pod9c3137c1_1848_4b69_88f6_b0d663728689.slice - libcontainer container kubepods-besteffort-pod9c3137c1_1848_4b69_88f6_b0d663728689.slice. 
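The tigera-operator startup entry above is the first with a non-zero image-pull window, and its numbers extend the arithmetic sketched after the kube-scheduler entry earlier: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the pull window (lastFinishedPulling minus firstStartedPulling). A small Go check with the timestamps copied from the entry; which internal timestamp the tracker actually uses is inferred only from the fact that the numbers line up:

package main

import (
	"fmt"
	"time"
)

// Reproduces the tigera-operator-65cdcdfd6d-hlc8h startup figures above.
// All four timestamps are copied verbatim from the log entry.
func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-10-31 01:45:06 +0000 UTC")
	running := parse("2025-10-31 01:45:14.686900731 +0000 UTC")
	pullStart := parse("2025-10-31 01:45:08.525706849 +0000 UTC")
	pullEnd := parse("2025-10-31 01:45:14.429624124 +0000 UTC")

	e2e := running.Sub(created)
	slo := e2e - pullEnd.Sub(pullStart)
	fmt.Println(e2e) // 8.686900731s  (podStartE2EDuration)
	fmt.Println(slo) // 2.782983456s, i.e. the reported podStartSLOduration 2.78298346
}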
Oct 31 01:45:28.839615 kubelet[2672]: I1031 01:45:28.838660 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-xtables-lock\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840070 kubelet[2672]: I1031 01:45:28.839886 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-cni-net-dir\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840070 kubelet[2672]: I1031 01:45:28.840010 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-cni-log-dir\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840378 kubelet[2672]: I1031 01:45:28.840233 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-flexvol-driver-host\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840378 kubelet[2672]: I1031 01:45:28.840333 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-cni-bin-dir\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840822 kubelet[2672]: I1031 01:45:28.840651 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bms4t\" (UniqueName: \"kubernetes.io/projected/9c3137c1-1848-4b69-88f6-b0d663728689-kube-api-access-bms4t\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.840822 kubelet[2672]: I1031 01:45:28.840708 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c3137c1-1848-4b69-88f6-b0d663728689-tigera-ca-bundle\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.841156 kubelet[2672]: I1031 01:45:28.840986 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-var-run-calico\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.841156 kubelet[2672]: I1031 01:45:28.841079 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-var-lib-calico\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.841506 kubelet[2672]: I1031 01:45:28.841126 2672 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9c3137c1-1848-4b69-88f6-b0d663728689-node-certs\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.841506 kubelet[2672]: I1031 01:45:28.841386 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-policysync\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.841735 kubelet[2672]: I1031 01:45:28.841486 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9c3137c1-1848-4b69-88f6-b0d663728689-lib-modules\") pod \"calico-node-rlfcl\" (UID: \"9c3137c1-1848-4b69-88f6-b0d663728689\") " pod="calico-system/calico-node-rlfcl" Oct 31 01:45:28.881745 containerd[1499]: time="2025-10-31T01:45:28.881681838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655b87cd78-skg5m,Uid:33283318-3804-4919-a1c6-1628757c4a7d,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:28.960696 containerd[1499]: time="2025-10-31T01:45:28.954771594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:28.960696 containerd[1499]: time="2025-10-31T01:45:28.954882779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:28.960696 containerd[1499]: time="2025-10-31T01:45:28.954901768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:28.960696 containerd[1499]: time="2025-10-31T01:45:28.955107974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:28.967134 kubelet[2672]: E1031 01:45:28.967078 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:28.971621 kubelet[2672]: W1031 01:45:28.968746 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:28.971621 kubelet[2672]: E1031 01:45:28.968825 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:28.991214 kubelet[2672]: E1031 01:45:28.991170 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:28.992876 kubelet[2672]: W1031 01:45:28.992766 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:28.992876 kubelet[2672]: E1031 01:45:28.992816 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.065693 systemd[1]: Started cri-containerd-12050e78e4f67a3e42426b8edd2b8a90c5c542372a09575a9330b30092d66b68.scope - libcontainer container 12050e78e4f67a3e42426b8edd2b8a90c5c542372a09575a9330b30092d66b68. Oct 31 01:45:29.140915 containerd[1499]: time="2025-10-31T01:45:29.140842131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlfcl,Uid:9c3137c1-1848-4b69-88f6-b0d663728689,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:29.215292 containerd[1499]: time="2025-10-31T01:45:29.213106785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:29.215292 containerd[1499]: time="2025-10-31T01:45:29.213213029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:29.215292 containerd[1499]: time="2025-10-31T01:45:29.213233813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:29.215292 containerd[1499]: time="2025-10-31T01:45:29.213402496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:29.235328 kubelet[2672]: E1031 01:45:29.235169 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.237384 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.247072 kubelet[2672]: W1031 01:45:29.237413 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.237437 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.242657 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.247072 kubelet[2672]: W1031 01:45:29.242685 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.242709 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.243043 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.247072 kubelet[2672]: W1031 01:45:29.243057 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.243073 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.247072 kubelet[2672]: E1031 01:45:29.244896 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.247768 kubelet[2672]: W1031 01:45:29.244913 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.247768 kubelet[2672]: E1031 01:45:29.244931 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.247768 kubelet[2672]: E1031 01:45:29.246886 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.247768 kubelet[2672]: W1031 01:45:29.246915 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.247768 kubelet[2672]: E1031 01:45:29.246933 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.250070 kubelet[2672]: E1031 01:45:29.248677 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.250070 kubelet[2672]: W1031 01:45:29.248702 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.250070 kubelet[2672]: E1031 01:45:29.248724 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.252130 kubelet[2672]: E1031 01:45:29.250921 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.252130 kubelet[2672]: W1031 01:45:29.250949 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.252571 kubelet[2672]: E1031 01:45:29.252294 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.253022 kubelet[2672]: E1031 01:45:29.253001 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.253758 kubelet[2672]: W1031 01:45:29.253326 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.253758 kubelet[2672]: E1031 01:45:29.253366 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.256795 kubelet[2672]: E1031 01:45:29.255278 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.256795 kubelet[2672]: W1031 01:45:29.255312 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.256795 kubelet[2672]: E1031 01:45:29.255330 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.257989 kubelet[2672]: E1031 01:45:29.257832 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.257989 kubelet[2672]: W1031 01:45:29.257855 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.257989 kubelet[2672]: E1031 01:45:29.257873 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.258533 kubelet[2672]: E1031 01:45:29.258420 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.263986 kubelet[2672]: W1031 01:45:29.263634 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.263986 kubelet[2672]: E1031 01:45:29.263698 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.263986 kubelet[2672]: E1031 01:45:29.266641 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.263986 kubelet[2672]: W1031 01:45:29.266659 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.263986 kubelet[2672]: E1031 01:45:29.266676 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.270329 kubelet[2672]: E1031 01:45:29.269056 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.270329 kubelet[2672]: W1031 01:45:29.269073 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.270329 kubelet[2672]: E1031 01:45:29.269095 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.270329 kubelet[2672]: E1031 01:45:29.269801 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.270329 kubelet[2672]: W1031 01:45:29.269816 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.270329 kubelet[2672]: E1031 01:45:29.269834 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.280132 kubelet[2672]: E1031 01:45:29.271971 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.280132 kubelet[2672]: W1031 01:45:29.271986 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.280132 kubelet[2672]: E1031 01:45:29.272001 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.280132 kubelet[2672]: E1031 01:45:29.277823 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.280132 kubelet[2672]: W1031 01:45:29.277844 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.280132 kubelet[2672]: E1031 01:45:29.277880 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.283461 kubelet[2672]: E1031 01:45:29.280855 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.283461 kubelet[2672]: W1031 01:45:29.280872 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.283461 kubelet[2672]: E1031 01:45:29.283009 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.284129 kubelet[2672]: E1031 01:45:29.284101 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.284129 kubelet[2672]: W1031 01:45:29.284124 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.284231 kubelet[2672]: E1031 01:45:29.284142 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.287789 kubelet[2672]: E1031 01:45:29.287760 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.287891 kubelet[2672]: W1031 01:45:29.287795 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.287891 kubelet[2672]: E1031 01:45:29.287816 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.289965 kubelet[2672]: E1031 01:45:29.289003 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.289965 kubelet[2672]: W1031 01:45:29.289023 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.289965 kubelet[2672]: E1031 01:45:29.289040 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.297855 kubelet[2672]: E1031 01:45:29.297130 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.297855 kubelet[2672]: W1031 01:45:29.297163 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.297855 kubelet[2672]: E1031 01:45:29.297196 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.297855 kubelet[2672]: I1031 01:45:29.297255 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn5sd\" (UniqueName: \"kubernetes.io/projected/b91990ed-b519-4003-921b-695c5958edac-kube-api-access-cn5sd\") pod \"csi-node-driver-lvbwj\" (UID: \"b91990ed-b519-4003-921b-695c5958edac\") " pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:29.297855 kubelet[2672]: E1031 01:45:29.300046 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.297855 kubelet[2672]: W1031 01:45:29.300077 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.297855 kubelet[2672]: E1031 01:45:29.300095 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.297855 kubelet[2672]: I1031 01:45:29.301385 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b91990ed-b519-4003-921b-695c5958edac-socket-dir\") pod \"csi-node-driver-lvbwj\" (UID: \"b91990ed-b519-4003-921b-695c5958edac\") " pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.306060 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.317633 kubelet[2672]: W1031 01:45:29.306086 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.306109 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.312164 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.317633 kubelet[2672]: W1031 01:45:29.312187 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.312213 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.314054 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.317633 kubelet[2672]: W1031 01:45:29.314126 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.317633 kubelet[2672]: E1031 01:45:29.314877 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.325172 kubelet[2672]: I1031 01:45:29.315704 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b91990ed-b519-4003-921b-695c5958edac-kubelet-dir\") pod \"csi-node-driver-lvbwj\" (UID: \"b91990ed-b519-4003-921b-695c5958edac\") " pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:29.325172 kubelet[2672]: E1031 01:45:29.317787 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.325172 kubelet[2672]: W1031 01:45:29.317804 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.325172 kubelet[2672]: E1031 01:45:29.317822 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.325172 kubelet[2672]: E1031 01:45:29.322987 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.325172 kubelet[2672]: W1031 01:45:29.323008 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.325172 kubelet[2672]: E1031 01:45:29.323031 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.332609 kubelet[2672]: E1031 01:45:29.329134 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.332609 kubelet[2672]: W1031 01:45:29.329167 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.332609 kubelet[2672]: E1031 01:45:29.329198 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.334005 kubelet[2672]: I1031 01:45:29.333109 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b91990ed-b519-4003-921b-695c5958edac-registration-dir\") pod \"csi-node-driver-lvbwj\" (UID: \"b91990ed-b519-4003-921b-695c5958edac\") " pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:29.334005 kubelet[2672]: E1031 01:45:29.333210 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.334005 kubelet[2672]: W1031 01:45:29.333229 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.334005 kubelet[2672]: E1031 01:45:29.333250 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.340218 kubelet[2672]: E1031 01:45:29.336682 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.340218 kubelet[2672]: W1031 01:45:29.336700 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.340218 kubelet[2672]: E1031 01:45:29.336723 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.337122 systemd[1]: Started cri-containerd-fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b.scope - libcontainer container fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b. Oct 31 01:45:29.343302 kubelet[2672]: E1031 01:45:29.340571 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.343302 kubelet[2672]: W1031 01:45:29.340861 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.343302 kubelet[2672]: E1031 01:45:29.340883 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.343302 kubelet[2672]: I1031 01:45:29.341189 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b91990ed-b519-4003-921b-695c5958edac-varrun\") pod \"csi-node-driver-lvbwj\" (UID: \"b91990ed-b519-4003-921b-695c5958edac\") " pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:29.346846 kubelet[2672]: E1031 01:45:29.344647 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.346846 kubelet[2672]: W1031 01:45:29.344667 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.346846 kubelet[2672]: E1031 01:45:29.344712 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.346846 kubelet[2672]: E1031 01:45:29.345835 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.348148 kubelet[2672]: W1031 01:45:29.348024 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.348148 kubelet[2672]: E1031 01:45:29.348068 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.354655 kubelet[2672]: E1031 01:45:29.352639 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.354655 kubelet[2672]: W1031 01:45:29.352671 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.354655 kubelet[2672]: E1031 01:45:29.352696 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.354655 kubelet[2672]: E1031 01:45:29.354376 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.354655 kubelet[2672]: W1031 01:45:29.354493 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.354655 kubelet[2672]: E1031 01:45:29.354516 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.387007 containerd[1499]: time="2025-10-31T01:45:29.386809295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-655b87cd78-skg5m,Uid:33283318-3804-4919-a1c6-1628757c4a7d,Namespace:calico-system,Attempt:0,} returns sandbox id \"12050e78e4f67a3e42426b8edd2b8a90c5c542372a09575a9330b30092d66b68\"" Oct 31 01:45:29.392449 containerd[1499]: time="2025-10-31T01:45:29.392406030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 01:45:29.444660 kubelet[2672]: E1031 01:45:29.443765 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.444660 kubelet[2672]: W1031 01:45:29.443806 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.444660 kubelet[2672]: E1031 01:45:29.443840 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.445121 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.448401 kubelet[2672]: W1031 01:45:29.445141 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.445158 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.445720 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.448401 kubelet[2672]: W1031 01:45:29.445735 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.445754 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.446117 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.448401 kubelet[2672]: W1031 01:45:29.446132 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.446147 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.448401 kubelet[2672]: E1031 01:45:29.446992 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.449190 kubelet[2672]: W1031 01:45:29.447009 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.447026 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.447853 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.449190 kubelet[2672]: W1031 01:45:29.447867 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.447883 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.448187 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.449190 kubelet[2672]: W1031 01:45:29.448201 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.448256 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.449190 kubelet[2672]: E1031 01:45:29.448558 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.449190 kubelet[2672]: W1031 01:45:29.448572 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.448621 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.448891 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.452734 kubelet[2672]: W1031 01:45:29.448906 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.448925 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.449202 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.452734 kubelet[2672]: W1031 01:45:29.449216 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.449231 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.449905 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.452734 kubelet[2672]: W1031 01:45:29.449931 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.452734 kubelet[2672]: E1031 01:45:29.450105 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.450940 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.456454 kubelet[2672]: W1031 01:45:29.450954 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.451107 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.451417 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.456454 kubelet[2672]: W1031 01:45:29.451430 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.451463 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.452912 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.456454 kubelet[2672]: W1031 01:45:29.452927 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.452945 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.456454 kubelet[2672]: E1031 01:45:29.453480 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.457112 kubelet[2672]: W1031 01:45:29.453494 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.453664 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.454888 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.457112 kubelet[2672]: W1031 01:45:29.454902 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.454918 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.455305 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.457112 kubelet[2672]: W1031 01:45:29.455319 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.455344 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.457112 kubelet[2672]: E1031 01:45:29.456462 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.457112 kubelet[2672]: W1031 01:45:29.456476 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.459351 kubelet[2672]: E1031 01:45:29.458567 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.459351 kubelet[2672]: E1031 01:45:29.459202 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.459351 kubelet[2672]: W1031 01:45:29.459218 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.459351 kubelet[2672]: E1031 01:45:29.459234 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.459750 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.461194 kubelet[2672]: W1031 01:45:29.459771 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.459788 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.460106 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.461194 kubelet[2672]: W1031 01:45:29.460120 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.460135 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.460466 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.461194 kubelet[2672]: W1031 01:45:29.460480 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.460495 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:29.461194 kubelet[2672]: E1031 01:45:29.460840 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.462879 kubelet[2672]: W1031 01:45:29.460855 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.462879 kubelet[2672]: E1031 01:45:29.460869 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.462879 kubelet[2672]: E1031 01:45:29.462066 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.462879 kubelet[2672]: W1031 01:45:29.462092 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.462879 kubelet[2672]: E1031 01:45:29.462108 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.462879 kubelet[2672]: E1031 01:45:29.462518 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.462879 kubelet[2672]: W1031 01:45:29.462531 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.462879 kubelet[2672]: E1031 01:45:29.462626 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:29.470772 containerd[1499]: time="2025-10-31T01:45:29.470706185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlfcl,Uid:9c3137c1-1848-4b69-88f6-b0d663728689,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\"" Oct 31 01:45:29.487365 kubelet[2672]: E1031 01:45:29.487295 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:29.487365 kubelet[2672]: W1031 01:45:29.487339 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:29.487365 kubelet[2672]: E1031 01:45:29.487373 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:30.578275 kubelet[2672]: E1031 01:45:30.577468 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:31.338674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1560665101.mount: Deactivated successfully. Oct 31 01:45:32.578944 kubelet[2672]: E1031 01:45:32.578410 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:32.778660 containerd[1499]: time="2025-10-31T01:45:32.777812119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:32.780838 containerd[1499]: time="2025-10-31T01:45:32.780773219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 01:45:32.786605 containerd[1499]: time="2025-10-31T01:45:32.786496570Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:32.797761 containerd[1499]: time="2025-10-31T01:45:32.797670531Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:32.799441 containerd[1499]: time="2025-10-31T01:45:32.798891797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.4052206s" Oct 31 01:45:32.799441 containerd[1499]: time="2025-10-31T01:45:32.798938568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 01:45:32.802087 containerd[1499]: time="2025-10-31T01:45:32.802053674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 01:45:32.824909 containerd[1499]: time="2025-10-31T01:45:32.824829649Z" level=info msg="CreateContainer within sandbox \"12050e78e4f67a3e42426b8edd2b8a90c5c542372a09575a9330b30092d66b68\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 01:45:32.857019 containerd[1499]: time="2025-10-31T01:45:32.856961809Z" level=info msg="CreateContainer within sandbox \"12050e78e4f67a3e42426b8edd2b8a90c5c542372a09575a9330b30092d66b68\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92\"" Oct 31 01:45:32.859186 containerd[1499]: time="2025-10-31T01:45:32.859152298Z" level=info msg="StartContainer for \"9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92\"" Oct 31 01:45:32.923826 systemd[1]: Started 
cri-containerd-9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92.scope - libcontainer container 9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92. Oct 31 01:45:32.997225 containerd[1499]: time="2025-10-31T01:45:32.995078658Z" level=info msg="StartContainer for \"9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92\" returns successfully" Oct 31 01:45:33.733536 kubelet[2672]: I1031 01:45:33.733423 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-655b87cd78-skg5m" podStartSLOduration=2.324401167 podStartE2EDuration="5.733400096s" podCreationTimestamp="2025-10-31 01:45:28 +0000 UTC" firstStartedPulling="2025-10-31 01:45:29.391786828 +0000 UTC m=+29.090554584" lastFinishedPulling="2025-10-31 01:45:32.800785745 +0000 UTC m=+32.499553513" observedRunningTime="2025-10-31 01:45:33.732551829 +0000 UTC m=+33.431319611" watchObservedRunningTime="2025-10-31 01:45:33.733400096 +0000 UTC m=+33.432167864" Oct 31 01:45:33.810209 systemd[1]: run-containerd-runc-k8s.io-9565440fa990e2051df3b1ba587610b2e4ee25fc7ef2eb39c371bd07e3cfce92-runc.ZTaaYt.mount: Deactivated successfully. Oct 31 01:45:33.819373 kubelet[2672]: E1031 01:45:33.819111 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.819373 kubelet[2672]: W1031 01:45:33.819148 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.819373 kubelet[2672]: E1031 01:45:33.819200 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.820212 kubelet[2672]: E1031 01:45:33.820033 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.820212 kubelet[2672]: W1031 01:45:33.820052 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.820212 kubelet[2672]: E1031 01:45:33.820068 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.820735 kubelet[2672]: E1031 01:45:33.820496 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.820735 kubelet[2672]: W1031 01:45:33.820527 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.820735 kubelet[2672]: E1031 01:45:33.820544 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.820986 kubelet[2672]: E1031 01:45:33.820967 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.821081 kubelet[2672]: W1031 01:45:33.821061 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.821179 kubelet[2672]: E1031 01:45:33.821156 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.821747 kubelet[2672]: E1031 01:45:33.821571 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.821747 kubelet[2672]: W1031 01:45:33.821614 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.821747 kubelet[2672]: E1031 01:45:33.821629 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.821978 kubelet[2672]: E1031 01:45:33.821959 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.822073 kubelet[2672]: W1031 01:45:33.822053 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.822176 kubelet[2672]: E1031 01:45:33.822147 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.822765 kubelet[2672]: E1031 01:45:33.822554 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.822765 kubelet[2672]: W1031 01:45:33.822572 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.822765 kubelet[2672]: E1031 01:45:33.822634 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.823322 kubelet[2672]: E1031 01:45:33.823042 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.823322 kubelet[2672]: W1031 01:45:33.823056 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.823322 kubelet[2672]: E1031 01:45:33.823071 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.823733 kubelet[2672]: E1031 01:45:33.823547 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.823733 kubelet[2672]: W1031 01:45:33.823565 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.823733 kubelet[2672]: E1031 01:45:33.823605 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.823981 kubelet[2672]: E1031 01:45:33.823962 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.824214 kubelet[2672]: W1031 01:45:33.824069 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.824214 kubelet[2672]: E1031 01:45:33.824093 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.824411 kubelet[2672]: E1031 01:45:33.824392 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.824517 kubelet[2672]: W1031 01:45:33.824486 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.824648 kubelet[2672]: E1031 01:45:33.824628 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.825176 kubelet[2672]: E1031 01:45:33.825017 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.825176 kubelet[2672]: W1031 01:45:33.825035 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.825176 kubelet[2672]: E1031 01:45:33.825053 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.825434 kubelet[2672]: E1031 01:45:33.825416 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.825604 kubelet[2672]: W1031 01:45:33.825554 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.825837 kubelet[2672]: E1031 01:45:33.825692 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.826136 kubelet[2672]: E1031 01:45:33.825987 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.826136 kubelet[2672]: W1031 01:45:33.826005 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.826136 kubelet[2672]: E1031 01:45:33.826020 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.826438 kubelet[2672]: E1031 01:45:33.826420 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.826546 kubelet[2672]: W1031 01:45:33.826526 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.826797 kubelet[2672]: E1031 01:45:33.826720 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.891238 kubelet[2672]: E1031 01:45:33.891180 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.891238 kubelet[2672]: W1031 01:45:33.891230 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.891632 kubelet[2672]: E1031 01:45:33.891265 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.891785 kubelet[2672]: E1031 01:45:33.891747 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.891785 kubelet[2672]: W1031 01:45:33.891783 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.892186 kubelet[2672]: E1031 01:45:33.891801 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.892397 kubelet[2672]: E1031 01:45:33.892363 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.892503 kubelet[2672]: W1031 01:45:33.892476 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.892730 kubelet[2672]: E1031 01:45:33.892628 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.893329 kubelet[2672]: E1031 01:45:33.893153 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.893329 kubelet[2672]: W1031 01:45:33.893168 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.893329 kubelet[2672]: E1031 01:45:33.893184 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.894015 kubelet[2672]: E1031 01:45:33.893825 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.894015 kubelet[2672]: W1031 01:45:33.893844 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.894015 kubelet[2672]: E1031 01:45:33.893860 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.894768 kubelet[2672]: E1031 01:45:33.894564 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.894768 kubelet[2672]: W1031 01:45:33.894605 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.894768 kubelet[2672]: E1031 01:45:33.894624 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.895220 kubelet[2672]: E1031 01:45:33.894941 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.895220 kubelet[2672]: W1031 01:45:33.894955 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.895220 kubelet[2672]: E1031 01:45:33.894971 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.895517 kubelet[2672]: E1031 01:45:33.895476 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.895869 kubelet[2672]: W1031 01:45:33.895663 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.895869 kubelet[2672]: E1031 01:45:33.895717 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.896412 kubelet[2672]: E1031 01:45:33.896233 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.896412 kubelet[2672]: W1031 01:45:33.896253 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.896412 kubelet[2672]: E1031 01:45:33.896283 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.896874 kubelet[2672]: E1031 01:45:33.896814 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.896874 kubelet[2672]: W1031 01:45:33.896832 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.896874 kubelet[2672]: E1031 01:45:33.896851 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.897546 kubelet[2672]: E1031 01:45:33.897390 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.897546 kubelet[2672]: W1031 01:45:33.897407 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.897546 kubelet[2672]: E1031 01:45:33.897421 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.898458 kubelet[2672]: E1031 01:45:33.898041 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.898458 kubelet[2672]: W1031 01:45:33.898059 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.898458 kubelet[2672]: E1031 01:45:33.898075 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.899005 kubelet[2672]: E1031 01:45:33.898985 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.899104 kubelet[2672]: W1031 01:45:33.899084 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.899346 kubelet[2672]: E1031 01:45:33.899197 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:33.899543 kubelet[2672]: E1031 01:45:33.899522 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.899818 kubelet[2672]: W1031 01:45:33.899657 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.899818 kubelet[2672]: E1031 01:45:33.899682 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.900192 kubelet[2672]: E1031 01:45:33.900155 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.900408 kubelet[2672]: W1031 01:45:33.900271 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.900408 kubelet[2672]: E1031 01:45:33.900295 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.901013 kubelet[2672]: E1031 01:45:33.900841 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.901013 kubelet[2672]: W1031 01:45:33.900870 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.901013 kubelet[2672]: E1031 01:45:33.900887 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.901842 kubelet[2672]: E1031 01:45:33.901421 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.901842 kubelet[2672]: W1031 01:45:33.901438 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.901842 kubelet[2672]: E1031 01:45:33.901454 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:33.902143 kubelet[2672]: E1031 01:45:33.902080 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:33.902302 kubelet[2672]: W1031 01:45:33.902280 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:33.902404 kubelet[2672]: E1031 01:45:33.902384 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:34.543527 containerd[1499]: time="2025-10-31T01:45:34.542268711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:34.543527 containerd[1499]: time="2025-10-31T01:45:34.543440204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 01:45:34.544426 containerd[1499]: time="2025-10-31T01:45:34.544125749Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:34.546534 containerd[1499]: time="2025-10-31T01:45:34.546484194Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:34.547866 containerd[1499]: time="2025-10-31T01:45:34.547828574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.745581357s" Oct 31 01:45:34.548057 containerd[1499]: time="2025-10-31T01:45:34.548019602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 01:45:34.554011 containerd[1499]: time="2025-10-31T01:45:34.553974622Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 01:45:34.579280 kubelet[2672]: E1031 01:45:34.579015 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:34.581315 containerd[1499]: time="2025-10-31T01:45:34.580843232Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48\"" Oct 31 01:45:34.582300 containerd[1499]: time="2025-10-31T01:45:34.582236379Z" level=info msg="StartContainer for \"0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48\"" Oct 31 01:45:34.678837 systemd[1]: Started cri-containerd-0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48.scope - libcontainer container 0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48. 
Oct 31 01:45:34.723989 containerd[1499]: time="2025-10-31T01:45:34.723881217Z" level=info msg="StartContainer for \"0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48\" returns successfully" Oct 31 01:45:34.727401 kubelet[2672]: I1031 01:45:34.726252 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:45:34.735393 kubelet[2672]: E1031 01:45:34.735119 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.735393 kubelet[2672]: W1031 01:45:34.735169 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.735393 kubelet[2672]: E1031 01:45:34.735220 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.736878 kubelet[2672]: E1031 01:45:34.736169 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.736878 kubelet[2672]: W1031 01:45:34.736203 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.736878 kubelet[2672]: E1031 01:45:34.736221 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.737248 kubelet[2672]: E1031 01:45:34.737220 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.737313 kubelet[2672]: W1031 01:45:34.737249 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.737313 kubelet[2672]: E1031 01:45:34.737267 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.737637 kubelet[2672]: E1031 01:45:34.737613 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.737637 kubelet[2672]: W1031 01:45:34.737633 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.737767 kubelet[2672]: E1031 01:45:34.737649 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:34.737998 kubelet[2672]: E1031 01:45:34.737969 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.738062 kubelet[2672]: W1031 01:45:34.737992 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.738062 kubelet[2672]: E1031 01:45:34.738027 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.738369 kubelet[2672]: E1031 01:45:34.738329 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.738369 kubelet[2672]: W1031 01:45:34.738363 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.738513 kubelet[2672]: E1031 01:45:34.738379 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.738786 kubelet[2672]: E1031 01:45:34.738766 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.738853 kubelet[2672]: W1031 01:45:34.738804 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.738853 kubelet[2672]: E1031 01:45:34.738824 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.739153 kubelet[2672]: E1031 01:45:34.739131 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.739153 kubelet[2672]: W1031 01:45:34.739150 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.739269 kubelet[2672]: E1031 01:45:34.739165 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.739503 kubelet[2672]: E1031 01:45:34.739476 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.739612 kubelet[2672]: W1031 01:45:34.739503 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.739612 kubelet[2672]: E1031 01:45:34.739519 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:34.739888 kubelet[2672]: E1031 01:45:34.739859 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.739888 kubelet[2672]: W1031 01:45:34.739884 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.739995 kubelet[2672]: E1031 01:45:34.739903 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.740208 kubelet[2672]: E1031 01:45:34.740188 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.740208 kubelet[2672]: W1031 01:45:34.740206 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.740307 kubelet[2672]: E1031 01:45:34.740221 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.740549 kubelet[2672]: E1031 01:45:34.740520 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.740549 kubelet[2672]: W1031 01:45:34.740538 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.740688 kubelet[2672]: E1031 01:45:34.740552 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.740915 kubelet[2672]: E1031 01:45:34.740895 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.740915 kubelet[2672]: W1031 01:45:34.740913 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.741019 kubelet[2672]: E1031 01:45:34.740927 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.741277 kubelet[2672]: E1031 01:45:34.741257 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.741277 kubelet[2672]: W1031 01:45:34.741276 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.741381 kubelet[2672]: E1031 01:45:34.741291 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 01:45:34.741706 kubelet[2672]: E1031 01:45:34.741685 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 01:45:34.741774 kubelet[2672]: W1031 01:45:34.741704 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 01:45:34.741774 kubelet[2672]: E1031 01:45:34.741729 2672 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 01:45:34.759040 systemd[1]: cri-containerd-0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48.scope: Deactivated successfully. Oct 31 01:45:34.790963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48-rootfs.mount: Deactivated successfully. Oct 31 01:45:34.821667 containerd[1499]: time="2025-10-31T01:45:34.794175547Z" level=info msg="shim disconnected" id=0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48 namespace=k8s.io Oct 31 01:45:34.821667 containerd[1499]: time="2025-10-31T01:45:34.821555035Z" level=warning msg="cleaning up after shim disconnected" id=0add771d837d7e46aac4bbdbb0705dfe72fc10c03b6590c3e6d5eefbd764cf48 namespace=k8s.io Oct 31 01:45:34.821871 containerd[1499]: time="2025-10-31T01:45:34.821669394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 01:45:35.731933 containerd[1499]: time="2025-10-31T01:45:35.731644181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 01:45:36.579620 kubelet[2672]: E1031 01:45:36.579539 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:38.579662 kubelet[2672]: E1031 01:45:38.579072 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:39.299902 systemd[1]: Started sshd@9-10.230.44.66:22-45.140.17.124:62072.service - OpenSSH per-connection server daemon (45.140.17.124:62072). 
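The repeated "unexpected end of JSON input" entries above come from the kubelet's FlexVolume prober: it execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not present, so the call yields empty output and unmarshalling that output as JSON fails. A conforming FlexVolume driver is expected to print a small JSON status object on stdout for every call. The following is a minimal, hypothetical Go sketch of such a driver stub; it is not the actual nodeagent~uds driver, and the type and field names are illustrative only.

    // flexvol_stub.go: hypothetical FlexVolume driver stub.
    // The kubelet execs the binary with a command name ("init", "mount", ...)
    // and expects a JSON status object on stdout; an empty stdout is exactly
    // what produces the "unexpected end of JSON input" errors in the log.
    package main

    import (
        "encoding/json"
        "os"
    )

    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        cmd := ""
        if len(os.Args) > 1 {
            cmd = os.Args[1]
        }
        enc := json.NewEncoder(os.Stdout)
        switch cmd {
        case "init":
            // Report success and declare that attach/detach is not implemented.
            enc.Encode(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
        default:
            // Unimplemented calls must still return valid JSON.
            enc.Encode(driverStatus{Status: "Not supported", Message: "call not implemented: " + cmd})
        }
    }

Installed at the path shown in the log and marked executable, a stub along these lines would at least return valid JSON to the prober; a real driver would additionally have to implement the mount-related calls.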
Oct 31 01:45:40.578979 kubelet[2672]: E1031 01:45:40.578910 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:40.701762 containerd[1499]: time="2025-10-31T01:45:40.701693512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:40.722054 containerd[1499]: time="2025-10-31T01:45:40.721977299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 01:45:40.725195 containerd[1499]: time="2025-10-31T01:45:40.724379846Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:40.730961 containerd[1499]: time="2025-10-31T01:45:40.730920738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:40.732411 containerd[1499]: time="2025-10-31T01:45:40.732376256Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.000667972s" Oct 31 01:45:40.732957 containerd[1499]: time="2025-10-31T01:45:40.732540598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 01:45:40.738153 containerd[1499]: time="2025-10-31T01:45:40.738006621Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 01:45:40.770000 containerd[1499]: time="2025-10-31T01:45:40.769839773Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21\"" Oct 31 01:45:40.772438 containerd[1499]: time="2025-10-31T01:45:40.772394906Z" level=info msg="StartContainer for \"397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21\"" Oct 31 01:45:40.843101 systemd[1]: Started cri-containerd-397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21.scope - libcontainer container 397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21. Oct 31 01:45:40.899935 containerd[1499]: time="2025-10-31T01:45:40.899750435Z" level=info msg="StartContainer for \"397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21\" returns successfully" Oct 31 01:45:41.903063 systemd[1]: cri-containerd-397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21.scope: Deactivated successfully. Oct 31 01:45:41.941817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21-rootfs.mount: Deactivated successfully. 
Oct 31 01:45:41.948008 containerd[1499]: time="2025-10-31T01:45:41.947691051Z" level=info msg="shim disconnected" id=397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21 namespace=k8s.io Oct 31 01:45:41.948008 containerd[1499]: time="2025-10-31T01:45:41.947774538Z" level=warning msg="cleaning up after shim disconnected" id=397aa75b539a61890a70334b482b93a2c827b0a8c532e854219da3918d7a5e21 namespace=k8s.io Oct 31 01:45:41.948008 containerd[1499]: time="2025-10-31T01:45:41.947789336Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 31 01:45:42.013830 kubelet[2672]: I1031 01:45:42.013775 2672 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 01:45:42.076420 systemd[1]: Created slice kubepods-besteffort-pod22cf324a_372c_4445_b54c_2bbf176ec24f.slice - libcontainer container kubepods-besteffort-pod22cf324a_372c_4445_b54c_2bbf176ec24f.slice. Oct 31 01:45:42.100987 systemd[1]: Created slice kubepods-besteffort-podb0d8846e_dd18_434c_b179_e3c2878ecf3f.slice - libcontainer container kubepods-besteffort-podb0d8846e_dd18_434c_b179_e3c2878ecf3f.slice. Oct 31 01:45:42.127912 systemd[1]: Created slice kubepods-burstable-pode995549a_d4b6_43b7_9c52_4c9c14a4dcdf.slice - libcontainer container kubepods-burstable-pode995549a_d4b6_43b7_9c52_4c9c14a4dcdf.slice. Oct 31 01:45:42.138544 systemd[1]: Created slice kubepods-burstable-podb865c705_77c3_44e2_b527_f7fc482e79fd.slice - libcontainer container kubepods-burstable-podb865c705_77c3_44e2_b527_f7fc482e79fd.slice. Oct 31 01:45:42.150303 systemd[1]: Created slice kubepods-besteffort-podf712c016_8a6e_4625_aab4_a80c982f13bc.slice - libcontainer container kubepods-besteffort-podf712c016_8a6e_4625_aab4_a80c982f13bc.slice. Oct 31 01:45:42.160068 systemd[1]: Created slice kubepods-besteffort-pod7238a7c0_ff8f_443f_ab69_f2ee0be198c2.slice - libcontainer container kubepods-besteffort-pod7238a7c0_ff8f_443f_ab69_f2ee0be198c2.slice. 
Oct 31 01:45:42.167676 kubelet[2672]: I1031 01:45:42.167637 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvcc\" (UniqueName: \"kubernetes.io/projected/b865c705-77c3-44e2-b527-f7fc482e79fd-kube-api-access-vbvcc\") pod \"coredns-66bc5c9577-zlj9c\" (UID: \"b865c705-77c3-44e2-b527-f7fc482e79fd\") " pod="kube-system/coredns-66bc5c9577-zlj9c" Oct 31 01:45:42.167676 kubelet[2672]: I1031 01:45:42.167688 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngwxl\" (UniqueName: \"kubernetes.io/projected/7238a7c0-ff8f-443f-ab69-f2ee0be198c2-kube-api-access-ngwxl\") pod \"goldmane-7c778bb748-tgfh4\" (UID: \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\") " pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.167676 kubelet[2672]: I1031 01:45:42.167717 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4abe04b-b171-45a2-9c26-8c077d6bf990-calico-apiserver-certs\") pod \"calico-apiserver-6b58b8b6b-r89xp\" (UID: \"f4abe04b-b171-45a2-9c26-8c077d6bf990\") " pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" Oct 31 01:45:42.169991 kubelet[2672]: I1031 01:45:42.167749 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-backend-key-pair\") pod \"whisker-558d6757c7-7cf2w\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " pod="calico-system/whisker-558d6757c7-7cf2w" Oct 31 01:45:42.169991 kubelet[2672]: I1031 01:45:42.167788 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7238a7c0-ff8f-443f-ab69-f2ee0be198c2-config\") pod \"goldmane-7c778bb748-tgfh4\" (UID: \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\") " pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.169991 kubelet[2672]: I1031 01:45:42.167823 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-ca-bundle\") pod \"whisker-558d6757c7-7cf2w\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " pod="calico-system/whisker-558d6757c7-7cf2w" Oct 31 01:45:42.169991 kubelet[2672]: I1031 01:45:42.167854 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrqt6\" (UniqueName: \"kubernetes.io/projected/22cf324a-372c-4445-b54c-2bbf176ec24f-kube-api-access-rrqt6\") pod \"whisker-558d6757c7-7cf2w\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " pod="calico-system/whisker-558d6757c7-7cf2w" Oct 31 01:45:42.169991 kubelet[2672]: I1031 01:45:42.167881 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv97g\" (UniqueName: \"kubernetes.io/projected/e995549a-d4b6-43b7-9c52-4c9c14a4dcdf-kube-api-access-gv97g\") pod \"coredns-66bc5c9577-f9xmz\" (UID: \"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf\") " pod="kube-system/coredns-66bc5c9577-f9xmz" Oct 31 01:45:42.170296 kubelet[2672]: I1031 01:45:42.167909 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b865c705-77c3-44e2-b527-f7fc482e79fd-config-volume\") pod \"coredns-66bc5c9577-zlj9c\" (UID: \"b865c705-77c3-44e2-b527-f7fc482e79fd\") " pod="kube-system/coredns-66bc5c9577-zlj9c" Oct 31 01:45:42.170296 kubelet[2672]: I1031 01:45:42.167944 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f712c016-8a6e-4625-aab4-a80c982f13bc-calico-apiserver-certs\") pod \"calico-apiserver-6b58b8b6b-6wxlj\" (UID: \"f712c016-8a6e-4625-aab4-a80c982f13bc\") " pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" Oct 31 01:45:42.170296 kubelet[2672]: I1031 01:45:42.167993 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b0d8846e-dd18-434c-b179-e3c2878ecf3f-tigera-ca-bundle\") pod \"calico-kube-controllers-84c75ff6b-nf76t\" (UID: \"b0d8846e-dd18-434c-b179-e3c2878ecf3f\") " pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" Oct 31 01:45:42.170296 kubelet[2672]: I1031 01:45:42.168022 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7238a7c0-ff8f-443f-ab69-f2ee0be198c2-goldmane-key-pair\") pod \"goldmane-7c778bb748-tgfh4\" (UID: \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\") " pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.170296 kubelet[2672]: I1031 01:45:42.168066 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqd6r\" (UniqueName: \"kubernetes.io/projected/b0d8846e-dd18-434c-b179-e3c2878ecf3f-kube-api-access-wqd6r\") pod \"calico-kube-controllers-84c75ff6b-nf76t\" (UID: \"b0d8846e-dd18-434c-b179-e3c2878ecf3f\") " pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" Oct 31 01:45:42.170573 kubelet[2672]: I1031 01:45:42.168096 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jnnv\" (UniqueName: \"kubernetes.io/projected/f4abe04b-b171-45a2-9c26-8c077d6bf990-kube-api-access-8jnnv\") pod \"calico-apiserver-6b58b8b6b-r89xp\" (UID: \"f4abe04b-b171-45a2-9c26-8c077d6bf990\") " pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" Oct 31 01:45:42.170573 kubelet[2672]: I1031 01:45:42.168121 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hp9p\" (UniqueName: \"kubernetes.io/projected/f712c016-8a6e-4625-aab4-a80c982f13bc-kube-api-access-2hp9p\") pod \"calico-apiserver-6b58b8b6b-6wxlj\" (UID: \"f712c016-8a6e-4625-aab4-a80c982f13bc\") " pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" Oct 31 01:45:42.170573 kubelet[2672]: I1031 01:45:42.168149 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e995549a-d4b6-43b7-9c52-4c9c14a4dcdf-config-volume\") pod \"coredns-66bc5c9577-f9xmz\" (UID: \"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf\") " pod="kube-system/coredns-66bc5c9577-f9xmz" Oct 31 01:45:42.170573 kubelet[2672]: I1031 01:45:42.168176 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7238a7c0-ff8f-443f-ab69-f2ee0be198c2-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-tgfh4\" (UID: \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\") " 
pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.174985 systemd[1]: Created slice kubepods-besteffort-podf4abe04b_b171_45a2_9c26_8c077d6bf990.slice - libcontainer container kubepods-besteffort-podf4abe04b_b171_45a2_9c26_8c077d6bf990.slice. Oct 31 01:45:42.389501 containerd[1499]: time="2025-10-31T01:45:42.389259040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558d6757c7-7cf2w,Uid:22cf324a-372c-4445-b54c-2bbf176ec24f,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:42.411609 containerd[1499]: time="2025-10-31T01:45:42.411107100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c75ff6b-nf76t,Uid:b0d8846e-dd18-434c-b179-e3c2878ecf3f,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:42.451357 containerd[1499]: time="2025-10-31T01:45:42.451100558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlj9c,Uid:b865c705-77c3-44e2-b527-f7fc482e79fd,Namespace:kube-system,Attempt:0,}" Oct 31 01:45:42.456975 containerd[1499]: time="2025-10-31T01:45:42.456880230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f9xmz,Uid:e995549a-d4b6-43b7-9c52-4c9c14a4dcdf,Namespace:kube-system,Attempt:0,}" Oct 31 01:45:42.470030 containerd[1499]: time="2025-10-31T01:45:42.469865744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-6wxlj,Uid:f712c016-8a6e-4625-aab4-a80c982f13bc,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:45:42.487683 sshd[3485]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.140.17.124 user=root Oct 31 01:45:42.491701 containerd[1499]: time="2025-10-31T01:45:42.491659276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-r89xp,Uid:f4abe04b-b171-45a2-9c26-8c077d6bf990,Namespace:calico-apiserver,Attempt:0,}" Oct 31 01:45:42.492258 containerd[1499]: time="2025-10-31T01:45:42.492225307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgfh4,Uid:7238a7c0-ff8f-443f-ab69-f2ee0be198c2,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:42.589050 systemd[1]: Created slice kubepods-besteffort-podb91990ed_b519_4003_921b_695c5958edac.slice - libcontainer container kubepods-besteffort-podb91990ed_b519_4003_921b_695c5958edac.slice. 
Oct 31 01:45:42.600055 containerd[1499]: time="2025-10-31T01:45:42.599944816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvbwj,Uid:b91990ed-b519-4003-921b-695c5958edac,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:42.800004 containerd[1499]: time="2025-10-31T01:45:42.799829322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 01:45:42.899290 containerd[1499]: time="2025-10-31T01:45:42.899115850Z" level=error msg="Failed to destroy network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.927608 containerd[1499]: time="2025-10-31T01:45:42.927256176Z" level=error msg="encountered an error cleaning up failed sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.928336 containerd[1499]: time="2025-10-31T01:45:42.928284970Z" level=error msg="Failed to destroy network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.928663 containerd[1499]: time="2025-10-31T01:45:42.928616758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-558d6757c7-7cf2w,Uid:22cf324a-372c-4445-b54c-2bbf176ec24f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.929295 containerd[1499]: time="2025-10-31T01:45:42.929260808Z" level=error msg="encountered an error cleaning up failed sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.929550 containerd[1499]: time="2025-10-31T01:45:42.929389069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-6wxlj,Uid:f712c016-8a6e-4625-aab4-a80c982f13bc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.961855 containerd[1499]: time="2025-10-31T01:45:42.961791157Z" level=error msg="Failed to destroy network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 
01:45:42.965091 kubelet[2672]: E1031 01:45:42.962114 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.965091 kubelet[2672]: E1031 01:45:42.962230 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" Oct 31 01:45:42.965091 kubelet[2672]: E1031 01:45:42.962268 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" Oct 31 01:45:42.965378 kubelet[2672]: E1031 01:45:42.962365 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:45:42.965378 kubelet[2672]: E1031 01:45:42.962646 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.965378 kubelet[2672]: E1031 01:45:42.962700 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558d6757c7-7cf2w" Oct 31 01:45:42.972502 kubelet[2672]: E1031 01:45:42.962768 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-558d6757c7-7cf2w" Oct 31 01:45:42.972502 kubelet[2672]: E1031 01:45:42.962819 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-558d6757c7-7cf2w_calico-system(22cf324a-372c-4445-b54c-2bbf176ec24f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-558d6757c7-7cf2w_calico-system(22cf324a-372c-4445-b54c-2bbf176ec24f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558d6757c7-7cf2w" podUID="22cf324a-372c-4445-b54c-2bbf176ec24f" Oct 31 01:45:42.979398 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5-shm.mount: Deactivated successfully. Oct 31 01:45:42.985787 containerd[1499]: time="2025-10-31T01:45:42.985399759Z" level=error msg="encountered an error cleaning up failed sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.986009 containerd[1499]: time="2025-10-31T01:45:42.985971914Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgfh4,Uid:7238a7c0-ff8f-443f-ab69-f2ee0be198c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.989198 kubelet[2672]: E1031 01:45:42.989135 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.989382 kubelet[2672]: E1031 01:45:42.989208 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.989382 kubelet[2672]: E1031 01:45:42.989238 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-7c778bb748-tgfh4" Oct 31 01:45:42.989382 kubelet[2672]: E1031 01:45:42.989340 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:45:42.995436 containerd[1499]: time="2025-10-31T01:45:42.995263368Z" level=error msg="Failed to destroy network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:42.999404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35-shm.mount: Deactivated successfully. Oct 31 01:45:43.004497 containerd[1499]: time="2025-10-31T01:45:43.004438789Z" level=error msg="encountered an error cleaning up failed sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.004684 containerd[1499]: time="2025-10-31T01:45:43.004533742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c75ff6b-nf76t,Uid:b0d8846e-dd18-434c-b179-e3c2878ecf3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.006209 kubelet[2672]: E1031 01:45:43.006088 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.006503 kubelet[2672]: E1031 01:45:43.006187 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" Oct 31 01:45:43.006503 kubelet[2672]: E1031 01:45:43.006391 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" Oct 31 01:45:43.006929 kubelet[2672]: E1031 01:45:43.006482 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:45:43.020913 containerd[1499]: time="2025-10-31T01:45:43.020728397Z" level=error msg="Failed to destroy network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.023624 containerd[1499]: time="2025-10-31T01:45:43.021139381Z" level=error msg="Failed to destroy network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.024152 containerd[1499]: time="2025-10-31T01:45:43.024089498Z" level=error msg="encountered an error cleaning up failed sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.024884 containerd[1499]: time="2025-10-31T01:45:43.024692831Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlj9c,Uid:b865c705-77c3-44e2-b527-f7fc482e79fd,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.025197 containerd[1499]: time="2025-10-31T01:45:43.024610732Z" level=error msg="encountered an error cleaning up failed sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.025197 containerd[1499]: time="2025-10-31T01:45:43.025048325Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-r89xp,Uid:f4abe04b-b171-45a2-9c26-8c077d6bf990,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.026032 kubelet[2672]: E1031 01:45:43.025387 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.026032 kubelet[2672]: E1031 01:45:43.025471 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" Oct 31 01:45:43.026032 kubelet[2672]: E1031 01:45:43.025501 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" Oct 31 01:45:43.025890 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c-shm.mount: Deactivated successfully. 
Oct 31 01:45:43.032674 kubelet[2672]: E1031 01:45:43.025570 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:45:43.032674 kubelet[2672]: E1031 01:45:43.027283 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.032674 kubelet[2672]: E1031 01:45:43.027355 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlj9c" Oct 31 01:45:43.026524 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265-shm.mount: Deactivated successfully. 
Oct 31 01:45:43.033536 kubelet[2672]: E1031 01:45:43.027380 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-zlj9c" Oct 31 01:45:43.033536 kubelet[2672]: E1031 01:45:43.027429 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-zlj9c_kube-system(b865c705-77c3-44e2-b527-f7fc482e79fd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-zlj9c_kube-system(b865c705-77c3-44e2-b527-f7fc482e79fd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlj9c" podUID="b865c705-77c3-44e2-b527-f7fc482e79fd" Oct 31 01:45:43.045371 containerd[1499]: time="2025-10-31T01:45:43.045274211Z" level=error msg="Failed to destroy network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.045517 containerd[1499]: time="2025-10-31T01:45:43.045274212Z" level=error msg="Failed to destroy network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.047117 containerd[1499]: time="2025-10-31T01:45:43.047078541Z" level=error msg="encountered an error cleaning up failed sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.047201 containerd[1499]: time="2025-10-31T01:45:43.047148581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f9xmz,Uid:e995549a-d4b6-43b7-9c52-4c9c14a4dcdf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.048835 kubelet[2672]: E1031 01:45:43.048789 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.048976 kubelet[2672]: E1031 
01:45:43.048872 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f9xmz" Oct 31 01:45:43.048976 kubelet[2672]: E1031 01:45:43.048926 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-f9xmz" Oct 31 01:45:43.049665 containerd[1499]: time="2025-10-31T01:45:43.049355735Z" level=error msg="encountered an error cleaning up failed sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.049665 containerd[1499]: time="2025-10-31T01:45:43.049408234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvbwj,Uid:b91990ed-b519-4003-921b-695c5958edac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.049513 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43-shm.mount: Deactivated successfully. 
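The sandbox failures above share one cause: the Calico CNI plugin resolves the node name by reading /var/lib/calico/nodename, a file the calico/node container writes when it starts (as the error text itself suggests), and at this point that container is not yet running, so every pod sandbox add and delete fails with the stat error. Below is a simplified Go sketch of that lookup, assuming only the behaviour visible in the log; it is not Calico's actual implementation, and the path constant is simply the one quoted in the error messages.

    // nodename_lookup.go: simplified sketch of the node-name lookup whose
    // failure appears throughout the sandbox errors above.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    func nodeName() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if err != nil {
            // Matches the failure mode in the log: the file does not exist
            // until the calico/node container has started and written it.
            return "", fmt.Errorf("reading %s: %w (is the calico/node container running and has it mounted /var/lib/calico/?)", nodenameFile, err)
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := nodeName()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("node name:", name)
    }

Once calico/node is up and the file exists, the same lookup succeeds and sandbox creation can proceed.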
Oct 31 01:45:43.050756 kubelet[2672]: E1031 01:45:43.050305 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-f9xmz_kube-system(e995549a-d4b6-43b7-9c52-4c9c14a4dcdf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-f9xmz_kube-system(e995549a-d4b6-43b7-9c52-4c9c14a4dcdf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f9xmz" podUID="e995549a-d4b6-43b7-9c52-4c9c14a4dcdf" Oct 31 01:45:43.052927 kubelet[2672]: E1031 01:45:43.051093 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.052927 kubelet[2672]: E1031 01:45:43.051150 2672 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:43.052927 kubelet[2672]: E1031 01:45:43.051174 2672 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lvbwj" Oct 31 01:45:43.053094 kubelet[2672]: E1031 01:45:43.051251 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:43.797628 kubelet[2672]: I1031 01:45:43.797330 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:43.803312 kubelet[2672]: I1031 01:45:43.802549 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:43.803941 containerd[1499]: time="2025-10-31T01:45:43.803649727Z" level=info msg="StopPodSandbox 
for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" Oct 31 01:45:43.806409 containerd[1499]: time="2025-10-31T01:45:43.806377330Z" level=info msg="Ensure that sandbox f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265 in task-service has been cleanup successfully" Oct 31 01:45:43.807676 containerd[1499]: time="2025-10-31T01:45:43.807124620Z" level=info msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" Oct 31 01:45:43.807676 containerd[1499]: time="2025-10-31T01:45:43.807549232Z" level=info msg="Ensure that sandbox 2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57 in task-service has been cleanup successfully" Oct 31 01:45:43.811636 kubelet[2672]: I1031 01:45:43.811039 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:43.813984 containerd[1499]: time="2025-10-31T01:45:43.813947197Z" level=info msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" Oct 31 01:45:43.814229 containerd[1499]: time="2025-10-31T01:45:43.814198857Z" level=info msg="Ensure that sandbox 03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f in task-service has been cleanup successfully" Oct 31 01:45:43.816194 kubelet[2672]: I1031 01:45:43.816161 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:43.819465 containerd[1499]: time="2025-10-31T01:45:43.819349218Z" level=info msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" Oct 31 01:45:43.819864 containerd[1499]: time="2025-10-31T01:45:43.819793350Z" level=info msg="Ensure that sandbox ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76 in task-service has been cleanup successfully" Oct 31 01:45:43.820697 kubelet[2672]: I1031 01:45:43.820664 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:43.825260 containerd[1499]: time="2025-10-31T01:45:43.824716536Z" level=info msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" Oct 31 01:45:43.825260 containerd[1499]: time="2025-10-31T01:45:43.824982861Z" level=info msg="Ensure that sandbox 646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c in task-service has been cleanup successfully" Oct 31 01:45:43.829098 kubelet[2672]: I1031 01:45:43.829068 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:43.830803 containerd[1499]: time="2025-10-31T01:45:43.830668388Z" level=info msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" Oct 31 01:45:43.831216 containerd[1499]: time="2025-10-31T01:45:43.830870254Z" level=info msg="Ensure that sandbox a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43 in task-service has been cleanup successfully" Oct 31 01:45:43.845057 kubelet[2672]: I1031 01:45:43.844989 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:43.851365 containerd[1499]: time="2025-10-31T01:45:43.850534138Z" level=info msg="StopPodSandbox for 
\"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" Oct 31 01:45:43.852952 kubelet[2672]: I1031 01:45:43.852362 2672 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:43.856452 containerd[1499]: time="2025-10-31T01:45:43.856375439Z" level=info msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" Oct 31 01:45:43.856668 containerd[1499]: time="2025-10-31T01:45:43.856633848Z" level=info msg="Ensure that sandbox 77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5 in task-service has been cleanup successfully" Oct 31 01:45:43.856943 containerd[1499]: time="2025-10-31T01:45:43.856820522Z" level=info msg="Ensure that sandbox f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35 in task-service has been cleanup successfully" Oct 31 01:45:43.941517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76-shm.mount: Deactivated successfully. Oct 31 01:45:43.953885 containerd[1499]: time="2025-10-31T01:45:43.953825404Z" level=error msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" failed" error="failed to destroy network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.954498 kubelet[2672]: E1031 01:45:43.954401 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:43.954640 kubelet[2672]: E1031 01:45:43.954533 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f"} Oct 31 01:45:43.954640 kubelet[2672]: E1031 01:45:43.954625 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22cf324a-372c-4445-b54c-2bbf176ec24f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:43.954943 kubelet[2672]: E1031 01:45:43.954667 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22cf324a-372c-4445-b54c-2bbf176ec24f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-558d6757c7-7cf2w" 
podUID="22cf324a-372c-4445-b54c-2bbf176ec24f" Oct 31 01:45:43.962558 containerd[1499]: time="2025-10-31T01:45:43.962282836Z" level=error msg="StopPodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" failed" error="failed to destroy network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.964073 kubelet[2672]: E1031 01:45:43.962558 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:43.964073 kubelet[2672]: E1031 01:45:43.963074 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265"} Oct 31 01:45:43.964073 kubelet[2672]: E1031 01:45:43.963119 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b865c705-77c3-44e2-b527-f7fc482e79fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:43.964073 kubelet[2672]: E1031 01:45:43.963160 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b865c705-77c3-44e2-b527-f7fc482e79fd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-zlj9c" podUID="b865c705-77c3-44e2-b527-f7fc482e79fd" Oct 31 01:45:43.969557 containerd[1499]: time="2025-10-31T01:45:43.968696377Z" level=error msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" failed" error="failed to destroy network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.969718 kubelet[2672]: E1031 01:45:43.969138 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:43.969718 kubelet[2672]: E1031 
01:45:43.969187 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76"} Oct 31 01:45:43.969718 kubelet[2672]: E1031 01:45:43.969220 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b91990ed-b519-4003-921b-695c5958edac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:43.969718 kubelet[2672]: E1031 01:45:43.969280 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b91990ed-b519-4003-921b-695c5958edac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:45:43.997096 containerd[1499]: time="2025-10-31T01:45:43.996760781Z" level=error msg="StopPodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" failed" error="failed to destroy network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:43.997452 kubelet[2672]: E1031 01:45:43.997370 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:43.997633 kubelet[2672]: E1031 01:45:43.997451 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35"} Oct 31 01:45:43.997633 kubelet[2672]: E1031 01:45:43.997493 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b0d8846e-dd18-434c-b179-e3c2878ecf3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:43.997633 kubelet[2672]: E1031 01:45:43.997534 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b0d8846e-dd18-434c-b179-e3c2878ecf3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:45:44.007769 containerd[1499]: time="2025-10-31T01:45:44.007712694Z" level=error msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" failed" error="failed to destroy network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:44.008424 kubelet[2672]: E1031 01:45:44.008162 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:44.008424 kubelet[2672]: E1031 01:45:44.008243 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c"} Oct 31 01:45:44.008424 kubelet[2672]: E1031 01:45:44.008286 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f4abe04b-b171-45a2-9c26-8c077d6bf990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:44.008424 kubelet[2672]: E1031 01:45:44.008359 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f4abe04b-b171-45a2-9c26-8c077d6bf990\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:45:44.009022 containerd[1499]: time="2025-10-31T01:45:44.008740534Z" level=error msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" failed" error="failed to destroy network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:44.010150 kubelet[2672]: E1031 01:45:44.009778 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:44.010150 kubelet[2672]: E1031 01:45:44.009830 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57"} Oct 31 01:45:44.010150 kubelet[2672]: E1031 01:45:44.009865 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f712c016-8a6e-4625-aab4-a80c982f13bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:44.010150 kubelet[2672]: E1031 01:45:44.009897 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f712c016-8a6e-4625-aab4-a80c982f13bc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:45:44.018395 containerd[1499]: time="2025-10-31T01:45:44.017719715Z" level=error msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" failed" error="failed to destroy network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:44.018395 containerd[1499]: time="2025-10-31T01:45:44.018329813Z" level=error msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" failed" error="failed to destroy network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 01:45:44.019090 kubelet[2672]: E1031 01:45:44.018014 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:44.019090 kubelet[2672]: E1031 01:45:44.018089 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5"} Oct 31 01:45:44.019090 kubelet[2672]: E1031 
01:45:44.018132 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:44.019090 kubelet[2672]: E1031 01:45:44.018168 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7238a7c0-ff8f-443f-ab69-f2ee0be198c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:45:44.019719 kubelet[2672]: E1031 01:45:44.019681 2672 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:44.019948 kubelet[2672]: E1031 01:45:44.019727 2672 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43"} Oct 31 01:45:44.019948 kubelet[2672]: E1031 01:45:44.019760 2672 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Oct 31 01:45:44.019948 kubelet[2672]: E1031 01:45:44.019824 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-f9xmz" podUID="e995549a-d4b6-43b7-9c52-4c9c14a4dcdf" Oct 31 01:45:44.155711 sshd[3418]: PAM: Permission denied for root from 45.140.17.124 Oct 31 01:45:44.629934 sshd[3418]: Connection reset by authenticating user root 45.140.17.124 port 62072 [preauth] Oct 31 01:45:44.634883 systemd[1]: sshd@9-10.230.44.66:22-45.140.17.124:62072.service: Deactivated successfully. 
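Every CreatePodSandbox/StopPodSandbox failure above reports the same underlying condition: the CNI plugin stats /var/lib/calico/nodename and the file is missing because calico/node is not yet running with /var/lib/calico/ mounted. A minimal sketch of that check, assuming only the path and message quoted in the log (illustrative Go, not Calico's actual source):

```go
// Minimal sketch of the nodename check behind the repeated
// `plugin type="calico" failed` errors above: the plugin reads
// /var/lib/calico/nodename, which calico/node writes once it is
// running with /var/lib/calico/ mounted. Not Calico's real code.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the log

func main() {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		// Matches the condition reported repeatedly in the log above.
		fmt.Printf("stat %s: no such file or directory: "+
			"check that the calico/node container is running and has mounted /var/lib/calico/\n",
			nodenameFile)
		os.Exit(1)
	} else if err != nil {
		fmt.Println("error reading nodename:", err)
		os.Exit(1)
	}
	fmt.Println("calico nodename:", string(data))
}
```

Once calico-node starts and writes the file, the same check passes, which matches the RunPodSandbox calls that succeed later in this log.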
Oct 31 01:45:44.741100 systemd[1]: Started sshd@10-10.230.44.66:22-45.140.17.124:53828.service - OpenSSH per-connection server daemon (45.140.17.124:53828). Oct 31 01:45:46.307978 sshd[3827]: Invalid user telecomadmin from 45.140.17.124 port 53828 Oct 31 01:45:46.809890 sshd[3829]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:46.812180 sshd[3827]: Postponed keyboard-interactive for invalid user telecomadmin from 45.140.17.124 port 53828 ssh2 [preauth] Oct 31 01:45:47.302646 sshd[3829]: pam_unix(sshd:auth): check pass; user unknown Oct 31 01:45:47.302710 sshd[3829]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.140.17.124 Oct 31 01:45:47.303490 sshd[3829]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:49.854082 sshd[3827]: PAM: Permission denied for illegal user telecomadmin from 45.140.17.124 Oct 31 01:45:49.857706 sshd[3827]: Failed keyboard-interactive/pam for invalid user telecomadmin from 45.140.17.124 port 53828 ssh2 Oct 31 01:45:50.251012 sshd[3827]: Connection reset by invalid user telecomadmin 45.140.17.124 port 53828 [preauth] Oct 31 01:45:50.256009 systemd[1]: sshd@10-10.230.44.66:22-45.140.17.124:53828.service: Deactivated successfully. Oct 31 01:45:50.343993 systemd[1]: Started sshd@11-10.230.44.66:22-45.140.17.124:32722.service - OpenSSH per-connection server daemon (45.140.17.124:32722). Oct 31 01:45:51.994203 sshd[3837]: Invalid user admin from 45.140.17.124 port 32722 Oct 31 01:45:52.449255 sshd[3839]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:52.453096 sshd[3837]: Postponed keyboard-interactive for invalid user admin from 45.140.17.124 port 32722 ssh2 [preauth] Oct 31 01:45:52.703220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677304334.mount: Deactivated successfully. 
Oct 31 01:45:52.805675 containerd[1499]: time="2025-10-31T01:45:52.780441954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 01:45:52.806303 containerd[1499]: time="2025-10-31T01:45:52.805707794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:52.852980 containerd[1499]: time="2025-10-31T01:45:52.852905751Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:52.855443 containerd[1499]: time="2025-10-31T01:45:52.855008814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 01:45:52.857603 containerd[1499]: time="2025-10-31T01:45:52.857544341Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 10.05443688s" Oct 31 01:45:52.857738 containerd[1499]: time="2025-10-31T01:45:52.857709266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 01:45:52.923573 containerd[1499]: time="2025-10-31T01:45:52.923497987Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 01:45:52.982400 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345438447.mount: Deactivated successfully. Oct 31 01:45:53.015468 containerd[1499]: time="2025-10-31T01:45:53.015345579Z" level=info msg="CreateContainer within sandbox \"fa697adf4d164031a3ba8b7efa9cd06c3bc068082adbbcc8c3deb8b91c36586b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19\"" Oct 31 01:45:53.016637 containerd[1499]: time="2025-10-31T01:45:53.016602717Z" level=info msg="StartContainer for \"e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19\"" Oct 31 01:45:53.163271 systemd[1]: Started cri-containerd-e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19.scope - libcontainer container e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19. Oct 31 01:45:53.238106 containerd[1499]: time="2025-10-31T01:45:53.237967769Z" level=info msg="StartContainer for \"e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19\" returns successfully" Oct 31 01:45:53.358156 sshd[3839]: pam_unix(sshd:auth): check pass; user unknown Oct 31 01:45:53.358204 sshd[3839]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.140.17.124 Oct 31 01:45:53.359116 sshd[3839]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:53.576744 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 01:45:53.577874 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 31 01:45:53.893555 containerd[1499]: time="2025-10-31T01:45:53.891995330Z" level=info msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" Oct 31 01:45:54.110506 kubelet[2672]: I1031 01:45:54.107814 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rlfcl" podStartSLOduration=2.700839938 podStartE2EDuration="26.092773995s" podCreationTimestamp="2025-10-31 01:45:28 +0000 UTC" firstStartedPulling="2025-10-31 01:45:29.472937402 +0000 UTC m=+29.171705157" lastFinishedPulling="2025-10-31 01:45:52.864871446 +0000 UTC m=+52.563639214" observedRunningTime="2025-10-31 01:45:53.954145766 +0000 UTC m=+53.652913546" watchObservedRunningTime="2025-10-31 01:45:54.092773995 +0000 UTC m=+53.791541765" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.092 [INFO][3902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.094 [INFO][3902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" iface="eth0" netns="/var/run/netns/cni-8f311637-13ae-bab3-5f67-2da6fa3844bd" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.096 [INFO][3902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" iface="eth0" netns="/var/run/netns/cni-8f311637-13ae-bab3-5f67-2da6fa3844bd" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.097 [INFO][3902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" iface="eth0" netns="/var/run/netns/cni-8f311637-13ae-bab3-5f67-2da6fa3844bd" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.098 [INFO][3902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.098 [INFO][3902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.310 [INFO][3916] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.312 [INFO][3916] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.312 [INFO][3916] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.329 [WARNING][3916] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.330 [INFO][3916] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.332 [INFO][3916] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:54.340274 containerd[1499]: 2025-10-31 01:45:54.335 [INFO][3902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:45:54.348538 systemd[1]: run-netns-cni\x2d8f311637\x2d13ae\x2dbab3\x2d5f67\x2d2da6fa3844bd.mount: Deactivated successfully. Oct 31 01:45:54.353654 containerd[1499]: time="2025-10-31T01:45:54.353598781Z" level=info msg="TearDown network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" successfully" Oct 31 01:45:54.353995 containerd[1499]: time="2025-10-31T01:45:54.353844291Z" level=info msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" returns successfully" Oct 31 01:45:54.475682 kubelet[2672]: I1031 01:45:54.475353 2672 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrqt6\" (UniqueName: \"kubernetes.io/projected/22cf324a-372c-4445-b54c-2bbf176ec24f-kube-api-access-rrqt6\") pod \"22cf324a-372c-4445-b54c-2bbf176ec24f\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " Oct 31 01:45:54.475682 kubelet[2672]: I1031 01:45:54.475449 2672 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-backend-key-pair\") pod \"22cf324a-372c-4445-b54c-2bbf176ec24f\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " Oct 31 01:45:54.475682 kubelet[2672]: I1031 01:45:54.475506 2672 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-ca-bundle\") pod \"22cf324a-372c-4445-b54c-2bbf176ec24f\" (UID: \"22cf324a-372c-4445-b54c-2bbf176ec24f\") " Oct 31 01:45:54.514737 kubelet[2672]: I1031 01:45:54.510264 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "22cf324a-372c-4445-b54c-2bbf176ec24f" (UID: "22cf324a-372c-4445-b54c-2bbf176ec24f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 01:45:54.516602 kubelet[2672]: I1031 01:45:54.515937 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "22cf324a-372c-4445-b54c-2bbf176ec24f" (UID: "22cf324a-372c-4445-b54c-2bbf176ec24f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 01:45:54.522946 kubelet[2672]: I1031 01:45:54.522912 2672 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22cf324a-372c-4445-b54c-2bbf176ec24f-kube-api-access-rrqt6" (OuterVolumeSpecName: "kube-api-access-rrqt6") pod "22cf324a-372c-4445-b54c-2bbf176ec24f" (UID: "22cf324a-372c-4445-b54c-2bbf176ec24f"). InnerVolumeSpecName "kube-api-access-rrqt6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 01:45:54.525408 systemd[1]: var-lib-kubelet-pods-22cf324a\x2d372c\x2d4445\x2db54c\x2d2bbf176ec24f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 31 01:45:54.532460 systemd[1]: var-lib-kubelet-pods-22cf324a\x2d372c\x2d4445\x2db54c\x2d2bbf176ec24f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drrqt6.mount: Deactivated successfully. Oct 31 01:45:54.578661 kubelet[2672]: I1031 01:45:54.576969 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rrqt6\" (UniqueName: \"kubernetes.io/projected/22cf324a-372c-4445-b54c-2bbf176ec24f-kube-api-access-rrqt6\") on node \"srv-n5tpq.gb1.brightbox.com\" DevicePath \"\"" Oct 31 01:45:54.578661 kubelet[2672]: I1031 01:45:54.577033 2672 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-backend-key-pair\") on node \"srv-n5tpq.gb1.brightbox.com\" DevicePath \"\"" Oct 31 01:45:54.578661 kubelet[2672]: I1031 01:45:54.577075 2672 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22cf324a-372c-4445-b54c-2bbf176ec24f-whisker-ca-bundle\") on node \"srv-n5tpq.gb1.brightbox.com\" DevicePath \"\"" Oct 31 01:45:54.581220 containerd[1499]: time="2025-10-31T01:45:54.580770969Z" level=info msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" Oct 31 01:45:54.635751 systemd[1]: Removed slice kubepods-besteffort-pod22cf324a_372c_4445_b54c_2bbf176ec24f.slice - libcontainer container kubepods-besteffort-pod22cf324a_372c_4445_b54c_2bbf176ec24f.slice. Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.691 [INFO][3960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.691 [INFO][3960] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" iface="eth0" netns="/var/run/netns/cni-8b80f649-2a8a-7043-99db-08881ad8aa7d" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.691 [INFO][3960] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" iface="eth0" netns="/var/run/netns/cni-8b80f649-2a8a-7043-99db-08881ad8aa7d" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.692 [INFO][3960] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" iface="eth0" netns="/var/run/netns/cni-8b80f649-2a8a-7043-99db-08881ad8aa7d" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.692 [INFO][3960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.693 [INFO][3960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.722 [INFO][3967] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.722 [INFO][3967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.722 [INFO][3967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.731 [WARNING][3967] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.731 [INFO][3967] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.734 [INFO][3967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:54.740279 containerd[1499]: 2025-10-31 01:45:54.737 [INFO][3960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:45:54.743353 containerd[1499]: time="2025-10-31T01:45:54.742684504Z" level=info msg="TearDown network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" successfully" Oct 31 01:45:54.743353 containerd[1499]: time="2025-10-31T01:45:54.742721625Z" level=info msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" returns successfully" Oct 31 01:45:54.744189 systemd[1]: run-netns-cni\x2d8b80f649\x2d2a8a\x2d7043\x2d99db\x2d08881ad8aa7d.mount: Deactivated successfully. 
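The escaped unit names that keep appearing in these mount entries (run-containerd-…-shm.mount, var-lib-kubelet-pods-…\x7esecret-…, run-netns-cni\x2d….mount) come from systemd's path escaping: the leading "/" is dropped, remaining "/" become "-", and other special bytes are hex-escaped ("-" as \x2d, "~" as \x7e). A simplified re-implementation for illustration (not systemd's own code) that reproduces the netns unit name directly above:

```go
// Simplified sketch of systemd path escaping, enough to reproduce the
// mount unit names seen in this log. Real systemd also special-cases a
// leading "." and empty paths; this sketch skips that.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.':
			b.WriteByte(c) // safe characters pass through unchanged
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// Reproduces the netns mount unit deactivated in the entry above.
	fmt.Println(escapePath("/run/netns/cni-8b80f649-2a8a-7043-99db-08881ad8aa7d") + ".mount")
}
```

The same rule explains the \x7e in the kubelet volume units: the on-disk volume path uses "~" between plugin type and volume name (kubernetes.io~secret), which escapes to \x7e.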
Oct 31 01:45:54.749279 containerd[1499]: time="2025-10-31T01:45:54.749212108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-6wxlj,Uid:f712c016-8a6e-4625-aab4-a80c982f13bc,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:45:54.993635 systemd-networkd[1438]: caliad248fdecf6: Link UP Oct 31 01:45:54.995244 systemd-networkd[1438]: caliad248fdecf6: Gained carrier Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.810 [INFO][3978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.827 [INFO][3978] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0 calico-apiserver-6b58b8b6b- calico-apiserver f712c016-8a6e-4625-aab4-a80c982f13bc 896 0 2025-10-31 01:45:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b58b8b6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com calico-apiserver-6b58b8b6b-6wxlj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliad248fdecf6 [] [] }} ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.827 [INFO][3978] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.872 [INFO][3986] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" HandleID="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.872 [INFO][3986] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" HandleID="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"calico-apiserver-6b58b8b6b-6wxlj", "timestamp":"2025-10-31 01:45:54.872017213 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.872 [INFO][3986] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.872 [INFO][3986] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.872 [INFO][3986] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.887 [INFO][3986] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.902 [INFO][3986] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.910 [INFO][3986] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.912 [INFO][3986] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.915 [INFO][3986] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.915 [INFO][3986] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.918 [INFO][3986] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414 Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.928 [INFO][3986] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.942 [INFO][3986] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.193/26] block=192.168.82.192/26 handle="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.943 [INFO][3986] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.193/26] handle="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.943 [INFO][3986] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
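For reference, the block arithmetic behind these IPAM entries: 192.168.82.192/26 spans 64 addresses (.192 through .255), and the first pod address claimed out of it in this log is 192.168.82.193. A small sketch of that arithmetic with the standard library (not Calico's IPAM code, which additionally tracks handles and host affinities):

```go
// Block arithmetic for the /26 affinity block seen in the IPAM lines above.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.82.192/26") // affinity block from the log
	size := 1 << (32 - block.Bits())                    // 2^(32-26) = 64 addresses

	last := block.Addr()
	for i := 0; i < size-1; i++ { // walk to the final address in the block
		last = last.Next()
	}
	fmt.Printf("block %s: %d addresses, %s - %s\n", block, size, block.Addr(), last)
	// The log shows 192.168.82.193/26 being claimed for the first pod sandbox.
}
```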
Oct 31 01:45:55.062210 containerd[1499]: 2025-10-31 01:45:54.943 [INFO][3986] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.193/26] IPv6=[] ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" HandleID="k8s-pod-network.6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:54.948 [INFO][3978] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f712c016-8a6e-4625-aab4-a80c982f13bc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b58b8b6b-6wxlj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad248fdecf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:54.948 [INFO][3978] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.193/32] ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:54.948 [INFO][3978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliad248fdecf6 ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:54.992 [INFO][3978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:55.002 
[INFO][3978] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f712c016-8a6e-4625-aab4-a80c982f13bc", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414", Pod:"calico-apiserver-6b58b8b6b-6wxlj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad248fdecf6", MAC:"2a:ac:00:ea:11:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:55.067305 containerd[1499]: 2025-10-31 01:45:55.046 [INFO][3978] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-6wxlj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:45:55.067709 sshd[3837]: PAM: Permission denied for illegal user admin from 45.140.17.124 Oct 31 01:45:55.067709 sshd[3837]: Failed keyboard-interactive/pam for invalid user admin from 45.140.17.124 port 32722 ssh2 Oct 31 01:45:55.141403 containerd[1499]: time="2025-10-31T01:45:55.141041515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:55.144017 containerd[1499]: time="2025-10-31T01:45:55.143951840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:55.144496 containerd[1499]: time="2025-10-31T01:45:55.144064618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:55.147684 containerd[1499]: time="2025-10-31T01:45:55.146526934Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:55.187805 systemd[1]: Started cri-containerd-6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414.scope - libcontainer container 6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414. Oct 31 01:45:55.247275 systemd[1]: Created slice kubepods-besteffort-pod9911312f_9ce5_498d_99df_b48b4eafeab7.slice - libcontainer container kubepods-besteffort-pod9911312f_9ce5_498d_99df_b48b4eafeab7.slice. Oct 31 01:45:55.390543 kubelet[2672]: I1031 01:45:55.390461 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldpxp\" (UniqueName: \"kubernetes.io/projected/9911312f-9ce5-498d-99df-b48b4eafeab7-kube-api-access-ldpxp\") pod \"whisker-67bdd86bbf-pj5dz\" (UID: \"9911312f-9ce5-498d-99df-b48b4eafeab7\") " pod="calico-system/whisker-67bdd86bbf-pj5dz" Oct 31 01:45:55.391156 kubelet[2672]: I1031 01:45:55.390574 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9911312f-9ce5-498d-99df-b48b4eafeab7-whisker-ca-bundle\") pod \"whisker-67bdd86bbf-pj5dz\" (UID: \"9911312f-9ce5-498d-99df-b48b4eafeab7\") " pod="calico-system/whisker-67bdd86bbf-pj5dz" Oct 31 01:45:55.391156 kubelet[2672]: I1031 01:45:55.390735 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9911312f-9ce5-498d-99df-b48b4eafeab7-whisker-backend-key-pair\") pod \"whisker-67bdd86bbf-pj5dz\" (UID: \"9911312f-9ce5-498d-99df-b48b4eafeab7\") " pod="calico-system/whisker-67bdd86bbf-pj5dz" Oct 31 01:45:55.470193 containerd[1499]: time="2025-10-31T01:45:55.469546747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-6wxlj,Uid:f712c016-8a6e-4625-aab4-a80c982f13bc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414\"" Oct 31 01:45:55.474561 containerd[1499]: time="2025-10-31T01:45:55.474522826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:45:55.559189 containerd[1499]: time="2025-10-31T01:45:55.558126442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67bdd86bbf-pj5dz,Uid:9911312f-9ce5-498d-99df-b48b4eafeab7,Namespace:calico-system,Attempt:0,}" Oct 31 01:45:55.586028 containerd[1499]: time="2025-10-31T01:45:55.585491153Z" level=info msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" Oct 31 01:45:55.587875 containerd[1499]: time="2025-10-31T01:45:55.587075865Z" level=info msg="StopPodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" Oct 31 01:45:55.636823 sshd[3837]: Connection reset by invalid user admin 45.140.17.124 port 32722 [preauth] Oct 31 01:45:55.641509 systemd[1]: sshd@11-10.230.44.66:22-45.140.17.124:32722.service: Deactivated successfully. Oct 31 01:45:55.742330 systemd[1]: Started sshd@12-10.230.44.66:22-45.140.17.124:32772.service - OpenSSH per-connection server daemon (45.140.17.124:32772). 
Oct 31 01:45:55.784612 containerd[1499]: time="2025-10-31T01:45:55.784373934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:55.799414 containerd[1499]: time="2025-10-31T01:45:55.799213014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:45:55.851451 containerd[1499]: time="2025-10-31T01:45:55.801220435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:45:55.859304 kubelet[2672]: E1031 01:45:55.852002 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:45:55.860669 kubelet[2672]: E1031 01:45:55.859910 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:45:55.893123 kubelet[2672]: E1031 01:45:55.878920 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:55.893123 kubelet[2672]: E1031 01:45:55.892937 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.801 [INFO][4118] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.812 [INFO][4118] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" iface="eth0" netns="/var/run/netns/cni-f704abee-b149-39f7-266e-a0603c61b162" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.820 [INFO][4118] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" iface="eth0" netns="/var/run/netns/cni-f704abee-b149-39f7-266e-a0603c61b162" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.830 [INFO][4118] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" iface="eth0" netns="/var/run/netns/cni-f704abee-b149-39f7-266e-a0603c61b162" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.830 [INFO][4118] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.830 [INFO][4118] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.914 [INFO][4168] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.914 [INFO][4168] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.914 [INFO][4168] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.936 [WARNING][4168] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.936 [INFO][4168] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.941 [INFO][4168] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:55.982649 containerd[1499]: 2025-10-31 01:45:55.960 [INFO][4118] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:45:55.991749 containerd[1499]: time="2025-10-31T01:45:55.988321178Z" level=info msg="TearDown network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" successfully" Oct 31 01:45:55.991749 containerd[1499]: time="2025-10-31T01:45:55.988370877Z" level=info msg="StopPodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" returns successfully" Oct 31 01:45:55.989385 systemd[1]: run-netns-cni\x2df704abee\x2db149\x2d39f7\x2d266e\x2da0603c61b162.mount: Deactivated successfully. 
Oct 31 01:45:55.998671 kubelet[2672]: E1031 01:45:55.997571 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:45:56.002690 containerd[1499]: time="2025-10-31T01:45:56.002636423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c75ff6b-nf76t,Uid:b0d8846e-dd18-434c-b179-e3c2878ecf3f,Namespace:calico-system,Attempt:1,}" Oct 31 01:45:56.162891 kubelet[2672]: I1031 01:45:56.161246 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 01:45:56.253364 systemd-networkd[1438]: calidb33cc7e9fd: Link UP Oct 31 01:45:56.259758 systemd-networkd[1438]: calidb33cc7e9fd: Gained carrier Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:55.710 [INFO][4083] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:55.816 [INFO][4083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0 whisker-67bdd86bbf- calico-system 9911312f-9ce5-498d-99df-b48b4eafeab7 914 0 2025-10-31 01:45:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:67bdd86bbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com whisker-67bdd86bbf-pj5dz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calidb33cc7e9fd [] [] }} ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:55.818 [INFO][4083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.076 [INFO][4173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" HandleID="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.076 [INFO][4173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" HandleID="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103680), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n5tpq.gb1.brightbox.com", 
"pod":"whisker-67bdd86bbf-pj5dz", "timestamp":"2025-10-31 01:45:56.076213217 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.076 [INFO][4173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.076 [INFO][4173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.076 [INFO][4173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.122 [INFO][4173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.147 [INFO][4173] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.165 [INFO][4173] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.170 [INFO][4173] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.176 [INFO][4173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.176 [INFO][4173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.179 [INFO][4173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.190 [INFO][4173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.219 [INFO][4173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.194/26] block=192.168.82.192/26 handle="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.220 [INFO][4173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.194/26] handle="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.220 [INFO][4173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:45:56.294532 containerd[1499]: 2025-10-31 01:45:56.220 [INFO][4173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.194/26] IPv6=[] ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" HandleID="k8s-pod-network.151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.225 [INFO][4083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0", GenerateName:"whisker-67bdd86bbf-", Namespace:"calico-system", SelfLink:"", UID:"9911312f-9ce5-498d-99df-b48b4eafeab7", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67bdd86bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"whisker-67bdd86bbf-pj5dz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb33cc7e9fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.227 [INFO][4083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.194/32] ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.228 [INFO][4083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb33cc7e9fd ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.263 [INFO][4083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.265 [INFO][4083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" 
Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0", GenerateName:"whisker-67bdd86bbf-", Namespace:"calico-system", SelfLink:"", UID:"9911312f-9ce5-498d-99df-b48b4eafeab7", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"67bdd86bbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb", Pod:"whisker-67bdd86bbf-pj5dz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calidb33cc7e9fd", MAC:"9e:dd:80:3f:92:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.301287 containerd[1499]: 2025-10-31 01:45:56.289 [INFO][4083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb" Namespace="calico-system" Pod="whisker-67bdd86bbf-pj5dz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--67bdd86bbf--pj5dz-eth0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.911 [INFO][4117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.917 [INFO][4117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" iface="eth0" netns="/var/run/netns/cni-058da023-1624-d190-2392-3c67428c40f0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.918 [INFO][4117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" iface="eth0" netns="/var/run/netns/cni-058da023-1624-d190-2392-3c67428c40f0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.919 [INFO][4117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" iface="eth0" netns="/var/run/netns/cni-058da023-1624-d190-2392-3c67428c40f0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.921 [INFO][4117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:55.924 [INFO][4117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.268 [INFO][4186] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.268 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.268 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.290 [WARNING][4186] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.290 [INFO][4186] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.304 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:56.341081 containerd[1499]: 2025-10-31 01:45:56.325 [INFO][4117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:45:56.342717 containerd[1499]: time="2025-10-31T01:45:56.342670397Z" level=info msg="TearDown network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" successfully" Oct 31 01:45:56.342717 containerd[1499]: time="2025-10-31T01:45:56.342715812Z" level=info msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" returns successfully" Oct 31 01:45:56.348057 containerd[1499]: time="2025-10-31T01:45:56.346915357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgfh4,Uid:7238a7c0-ff8f-443f-ab69-f2ee0be198c2,Namespace:calico-system,Attempt:1,}" Oct 31 01:45:56.411624 containerd[1499]: time="2025-10-31T01:45:56.411387267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:56.413262 containerd[1499]: time="2025-10-31T01:45:56.411503446Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:56.413262 containerd[1499]: time="2025-10-31T01:45:56.411538313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:56.413262 containerd[1499]: time="2025-10-31T01:45:56.411734555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:56.415899 systemd-networkd[1438]: caliad248fdecf6: Gained IPv6LL Oct 31 01:45:56.490076 systemd[1]: Started cri-containerd-151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb.scope - libcontainer container 151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb. Oct 31 01:45:56.584821 containerd[1499]: time="2025-10-31T01:45:56.584669184Z" level=info msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" Oct 31 01:45:56.596322 kubelet[2672]: I1031 01:45:56.596273 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22cf324a-372c-4445-b54c-2bbf176ec24f" path="/var/lib/kubelet/pods/22cf324a-372c-4445-b54c-2bbf176ec24f/volumes" Oct 31 01:45:56.633999 systemd-networkd[1438]: cali2558d318b44: Link UP Oct 31 01:45:56.638088 systemd-networkd[1438]: cali2558d318b44: Gained carrier Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.296 [INFO][4209] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.354 [INFO][4209] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0 calico-kube-controllers-84c75ff6b- calico-system b0d8846e-dd18-434c-b179-e3c2878ecf3f 921 0 2025-10-31 01:45:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84c75ff6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com calico-kube-controllers-84c75ff6b-nf76t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2558d318b44 [] [] }} ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.355 [INFO][4209] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.521 [INFO][4268] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" HandleID="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.524 [INFO][4268] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" HandleID="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00060c270), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"calico-kube-controllers-84c75ff6b-nf76t", "timestamp":"2025-10-31 01:45:56.521522785 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.524 [INFO][4268] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.524 [INFO][4268] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.524 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.543 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.550 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.559 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.562 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.566 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.566 [INFO][4268] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.569 [INFO][4268] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779 Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.575 [INFO][4268] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.604 [INFO][4268] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.195/26] block=192.168.82.192/26 handle="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.604 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.195/26] handle="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" host="srv-n5tpq.gb1.brightbox.com" Oct 31 
01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.604 [INFO][4268] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:56.685335 containerd[1499]: 2025-10-31 01:45:56.605 [INFO][4268] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.195/26] IPv6=[] ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" HandleID="k8s-pod-network.4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.614 [INFO][4209] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0", GenerateName:"calico-kube-controllers-84c75ff6b-", Namespace:"calico-system", SelfLink:"", UID:"b0d8846e-dd18-434c-b179-e3c2878ecf3f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c75ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-84c75ff6b-nf76t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2558d318b44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.614 [INFO][4209] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.195/32] ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.614 [INFO][4209] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2558d318b44 ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.641 [INFO][4209] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" 
Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.647 [INFO][4209] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0", GenerateName:"calico-kube-controllers-84c75ff6b-", Namespace:"calico-system", SelfLink:"", UID:"b0d8846e-dd18-434c-b179-e3c2878ecf3f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c75ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779", Pod:"calico-kube-controllers-84c75ff6b-nf76t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2558d318b44", MAC:"2e:f1:1c:c5:e2:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.690131 containerd[1499]: 2025-10-31 01:45:56.674 [INFO][4209] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779" Namespace="calico-system" Pod="calico-kube-controllers-84c75ff6b-nf76t" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:45:56.739053 containerd[1499]: time="2025-10-31T01:45:56.738987234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67bdd86bbf-pj5dz,Uid:9911312f-9ce5-498d-99df-b48b4eafeab7,Namespace:calico-system,Attempt:0,} returns sandbox id \"151a277f92c5ba73215d58cf2ccdc3851e10bdc84b0dde2223b9853bcff963bb\"" Oct 31 01:45:56.749214 containerd[1499]: time="2025-10-31T01:45:56.749167441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:45:56.761109 systemd[1]: run-netns-cni\x2d058da023\x2d1624\x2dd190\x2d2392\x2d3c67428c40f0.mount: Deactivated successfully. Oct 31 01:45:56.834563 containerd[1499]: time="2025-10-31T01:45:56.833699088Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:56.834563 containerd[1499]: time="2025-10-31T01:45:56.833883120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:56.834563 containerd[1499]: time="2025-10-31T01:45:56.833935121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:56.835509 containerd[1499]: time="2025-10-31T01:45:56.834969881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:56.857006 systemd-networkd[1438]: cali433de113dde: Link UP Oct 31 01:45:56.861996 systemd-networkd[1438]: cali433de113dde: Gained carrier Oct 31 01:45:56.910591 systemd[1]: Started cri-containerd-4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779.scope - libcontainer container 4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779. Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.505 [INFO][4260] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.543 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0 goldmane-7c778bb748- calico-system 7238a7c0-ff8f-443f-ab69-f2ee0be198c2 924 0 2025-10-31 01:45:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com goldmane-7c778bb748-tgfh4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali433de113dde [] [] }} ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.544 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.631 [INFO][4312] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" HandleID="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.636 [INFO][4312] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" HandleID="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384350), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"goldmane-7c778bb748-tgfh4", "timestamp":"2025-10-31 01:45:56.631530513 +0000 UTC"}, 
Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.639 [INFO][4312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.639 [INFO][4312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.640 [INFO][4312] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.689 [INFO][4312] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.722 [INFO][4312] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.747 [INFO][4312] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.765 [INFO][4312] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.780 [INFO][4312] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.780 [INFO][4312] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.791 [INFO][4312] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15 Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.818 [INFO][4312] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.836 [INFO][4312] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.196/26] block=192.168.82.192/26 handle="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.836 [INFO][4312] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.196/26] handle="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.836 [INFO][4312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:45:56.931077 containerd[1499]: 2025-10-31 01:45:56.836 [INFO][4312] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.196/26] IPv6=[] ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" HandleID="k8s-pod-network.ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.846 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7238a7c0-ff8f-443f-ab69-f2ee0be198c2", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7c778bb748-tgfh4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali433de113dde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.846 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.196/32] ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.846 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali433de113dde ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.856 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.858 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" 
Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7238a7c0-ff8f-443f-ab69-f2ee0be198c2", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15", Pod:"goldmane-7c778bb748-tgfh4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali433de113dde", MAC:"56:35:fa:6a:a8:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:56.934313 containerd[1499]: 2025-10-31 01:45:56.895 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15" Namespace="calico-system" Pod="goldmane-7c778bb748-tgfh4" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:45:56.964210 kubelet[2672]: E1031 01:45:56.963059 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:45:57.010243 containerd[1499]: time="2025-10-31T01:45:57.009686409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:57.010243 containerd[1499]: time="2025-10-31T01:45:57.009800232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:57.010243 containerd[1499]: time="2025-10-31T01:45:57.009824229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:57.010243 containerd[1499]: time="2025-10-31T01:45:57.009994709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:57.061935 systemd[1]: Started cri-containerd-ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15.scope - libcontainer container ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15. Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.860 [INFO][4327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.860 [INFO][4327] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" iface="eth0" netns="/var/run/netns/cni-29a2b094-f755-740a-e4d8-87d3ccd7c0a0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.860 [INFO][4327] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" iface="eth0" netns="/var/run/netns/cni-29a2b094-f755-740a-e4d8-87d3ccd7c0a0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.865 [INFO][4327] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" iface="eth0" netns="/var/run/netns/cni-29a2b094-f755-740a-e4d8-87d3ccd7c0a0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.866 [INFO][4327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:56.866 [INFO][4327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.017 [INFO][4374] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.017 [INFO][4374] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.017 [INFO][4374] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.046 [WARNING][4374] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.046 [INFO][4374] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.059 [INFO][4374] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:45:57.076412 containerd[1499]: 2025-10-31 01:45:57.063 [INFO][4327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:45:57.078821 containerd[1499]: time="2025-10-31T01:45:57.077253627Z" level=info msg="TearDown network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" successfully" Oct 31 01:45:57.078821 containerd[1499]: time="2025-10-31T01:45:57.077301305Z" level=info msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" returns successfully" Oct 31 01:45:57.083211 containerd[1499]: time="2025-10-31T01:45:57.082945512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-r89xp,Uid:f4abe04b-b171-45a2-9c26-8c077d6bf990,Namespace:calico-apiserver,Attempt:1,}" Oct 31 01:45:57.126980 containerd[1499]: time="2025-10-31T01:45:57.126681596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:57.129998 containerd[1499]: time="2025-10-31T01:45:57.129762083Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:45:57.130122 kubelet[2672]: E1031 01:45:57.130061 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:45:57.130193 kubelet[2672]: E1031 01:45:57.130124 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:45:57.130660 kubelet[2672]: E1031 01:45:57.130239 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:57.130820 containerd[1499]: time="2025-10-31T01:45:57.129860208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:45:57.134044 containerd[1499]: time="2025-10-31T01:45:57.133923574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:45:57.319851 containerd[1499]: time="2025-10-31T01:45:57.319800671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84c75ff6b-nf76t,Uid:b0d8846e-dd18-434c-b179-e3c2878ecf3f,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779\"" Oct 31 01:45:57.354482 systemd-networkd[1438]: cali137f0055ae0: Link UP Oct 31 01:45:57.357233 systemd-networkd[1438]: 
cali137f0055ae0: Gained carrier Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.153 [INFO][4430] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.176 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0 calico-apiserver-6b58b8b6b- calico-apiserver f4abe04b-b171-45a2-9c26-8c077d6bf990 947 0 2025-10-31 01:45:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b58b8b6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com calico-apiserver-6b58b8b6b-r89xp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali137f0055ae0 [] [] }} ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.176 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.231 [INFO][4445] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" HandleID="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.231 [INFO][4445] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" HandleID="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"calico-apiserver-6b58b8b6b-r89xp", "timestamp":"2025-10-31 01:45:57.231181278 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.231 [INFO][4445] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.231 [INFO][4445] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.231 [INFO][4445] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.245 [INFO][4445] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.261 [INFO][4445] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.296 [INFO][4445] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.299 [INFO][4445] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.309 [INFO][4445] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.310 [INFO][4445] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.313 [INFO][4445] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68 Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.330 [INFO][4445] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.341 [INFO][4445] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.197/26] block=192.168.82.192/26 handle="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.342 [INFO][4445] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.197/26] handle="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.342 [INFO][4445] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:45:57.378967 containerd[1499]: 2025-10-31 01:45:57.342 [INFO][4445] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.197/26] IPv6=[] ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" HandleID="k8s-pod-network.9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.347 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4abe04b-b171-45a2-9c26-8c077d6bf990", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b58b8b6b-r89xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali137f0055ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.348 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.197/32] ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.348 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali137f0055ae0 ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.353 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.354 
[INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4abe04b-b171-45a2-9c26-8c077d6bf990", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68", Pod:"calico-apiserver-6b58b8b6b-r89xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali137f0055ae0", MAC:"32:c4:52:27:9b:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:57.381442 containerd[1499]: 2025-10-31 01:45:57.368 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68" Namespace="calico-apiserver" Pod="calico-apiserver-6b58b8b6b-r89xp" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:45:57.420311 containerd[1499]: time="2025-10-31T01:45:57.420259326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-tgfh4,Uid:7238a7c0-ff8f-443f-ab69-f2ee0be198c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15\"" Oct 31 01:45:57.459708 containerd[1499]: time="2025-10-31T01:45:57.459197867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:57.459708 containerd[1499]: time="2025-10-31T01:45:57.459299198Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:57.459708 containerd[1499]: time="2025-10-31T01:45:57.459316803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:57.460684 containerd[1499]: time="2025-10-31T01:45:57.460429923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:57.483924 containerd[1499]: time="2025-10-31T01:45:57.483865395Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:57.485904 containerd[1499]: time="2025-10-31T01:45:57.485702251Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:45:57.486370 containerd[1499]: time="2025-10-31T01:45:57.486313850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:45:57.488625 kubelet[2672]: E1031 01:45:57.488541 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:45:57.488724 kubelet[2672]: E1031 01:45:57.488638 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:45:57.489086 kubelet[2672]: E1031 01:45:57.488853 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:57.489086 kubelet[2672]: E1031 01:45:57.488923 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:45:57.490844 containerd[1499]: time="2025-10-31T01:45:57.490525638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:45:57.514843 systemd[1]: Started cri-containerd-9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68.scope - libcontainer container 9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68. 
Oct 31 01:45:57.593128 sshd[4153]: Invalid user admin from 45.140.17.124 port 32772 Oct 31 01:45:57.635900 containerd[1499]: time="2025-10-31T01:45:57.635552564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b58b8b6b-r89xp,Uid:f4abe04b-b171-45a2-9c26-8c077d6bf990,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68\"" Oct 31 01:45:57.694833 systemd-networkd[1438]: calidb33cc7e9fd: Gained IPv6LL Oct 31 01:45:57.756423 systemd[1]: run-netns-cni\x2d29a2b094\x2df755\x2d740a\x2de4d8\x2d87d3ccd7c0a0.mount: Deactivated successfully. Oct 31 01:45:57.810145 containerd[1499]: time="2025-10-31T01:45:57.810066066Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:57.814493 containerd[1499]: time="2025-10-31T01:45:57.811052561Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:45:57.814728 containerd[1499]: time="2025-10-31T01:45:57.811257441Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:45:57.815347 kubelet[2672]: E1031 01:45:57.814767 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:45:57.815347 kubelet[2672]: E1031 01:45:57.814826 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:45:57.815347 kubelet[2672]: E1031 01:45:57.815047 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:57.815347 kubelet[2672]: E1031 01:45:57.815103 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:45:57.818078 containerd[1499]: time="2025-10-31T01:45:57.815638905Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:45:57.886903 systemd-networkd[1438]: cali433de113dde: Gained IPv6LL Oct 31 01:45:57.966639 kernel: bpftool[4549]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Oct 31 01:45:57.985025 kubelet[2672]: E1031 01:45:57.984510 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:45:57.986651 kubelet[2672]: E1031 01:45:57.986571 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:45:58.036424 sshd[4550]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:58.042691 sshd[4153]: Postponed keyboard-interactive for invalid user admin from 45.140.17.124 port 32772 ssh2 [preauth] Oct 31 01:45:58.140677 containerd[1499]: time="2025-10-31T01:45:58.140147482Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:58.142552 containerd[1499]: time="2025-10-31T01:45:58.142392659Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:45:58.142552 containerd[1499]: time="2025-10-31T01:45:58.142477483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:45:58.144315 kubelet[2672]: E1031 01:45:58.143155 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:45:58.144315 kubelet[2672]: E1031 01:45:58.143249 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:45:58.144315 kubelet[2672]: E1031 01:45:58.143544 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:58.143724 systemd-networkd[1438]: cali2558d318b44: Gained IPv6LL Oct 31 01:45:58.146066 containerd[1499]: time="2025-10-31T01:45:58.145144855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:45:58.147164 kubelet[2672]: E1031 01:45:58.145543 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:45:58.369323 systemd-networkd[1438]: vxlan.calico: Link UP Oct 31 01:45:58.369335 systemd-networkd[1438]: vxlan.calico: Gained carrier Oct 31 01:45:58.470858 containerd[1499]: time="2025-10-31T01:45:58.469988395Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:58.472961 containerd[1499]: time="2025-10-31T01:45:58.471929492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:45:58.472961 containerd[1499]: time="2025-10-31T01:45:58.472042317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:45:58.473074 kubelet[2672]: E1031 01:45:58.472232 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:45:58.473074 kubelet[2672]: E1031 01:45:58.472291 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:45:58.473074 kubelet[2672]: E1031 01:45:58.472425 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:58.473074 kubelet[2672]: E1031 01:45:58.472476 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:45:58.541553 sshd[4550]: pam_unix(sshd:auth): check pass; user unknown Oct 31 01:45:58.541622 sshd[4550]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.140.17.124 Oct 31 01:45:58.542894 sshd[4550]: pam_faillock(sshd:auth): User unknown Oct 31 01:45:58.581872 containerd[1499]: time="2025-10-31T01:45:58.581820170Z" level=info msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.670 [INFO][4605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.671 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" iface="eth0" netns="/var/run/netns/cni-e940c01e-e04d-1388-32f1-0f050d224d71" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.672 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" iface="eth0" netns="/var/run/netns/cni-e940c01e-e04d-1388-32f1-0f050d224d71" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.673 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" iface="eth0" netns="/var/run/netns/cni-e940c01e-e04d-1388-32f1-0f050d224d71" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.673 [INFO][4605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.673 [INFO][4605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.716 [INFO][4612] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.716 [INFO][4612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.716 [INFO][4612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.728 [WARNING][4612] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.728 [INFO][4612] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.731 [INFO][4612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:58.735377 containerd[1499]: 2025-10-31 01:45:58.733 [INFO][4605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:45:58.739510 containerd[1499]: time="2025-10-31T01:45:58.736136625Z" level=info msg="TearDown network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" successfully" Oct 31 01:45:58.739510 containerd[1499]: time="2025-10-31T01:45:58.736172706Z" level=info msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" returns successfully" Oct 31 01:45:58.740925 systemd[1]: run-netns-cni\x2de940c01e\x2de04d\x2d1388\x2d32f1\x2d0f050d224d71.mount: Deactivated successfully. Oct 31 01:45:58.744614 containerd[1499]: time="2025-10-31T01:45:58.744464468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvbwj,Uid:b91990ed-b519-4003-921b-695c5958edac,Namespace:calico-system,Attempt:1,}" Oct 31 01:45:58.978397 systemd-networkd[1438]: calif78d27c5179: Link UP Oct 31 01:45:58.981121 systemd-networkd[1438]: calif78d27c5179: Gained carrier Oct 31 01:45:58.994887 kubelet[2672]: E1031 01:45:58.992847 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:45:58.994887 kubelet[2672]: E1031 01:45:58.994746 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:45:59.003394 kubelet[2672]: E1031 01:45:59.003295 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.837 [INFO][4621] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0 csi-node-driver- calico-system b91990ed-b519-4003-921b-695c5958edac 985 0 2025-10-31 01:45:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com csi-node-driver-lvbwj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif78d27c5179 [] [] }} ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.837 [INFO][4621] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.899 [INFO][4644] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" HandleID="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.900 [INFO][4644] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" HandleID="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003919b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"csi-node-driver-lvbwj", "timestamp":"2025-10-31 01:45:58.89975871 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.900 [INFO][4644] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.900 [INFO][4644] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.900 [INFO][4644] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.912 [INFO][4644] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.919 [INFO][4644] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.928 [INFO][4644] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.931 [INFO][4644] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.936 [INFO][4644] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.936 [INFO][4644] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.938 [INFO][4644] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0 Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.950 [INFO][4644] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.967 [INFO][4644] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.198/26] block=192.168.82.192/26 handle="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.967 [INFO][4644] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.198/26] handle="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.968 [INFO][4644] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 01:45:59.023084 containerd[1499]: 2025-10-31 01:45:58.968 [INFO][4644] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.198/26] IPv6=[] ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" HandleID="k8s-pod-network.cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:58.971 [INFO][4621] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b91990ed-b519-4003-921b-695c5958edac", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-lvbwj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif78d27c5179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:58.971 [INFO][4621] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.198/32] ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:58.971 [INFO][4621] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif78d27c5179 ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:58.980 [INFO][4621] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:58.981 [INFO][4621] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b91990ed-b519-4003-921b-695c5958edac", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0", Pod:"csi-node-driver-lvbwj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif78d27c5179", MAC:"a2:53:b3:c5:3e:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:45:59.026207 containerd[1499]: 2025-10-31 01:45:59.017 [INFO][4621] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0" Namespace="calico-system" Pod="csi-node-driver-lvbwj" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:45:59.090824 containerd[1499]: time="2025-10-31T01:45:59.090671312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:45:59.093736 containerd[1499]: time="2025-10-31T01:45:59.092836745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:45:59.094708 containerd[1499]: time="2025-10-31T01:45:59.094652200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:59.095721 containerd[1499]: time="2025-10-31T01:45:59.095657082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:45:59.148810 systemd[1]: Started cri-containerd-cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0.scope - libcontainer container cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0. 
Oct 31 01:45:59.238179 containerd[1499]: time="2025-10-31T01:45:59.237806674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lvbwj,Uid:b91990ed-b519-4003-921b-695c5958edac,Namespace:calico-system,Attempt:1,} returns sandbox id \"cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0\"" Oct 31 01:45:59.240638 containerd[1499]: time="2025-10-31T01:45:59.240286306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:45:59.358812 systemd-networkd[1438]: cali137f0055ae0: Gained IPv6LL Oct 31 01:45:59.558044 containerd[1499]: time="2025-10-31T01:45:59.557790815Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:59.560071 containerd[1499]: time="2025-10-31T01:45:59.559865216Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:45:59.560148 containerd[1499]: time="2025-10-31T01:45:59.559986523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:45:59.560750 kubelet[2672]: E1031 01:45:59.560340 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:45:59.560750 kubelet[2672]: E1031 01:45:59.560405 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:45:59.560750 kubelet[2672]: E1031 01:45:59.560529 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:59.563536 containerd[1499]: time="2025-10-31T01:45:59.563137471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:45:59.578993 containerd[1499]: time="2025-10-31T01:45:59.578895832Z" level=info msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" Oct 31 01:45:59.579606 containerd[1499]: time="2025-10-31T01:45:59.579289291Z" level=info msg="StopPodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.700 [INFO][4751] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.701 [INFO][4751] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" iface="eth0" netns="/var/run/netns/cni-790b76b2-fbb9-bf0e-2da6-aa0c35ca2c04" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.702 [INFO][4751] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" iface="eth0" netns="/var/run/netns/cni-790b76b2-fbb9-bf0e-2da6-aa0c35ca2c04" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.702 [INFO][4751] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" iface="eth0" netns="/var/run/netns/cni-790b76b2-fbb9-bf0e-2da6-aa0c35ca2c04" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.702 [INFO][4751] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.702 [INFO][4751] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.760 [INFO][4767] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.761 [INFO][4767] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.762 [INFO][4767] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.773 [WARNING][4767] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.774 [INFO][4767] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.776 [INFO][4767] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:59.786677 containerd[1499]: 2025-10-31 01:45:59.782 [INFO][4751] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:45:59.787748 containerd[1499]: time="2025-10-31T01:45:59.787384700Z" level=info msg="TearDown network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" successfully" Oct 31 01:45:59.787748 containerd[1499]: time="2025-10-31T01:45:59.787428130Z" level=info msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" returns successfully" Oct 31 01:45:59.793924 systemd[1]: run-netns-cni\x2d790b76b2\x2dfbb9\x2dbf0e\x2d2da6\x2daa0c35ca2c04.mount: Deactivated successfully. 
Oct 31 01:45:59.797423 containerd[1499]: time="2025-10-31T01:45:59.796374220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f9xmz,Uid:e995549a-d4b6-43b7-9c52-4c9c14a4dcdf,Namespace:kube-system,Attempt:1,}" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.692 [INFO][4755] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.692 [INFO][4755] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" iface="eth0" netns="/var/run/netns/cni-23382455-9b88-e0f4-9dbb-f056756071cb" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.693 [INFO][4755] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" iface="eth0" netns="/var/run/netns/cni-23382455-9b88-e0f4-9dbb-f056756071cb" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.696 [INFO][4755] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" iface="eth0" netns="/var/run/netns/cni-23382455-9b88-e0f4-9dbb-f056756071cb" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.697 [INFO][4755] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.699 [INFO][4755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.760 [INFO][4765] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.761 [INFO][4765] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.776 [INFO][4765] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.804 [WARNING][4765] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.804 [INFO][4765] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.807 [INFO][4765] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:45:59.820442 containerd[1499]: 2025-10-31 01:45:59.810 [INFO][4755] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:45:59.823901 containerd[1499]: time="2025-10-31T01:45:59.823351871Z" level=info msg="TearDown network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" successfully" Oct 31 01:45:59.823901 containerd[1499]: time="2025-10-31T01:45:59.823392093Z" level=info msg="StopPodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" returns successfully" Oct 31 01:45:59.830515 systemd[1]: run-netns-cni\x2d23382455\x2d9b88\x2de0f4\x2d9dbb\x2df056756071cb.mount: Deactivated successfully. Oct 31 01:45:59.837448 containerd[1499]: time="2025-10-31T01:45:59.837230099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlj9c,Uid:b865c705-77c3-44e2-b527-f7fc482e79fd,Namespace:kube-system,Attempt:1,}" Oct 31 01:45:59.894669 containerd[1499]: time="2025-10-31T01:45:59.894610187Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:45:59.896359 containerd[1499]: time="2025-10-31T01:45:59.896167260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:45:59.896359 containerd[1499]: time="2025-10-31T01:45:59.896311515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:45:59.896610 kubelet[2672]: E1031 01:45:59.896481 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:45:59.896610 kubelet[2672]: E1031 01:45:59.896558 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:45:59.896797 kubelet[2672]: E1031 01:45:59.896679 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:45:59.896797 kubelet[2672]: E1031 01:45:59.896764 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:00.001609 kubelet[2672]: E1031 01:46:00.001417 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:00.137110 systemd-networkd[1438]: cali556aa9a087f: Link UP Oct 31 01:46:00.137446 systemd-networkd[1438]: cali556aa9a087f: Gained carrier Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:45:59.957 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0 coredns-66bc5c9577- kube-system b865c705-77c3-44e2-b527-f7fc482e79fd 1016 0 2025-10-31 01:45:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com coredns-66bc5c9577-zlj9c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali556aa9a087f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:45:59.957 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.022 [INFO][4808] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" HandleID="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.022 
[INFO][4808] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" HandleID="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5750), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"coredns-66bc5c9577-zlj9c", "timestamp":"2025-10-31 01:46:00.022168025 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.022 [INFO][4808] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.022 [INFO][4808] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.022 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.062 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.076 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.091 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.094 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.097 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.098 [INFO][4808] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.100 [INFO][4808] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83 Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.110 [INFO][4808] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.120 [INFO][4808] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.199/26] block=192.168.82.192/26 handle="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.120 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.199/26] handle="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" 
host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.120 [INFO][4808] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:00.167777 containerd[1499]: 2025-10-31 01:46:00.120 [INFO][4808] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.199/26] IPv6=[] ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" HandleID="k8s-pod-network.ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.172415 containerd[1499]: 2025-10-31 01:46:00.125 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b865c705-77c3-44e2-b527-f7fc482e79fd", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-zlj9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali556aa9a087f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:00.172415 containerd[1499]: 2025-10-31 01:46:00.126 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.199/32] ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.172415 containerd[1499]: 2025-10-31 01:46:00.126 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali556aa9a087f 
ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.172415 containerd[1499]: 2025-10-31 01:46:00.138 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.172415 containerd[1499]: 2025-10-31 01:46:00.140 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b865c705-77c3-44e2-b527-f7fc482e79fd", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83", Pod:"coredns-66bc5c9577-zlj9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali556aa9a087f", MAC:"da:3b:ad:c7:ac:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:00.173567 containerd[1499]: 2025-10-31 01:46:00.162 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83" Namespace="kube-system" Pod="coredns-66bc5c9577-zlj9c" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:00.191046 systemd-networkd[1438]: 
vxlan.calico: Gained IPv6LL Oct 31 01:46:00.214192 containerd[1499]: time="2025-10-31T01:46:00.213791306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:46:00.214192 containerd[1499]: time="2025-10-31T01:46:00.213887626Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:46:00.214192 containerd[1499]: time="2025-10-31T01:46:00.213965454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:46:00.215082 containerd[1499]: time="2025-10-31T01:46:00.214690968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:46:00.250732 systemd-networkd[1438]: caliacb845be54f: Link UP Oct 31 01:46:00.251153 systemd-networkd[1438]: caliacb845be54f: Gained carrier Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:45:59.954 [INFO][4779] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0 coredns-66bc5c9577- kube-system e995549a-d4b6-43b7-9c52-4c9c14a4dcdf 1017 0 2025-10-31 01:45:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-n5tpq.gb1.brightbox.com coredns-66bc5c9577-f9xmz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliacb845be54f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:45:59.954 [INFO][4779] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.042 [INFO][4806] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" HandleID="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.042 [INFO][4806] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" HandleID="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a40), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-n5tpq.gb1.brightbox.com", "pod":"coredns-66bc5c9577-f9xmz", "timestamp":"2025-10-31 01:46:00.042244834 +0000 UTC"}, Hostname:"srv-n5tpq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.042 [INFO][4806] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.121 [INFO][4806] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.121 [INFO][4806] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-n5tpq.gb1.brightbox.com' Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.162 [INFO][4806] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.182 [INFO][4806] ipam/ipam.go 394: Looking up existing affinities for host host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.196 [INFO][4806] ipam/ipam.go 511: Trying affinity for 192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.202 [INFO][4806] ipam/ipam.go 158: Attempting to load block cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.206 [INFO][4806] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.82.192/26 host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.206 [INFO][4806] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.82.192/26 handle="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.211 [INFO][4806] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855 Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.220 [INFO][4806] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.82.192/26 handle="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.237 [INFO][4806] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.82.200/26] block=192.168.82.192/26 handle="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.237 [INFO][4806] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.82.200/26] handle="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" host="srv-n5tpq.gb1.brightbox.com" Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.237 [INFO][4806] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
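
For reference, the two addresses claimed above (192.168.82.199 for coredns-66bc5c9577-zlj9c and 192.168.82.200 for coredns-66bc5c9577-f9xmz) both fall inside the node's affinity block 192.168.82.192/26 that the IPAM plugin loaded; a quick Python check of the block arithmetic:

    import ipaddress

    block = ipaddress.ip_network("192.168.82.192/26")   # affinity block from the IPAM log
    for addr in ("192.168.82.199", "192.168.82.200"):   # addresses claimed for the two coredns pods
        ip = ipaddress.ip_address(addr)
        print(addr, ip in block, "offset", int(ip) - int(block.network_address))
    # 192.168.82.199 True offset 7
    # 192.168.82.200 True offset 8
    # a /26 block spans 64 addresses: 192.168.82.192 .. 192.168.82.255
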
Oct 31 01:46:00.295019 containerd[1499]: 2025-10-31 01:46:00.237 [INFO][4806] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.82.200/26] IPv6=[] ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" HandleID="k8s-pod-network.ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.297284 containerd[1499]: 2025-10-31 01:46:00.243 [INFO][4779] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-66bc5c9577-f9xmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacb845be54f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:00.297284 containerd[1499]: 2025-10-31 01:46:00.244 [INFO][4779] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.200/32] ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.297284 containerd[1499]: 2025-10-31 01:46:00.244 [INFO][4779] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliacb845be54f ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" 
WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.297284 containerd[1499]: 2025-10-31 01:46:00.248 [INFO][4779] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.297284 containerd[1499]: 2025-10-31 01:46:00.250 [INFO][4779] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855", Pod:"coredns-66bc5c9577-f9xmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacb845be54f", MAC:"4a:ce:f5:c7:a3:22", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:00.297657 containerd[1499]: 2025-10-31 01:46:00.286 [INFO][4779] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855" Namespace="kube-system" Pod="coredns-66bc5c9577-f9xmz" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:46:00.299566 systemd[1]: Started cri-containerd-ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83.scope - libcontainer container 
ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83. Oct 31 01:46:00.350241 containerd[1499]: time="2025-10-31T01:46:00.350111993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 31 01:46:00.350241 containerd[1499]: time="2025-10-31T01:46:00.350199323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 31 01:46:00.350569 containerd[1499]: time="2025-10-31T01:46:00.350222197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:46:00.350569 containerd[1499]: time="2025-10-31T01:46:00.350341840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 31 01:46:00.383816 systemd[1]: Started cri-containerd-ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855.scope - libcontainer container ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855. Oct 31 01:46:00.489432 containerd[1499]: time="2025-10-31T01:46:00.489382435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-zlj9c,Uid:b865c705-77c3-44e2-b527-f7fc482e79fd,Namespace:kube-system,Attempt:1,} returns sandbox id \"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83\"" Oct 31 01:46:00.503604 containerd[1499]: time="2025-10-31T01:46:00.502278146Z" level=info msg="CreateContainer within sandbox \"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:46:00.549010 containerd[1499]: time="2025-10-31T01:46:00.548953475Z" level=info msg="CreateContainer within sandbox \"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acfcc87aa5fa6cc1bf8f16efbb79b6b57092b04aff646b23f0672062e73f4e77\"" Oct 31 01:46:00.586474 containerd[1499]: time="2025-10-31T01:46:00.586273592Z" level=info msg="StartContainer for \"acfcc87aa5fa6cc1bf8f16efbb79b6b57092b04aff646b23f0672062e73f4e77\"" Oct 31 01:46:00.676655 containerd[1499]: time="2025-10-31T01:46:00.675378732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-f9xmz,Uid:e995549a-d4b6-43b7-9c52-4c9c14a4dcdf,Namespace:kube-system,Attempt:1,} returns sandbox id \"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855\"" Oct 31 01:46:00.704800 systemd[1]: Started cri-containerd-acfcc87aa5fa6cc1bf8f16efbb79b6b57092b04aff646b23f0672062e73f4e77.scope - libcontainer container acfcc87aa5fa6cc1bf8f16efbb79b6b57092b04aff646b23f0672062e73f4e77. 
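
The v3.WorkloadEndpointPort entries in the dumps above print their port numbers in hex; decoded, they match the coredns ports listed when the endpoint was populated (dns 53/UDP and TCP, metrics 9153, liveness 8080, readiness 8181):

    # Port values from the v3.WorkloadEndpointPort dump above, printed in decimal.
    ports = {"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1,
             "liveness-probe": 0x1f90, "readiness-probe": 0x1ff5}
    for name, value in ports.items():
        print(f"{name}: {value}")
    # dns: 53, dns-tcp: 53, metrics: 9153, liveness-probe: 8080, readiness-probe: 8181
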
Oct 31 01:46:00.719566 containerd[1499]: time="2025-10-31T01:46:00.719513654Z" level=info msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" Oct 31 01:46:00.761392 containerd[1499]: time="2025-10-31T01:46:00.760865328Z" level=info msg="CreateContainer within sandbox \"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 01:46:00.797348 containerd[1499]: time="2025-10-31T01:46:00.796668804Z" level=info msg="CreateContainer within sandbox \"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388\"" Oct 31 01:46:00.797689 containerd[1499]: time="2025-10-31T01:46:00.797656889Z" level=info msg="StartContainer for \"e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388\"" Oct 31 01:46:00.822097 containerd[1499]: time="2025-10-31T01:46:00.822046233Z" level=info msg="StartContainer for \"acfcc87aa5fa6cc1bf8f16efbb79b6b57092b04aff646b23f0672062e73f4e77\" returns successfully" Oct 31 01:46:00.879396 systemd[1]: run-containerd-runc-k8s.io-e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388-runc.ubEViz.mount: Deactivated successfully. Oct 31 01:46:00.890827 systemd[1]: Started cri-containerd-e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388.scope - libcontainer container e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388. Oct 31 01:46:00.960419 systemd-networkd[1438]: calif78d27c5179: Gained IPv6LL Oct 31 01:46:00.988736 containerd[1499]: time="2025-10-31T01:46:00.988639315Z" level=info msg="StartContainer for \"e92453f773e38c7bb723f3716a33c04b5527547971c21a3bbed0b03069e7a388\" returns successfully" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.906 [WARNING][4958] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7238a7c0-ff8f-443f-ab69-f2ee0be198c2", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15", Pod:"goldmane-7c778bb748-tgfh4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali433de113dde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.907 [INFO][4958] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.907 [INFO][4958] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" iface="eth0" netns="" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.907 [INFO][4958] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.907 [INFO][4958] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.964 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.964 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.965 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.992 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:00.992 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:01.000 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.012380 containerd[1499]: 2025-10-31 01:46:01.006 [INFO][4958] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.012380 containerd[1499]: time="2025-10-31T01:46:01.011564288Z" level=info msg="TearDown network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" successfully" Oct 31 01:46:01.012380 containerd[1499]: time="2025-10-31T01:46:01.011621237Z" level=info msg="StopPodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" returns successfully" Oct 31 01:46:01.017609 containerd[1499]: time="2025-10-31T01:46:01.014592614Z" level=info msg="RemovePodSandbox for \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" Oct 31 01:46:01.022038 containerd[1499]: time="2025-10-31T01:46:01.021991647Z" level=info msg="Forcibly stopping sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\"" Oct 31 01:46:01.023628 kubelet[2672]: E1031 01:46:01.023547 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:01.070617 sshd[4153]: PAM: Permission denied for illegal user admin from 45.140.17.124 Oct 31 01:46:01.073046 sshd[4153]: Failed keyboard-interactive/pam for invalid user admin from 45.140.17.124 port 32772 ssh2 Oct 31 01:46:01.102250 kubelet[2672]: I1031 01:46:01.100962 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zlj9c" podStartSLOduration=55.100940575 podStartE2EDuration="55.100940575s" podCreationTimestamp="2025-10-31 01:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:46:01.04308441 +0000 UTC m=+60.741852191" 
watchObservedRunningTime="2025-10-31 01:46:01.100940575 +0000 UTC m=+60.799708381" Oct 31 01:46:01.127171 kubelet[2672]: I1031 01:46:01.127097 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-f9xmz" podStartSLOduration=55.127076426 podStartE2EDuration="55.127076426s" podCreationTimestamp="2025-10-31 01:45:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 01:46:01.125953551 +0000 UTC m=+60.824721326" watchObservedRunningTime="2025-10-31 01:46:01.127076426 +0000 UTC m=+60.825844188" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.125 [WARNING][5025] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"7238a7c0-ff8f-443f-ab69-f2ee0be198c2", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ca61939241369bd5eb211daeca497be1173b69bfecc419a5ae9d7e25752d0d15", Pod:"goldmane-7c778bb748-tgfh4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali433de113dde", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.126 [INFO][5025] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.126 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" iface="eth0" netns="" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.128 [INFO][5025] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.128 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.222 [INFO][5035] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.222 [INFO][5035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.222 [INFO][5035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.266 [WARNING][5035] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.266 [INFO][5035] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" HandleID="k8s-pod-network.77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Workload="srv--n5tpq.gb1.brightbox.com-k8s-goldmane--7c778bb748--tgfh4-eth0" Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.269 [INFO][5035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.276704 containerd[1499]: 2025-10-31 01:46:01.273 [INFO][5025] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5" Oct 31 01:46:01.279545 containerd[1499]: time="2025-10-31T01:46:01.278552316Z" level=info msg="TearDown network for sandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" successfully" Oct 31 01:46:01.290616 containerd[1499]: time="2025-10-31T01:46:01.290342108Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:01.290616 containerd[1499]: time="2025-10-31T01:46:01.290440317Z" level=info msg="RemovePodSandbox \"77e8cb57e2185b3916c4882741dc4cee83ff2472ec71e7684f23ec29a45b62f5\" returns successfully" Oct 31 01:46:01.292989 containerd[1499]: time="2025-10-31T01:46:01.292094431Z" level=info msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" Oct 31 01:46:01.342860 systemd-networkd[1438]: cali556aa9a087f: Gained IPv6LL Oct 31 01:46:01.462621 sshd[4153]: Connection reset by invalid user admin 45.140.17.124 port 32772 [preauth] Oct 31 01:46:01.468360 systemd[1]: sshd@12-10.230.44.66:22-45.140.17.124:32772.service: Deactivated successfully. 
Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.401 [WARNING][5056] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4abe04b-b171-45a2-9c26-8c077d6bf990", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68", Pod:"calico-apiserver-6b58b8b6b-r89xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali137f0055ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.403 [INFO][5056] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.403 [INFO][5056] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" iface="eth0" netns="" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.403 [INFO][5056] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.403 [INFO][5056] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.461 [INFO][5064] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.461 [INFO][5064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.461 [INFO][5064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
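
The repeated "Asked to release address but it doesn't exist. Ignoring" warnings above reflect that the release path is deliberately idempotent: the handle is looked up under the host-wide IPAM lock and a missing entry is treated as already released. A schematic Python sketch of that pattern only, not Calico's actual implementation:

    import threading

    # Hypothetical in-memory allocation table keyed by handle ID; the real IPAM
    # state lives in the Calico datastore, this only illustrates the idempotent release.
    _lock = threading.Lock()   # stands in for the "host-wide IPAM lock" in the log
    _allocations = {}          # handle ID -> assigned address

    def release(handle_id: str) -> None:
        with _lock:                                  # "About to acquire host-wide IPAM lock."
            addr = _allocations.pop(handle_id, None)
            if addr is None:
                # "Asked to release address but it doesn't exist. Ignoring"
                return
            # otherwise the address would be returned to its block here
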
Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.482 [WARNING][5064] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.482 [INFO][5064] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.489 [INFO][5064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.494697 containerd[1499]: 2025-10-31 01:46:01.491 [INFO][5056] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.494697 containerd[1499]: time="2025-10-31T01:46:01.494335222Z" level=info msg="TearDown network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" successfully" Oct 31 01:46:01.494697 containerd[1499]: time="2025-10-31T01:46:01.494389235Z" level=info msg="StopPodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" returns successfully" Oct 31 01:46:01.497838 containerd[1499]: time="2025-10-31T01:46:01.496887936Z" level=info msg="RemovePodSandbox for \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" Oct 31 01:46:01.497838 containerd[1499]: time="2025-10-31T01:46:01.496971764Z" level=info msg="Forcibly stopping sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\"" Oct 31 01:46:01.547348 systemd[1]: Started sshd@13-10.230.44.66:22-45.140.17.124:50606.service - OpenSSH per-connection server daemon (45.140.17.124:50606). Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.573 [WARNING][5080] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4abe04b-b171-45a2-9c26-8c077d6bf990", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"9846c02b8fbd8845aeb178bff80d0447d201d54a0f6ea753f419bc161370fa68", Pod:"calico-apiserver-6b58b8b6b-r89xp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali137f0055ae0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.575 [INFO][5080] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.575 [INFO][5080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" iface="eth0" netns="" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.575 [INFO][5080] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.575 [INFO][5080] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.620 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.620 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.620 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.632 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.632 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" HandleID="k8s-pod-network.646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--r89xp-eth0" Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.635 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.643115 containerd[1499]: 2025-10-31 01:46:01.639 [INFO][5080] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c" Oct 31 01:46:01.647330 containerd[1499]: time="2025-10-31T01:46:01.642741558Z" level=info msg="TearDown network for sandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" successfully" Oct 31 01:46:01.654999 containerd[1499]: time="2025-10-31T01:46:01.654674310Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:01.654999 containerd[1499]: time="2025-10-31T01:46:01.654775092Z" level=info msg="RemovePodSandbox \"646de07a661cba4c0801072ab54295e47a5375553b3a9e0c3b082c19d5affb5c\" returns successfully" Oct 31 01:46:01.655530 containerd[1499]: time="2025-10-31T01:46:01.655458418Z" level=info msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.707 [WARNING][5105] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b91990ed-b519-4003-921b-695c5958edac", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0", Pod:"csi-node-driver-lvbwj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif78d27c5179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.707 [INFO][5105] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.707 [INFO][5105] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" iface="eth0" netns="" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.707 [INFO][5105] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.707 [INFO][5105] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.743 [INFO][5112] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.743 [INFO][5112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.744 [INFO][5112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.753 [WARNING][5112] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.753 [INFO][5112] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.755 [INFO][5112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.760634 containerd[1499]: 2025-10-31 01:46:01.758 [INFO][5105] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.760634 containerd[1499]: time="2025-10-31T01:46:01.760236616Z" level=info msg="TearDown network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" successfully" Oct 31 01:46:01.760634 containerd[1499]: time="2025-10-31T01:46:01.760269558Z" level=info msg="StopPodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" returns successfully" Oct 31 01:46:01.765671 containerd[1499]: time="2025-10-31T01:46:01.763496120Z" level=info msg="RemovePodSandbox for \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" Oct 31 01:46:01.765671 containerd[1499]: time="2025-10-31T01:46:01.763560092Z" level=info msg="Forcibly stopping sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\"" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.828 [WARNING][5126] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b91990ed-b519-4003-921b-695c5958edac", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"cc686b0d8b8a0a3bb83573c429c4407de4f4ba4c832cf3ed4986372898eaebe0", Pod:"csi-node-driver-lvbwj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif78d27c5179", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.828 [INFO][5126] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.828 [INFO][5126] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" iface="eth0" netns="" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.829 [INFO][5126] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.829 [INFO][5126] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.868 [INFO][5133] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.868 [INFO][5133] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.869 [INFO][5133] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.880 [WARNING][5133] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.880 [INFO][5133] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" HandleID="k8s-pod-network.ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Workload="srv--n5tpq.gb1.brightbox.com-k8s-csi--node--driver--lvbwj-eth0" Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.882 [INFO][5133] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:01.887226 containerd[1499]: 2025-10-31 01:46:01.884 [INFO][5126] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76" Oct 31 01:46:01.888570 containerd[1499]: time="2025-10-31T01:46:01.887288040Z" level=info msg="TearDown network for sandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" successfully" Oct 31 01:46:01.895801 containerd[1499]: time="2025-10-31T01:46:01.895753694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:01.895900 containerd[1499]: time="2025-10-31T01:46:01.895816874Z" level=info msg="RemovePodSandbox \"ca5af43437ccd43a7685cd844de39b13c27589c315ee9089deb2c9b676c74c76\" returns successfully" Oct 31 01:46:01.896444 containerd[1499]: time="2025-10-31T01:46:01.896411996Z" level=info msg="StopPodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" Oct 31 01:46:01.918925 systemd-networkd[1438]: caliacb845be54f: Gained IPv6LL Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.953 [WARNING][5147] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0", GenerateName:"calico-kube-controllers-84c75ff6b-", Namespace:"calico-system", SelfLink:"", UID:"b0d8846e-dd18-434c-b179-e3c2878ecf3f", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c75ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779", Pod:"calico-kube-controllers-84c75ff6b-nf76t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2558d318b44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.953 [INFO][5147] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.953 [INFO][5147] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" iface="eth0" netns="" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.953 [INFO][5147] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.953 [INFO][5147] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.993 [INFO][5154] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.993 [INFO][5154] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:01.993 [INFO][5154] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:02.002 [WARNING][5154] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:02.002 [INFO][5154] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:02.004 [INFO][5154] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.009630 containerd[1499]: 2025-10-31 01:46:02.007 [INFO][5147] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.009630 containerd[1499]: time="2025-10-31T01:46:02.009507344Z" level=info msg="TearDown network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" successfully" Oct 31 01:46:02.009630 containerd[1499]: time="2025-10-31T01:46:02.009541998Z" level=info msg="StopPodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" returns successfully" Oct 31 01:46:02.011204 containerd[1499]: time="2025-10-31T01:46:02.011166475Z" level=info msg="RemovePodSandbox for \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" Oct 31 01:46:02.011296 containerd[1499]: time="2025-10-31T01:46:02.011214000Z" level=info msg="Forcibly stopping sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\"" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.107 [WARNING][5168] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0", GenerateName:"calico-kube-controllers-84c75ff6b-", Namespace:"calico-system", SelfLink:"", UID:"b0d8846e-dd18-434c-b179-e3c2878ecf3f", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84c75ff6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"4e051e785290b4f30b901261c7869182a851489d752d87148aaaf50095455779", Pod:"calico-kube-controllers-84c75ff6b-nf76t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2558d318b44", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.109 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.110 [INFO][5168] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" iface="eth0" netns="" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.110 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.110 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.158 [INFO][5175] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.158 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.158 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.170 [WARNING][5175] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.170 [INFO][5175] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" HandleID="k8s-pod-network.f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--kube--controllers--84c75ff6b--nf76t-eth0" Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.173 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.178403 containerd[1499]: 2025-10-31 01:46:02.175 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35" Oct 31 01:46:02.180286 containerd[1499]: time="2025-10-31T01:46:02.179621793Z" level=info msg="TearDown network for sandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" successfully" Oct 31 01:46:02.184995 containerd[1499]: time="2025-10-31T01:46:02.184853956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:02.184995 containerd[1499]: time="2025-10-31T01:46:02.184943695Z" level=info msg="RemovePodSandbox \"f17a60b346fa4049d33834c710e3639f3dbe0616835b9f43660e4a5ae1de0a35\" returns successfully" Oct 31 01:46:02.186100 containerd[1499]: time="2025-10-31T01:46:02.185873116Z" level=info msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.244 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f712c016-8a6e-4625-aab4-a80c982f13bc", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414", Pod:"calico-apiserver-6b58b8b6b-6wxlj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad248fdecf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.245 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.245 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" iface="eth0" netns="" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.245 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.245 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.298 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.299 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.299 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.312 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.312 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.324 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.331712 containerd[1499]: 2025-10-31 01:46:02.328 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.332374 containerd[1499]: time="2025-10-31T01:46:02.331900674Z" level=info msg="TearDown network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" successfully" Oct 31 01:46:02.332374 containerd[1499]: time="2025-10-31T01:46:02.331937694Z" level=info msg="StopPodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" returns successfully" Oct 31 01:46:02.333123 containerd[1499]: time="2025-10-31T01:46:02.333092277Z" level=info msg="RemovePodSandbox for \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" Oct 31 01:46:02.333212 containerd[1499]: time="2025-10-31T01:46:02.333135648Z" level=info msg="Forcibly stopping sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\"" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.395 [WARNING][5211] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0", GenerateName:"calico-apiserver-6b58b8b6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"f712c016-8a6e-4625-aab4-a80c982f13bc", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b58b8b6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"6a8cc2a83772315f7837fa9bac5bd8403c02aa5254b345e29ca478fbb7b0c414", Pod:"calico-apiserver-6b58b8b6b-6wxlj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliad248fdecf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.395 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.395 [INFO][5211] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" iface="eth0" netns="" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.395 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.395 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.457 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.457 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.457 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.474 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.474 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" HandleID="k8s-pod-network.2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Workload="srv--n5tpq.gb1.brightbox.com-k8s-calico--apiserver--6b58b8b6b--6wxlj-eth0" Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.476 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.482300 containerd[1499]: 2025-10-31 01:46:02.480 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57" Oct 31 01:46:02.482300 containerd[1499]: time="2025-10-31T01:46:02.482264345Z" level=info msg="TearDown network for sandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" successfully" Oct 31 01:46:02.487264 containerd[1499]: time="2025-10-31T01:46:02.487037432Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:02.487264 containerd[1499]: time="2025-10-31T01:46:02.487127642Z" level=info msg="RemovePodSandbox \"2a7ffe909dbfb03c27d8a041460503671aa4440b7b3beb86e1a90dfbe7b87a57\" returns successfully" Oct 31 01:46:02.488100 containerd[1499]: time="2025-10-31T01:46:02.488059533Z" level=info msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.545 [WARNING][5235] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.546 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.546 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" iface="eth0" netns="" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.546 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.546 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.576 [INFO][5243] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.577 [INFO][5243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.577 [INFO][5243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.590 [WARNING][5243] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.591 [INFO][5243] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.593 [INFO][5243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.598390 containerd[1499]: 2025-10-31 01:46:02.595 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.598390 containerd[1499]: time="2025-10-31T01:46:02.598247456Z" level=info msg="TearDown network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" successfully" Oct 31 01:46:02.598390 containerd[1499]: time="2025-10-31T01:46:02.598280445Z" level=info msg="StopPodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" returns successfully" Oct 31 01:46:02.599867 containerd[1499]: time="2025-10-31T01:46:02.598715747Z" level=info msg="RemovePodSandbox for \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" Oct 31 01:46:02.599867 containerd[1499]: time="2025-10-31T01:46:02.598753644Z" level=info msg="Forcibly stopping sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\"" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.659 [WARNING][5257] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" WorkloadEndpoint="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.659 [INFO][5257] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.659 [INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" iface="eth0" netns="" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.659 [INFO][5257] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.659 [INFO][5257] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.693 [INFO][5264] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.694 [INFO][5264] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.694 [INFO][5264] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.702 [WARNING][5264] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.702 [INFO][5264] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" HandleID="k8s-pod-network.03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Workload="srv--n5tpq.gb1.brightbox.com-k8s-whisker--558d6757c7--7cf2w-eth0" Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.704 [INFO][5264] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.708630 containerd[1499]: 2025-10-31 01:46:02.706 [INFO][5257] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f" Oct 31 01:46:02.708630 containerd[1499]: time="2025-10-31T01:46:02.708503091Z" level=info msg="TearDown network for sandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" successfully" Oct 31 01:46:02.712641 containerd[1499]: time="2025-10-31T01:46:02.712537884Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Oct 31 01:46:02.712719 containerd[1499]: time="2025-10-31T01:46:02.712692257Z" level=info msg="RemovePodSandbox \"03ff41c3b7a49beb2fa680c32ecbe0e2e04ff95ce673859665dbc266c243c75f\" returns successfully" Oct 31 01:46:02.713325 containerd[1499]: time="2025-10-31T01:46:02.713287710Z" level=info msg="StopPodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.769 [WARNING][5278] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b865c705-77c3-44e2-b527-f7fc482e79fd", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83", Pod:"coredns-66bc5c9577-zlj9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali556aa9a087f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.770 [INFO][5278] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.770 [INFO][5278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" iface="eth0" netns="" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.770 [INFO][5278] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.770 [INFO][5278] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.798 [INFO][5285] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.799 [INFO][5285] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.799 [INFO][5285] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.810 [WARNING][5285] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.810 [INFO][5285] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.812 [INFO][5285] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.816701 containerd[1499]: 2025-10-31 01:46:02.814 [INFO][5278] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.816701 containerd[1499]: time="2025-10-31T01:46:02.816667985Z" level=info msg="TearDown network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" successfully" Oct 31 01:46:02.818256 containerd[1499]: time="2025-10-31T01:46:02.816711680Z" level=info msg="StopPodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" returns successfully" Oct 31 01:46:02.818256 containerd[1499]: time="2025-10-31T01:46:02.817561052Z" level=info msg="RemovePodSandbox for \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" Oct 31 01:46:02.818256 containerd[1499]: time="2025-10-31T01:46:02.817654494Z" level=info msg="Forcibly stopping sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\"" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.864 [WARNING][5299] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b865c705-77c3-44e2-b527-f7fc482e79fd", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ac76d631b54456cd03d10f5847ec582bb0c981866a4c076d798e30b3a7d07d83", Pod:"coredns-66bc5c9577-zlj9c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali556aa9a087f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.865 [INFO][5299] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.865 [INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" iface="eth0" netns="" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.865 [INFO][5299] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.865 [INFO][5299] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.892 [INFO][5306] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.892 [INFO][5306] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.892 [INFO][5306] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.903 [WARNING][5306] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.903 [INFO][5306] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" HandleID="k8s-pod-network.f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--zlj9c-eth0" Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.905 [INFO][5306] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:46:02.909475 containerd[1499]: 2025-10-31 01:46:02.907 [INFO][5299] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265" Oct 31 01:46:02.910620 containerd[1499]: time="2025-10-31T01:46:02.909566732Z" level=info msg="TearDown network for sandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" successfully" Oct 31 01:46:02.913966 containerd[1499]: time="2025-10-31T01:46:02.913918334Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
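Editor's note: each forced removal above ends with the same warning/info pair ("Failed to get podSandbox status ... not found. Sending the event with nil podSandboxStatus." followed by "RemovePodSandbox ... returns successfully"), i.e. a delete path that treats "already gone" as success. The sketch below illustrates that idempotent-delete shape under stated assumptions; store and forceRemove are invented names for illustration and this is not containerd's CRI code.

    // Sketch of an idempotent "force remove" that tolerates a missing record,
    // mirroring the warning/info pair in the log. Hypothetical types only.
    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    )

    var errSandboxNotFound = errors.New("sandbox not found")

    type store struct{ sandboxes map[string]struct{} }

    func (s *store) status(id string) error {
    	if _, ok := s.sandboxes[id]; !ok {
    		return errSandboxNotFound
    	}
    	return nil
    }

    func (s *store) remove(id string) { delete(s.sandboxes, id) }

    // forceRemove tries to load the sandbox record for the lifecycle event,
    // but never fails the removal just because the record has already gone.
    func forceRemove(s *store, id string) error {
    	if err := s.status(id); err != nil {
    		if !errors.Is(err, errSandboxNotFound) {
    			return err
    		}
    		log.Printf("warning: no status for %s; sending event with nil status", id)
    	}
    	s.remove(id) // no-op if the sandbox is already gone
    	return nil
    }

    func main() {
    	s := &store{sandboxes: map[string]struct{}{}}
    	if err := forceRemove(s, "sandbox-already-removed"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("remove returned successfully")
    }

This matches the outcome recorded next for this sandbox: the status lookup fails with "not found", a warning is logged, and RemovePodSandbox still returns successfully.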
Oct 31 01:46:02.914056 containerd[1499]: time="2025-10-31T01:46:02.913998758Z" level=info msg="RemovePodSandbox \"f096be653a6e995b4fd584f3d5cf54e3b413557ccb2342ee1dfb528e574b8265\" returns successfully" Oct 31 01:46:03.824917 sshd[5085]: Invalid user user from 45.140.17.124 port 50606 Oct 31 01:46:04.402484 sshd[5319]: pam_faillock(sshd:auth): User unknown Oct 31 01:46:04.406927 sshd[5085]: Postponed keyboard-interactive for invalid user user from 45.140.17.124 port 50606 ssh2 [preauth] Oct 31 01:46:04.969879 sshd[5319]: pam_unix(sshd:auth): check pass; user unknown Oct 31 01:46:04.969938 sshd[5319]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=45.140.17.124 Oct 31 01:46:04.970892 sshd[5319]: pam_faillock(sshd:auth): User unknown Oct 31 01:46:06.652963 sshd[5085]: PAM: Permission denied for illegal user user from 45.140.17.124 Oct 31 01:46:06.653725 sshd[5085]: Failed keyboard-interactive/pam for invalid user user from 45.140.17.124 port 50606 ssh2 Oct 31 01:46:07.056817 sshd[5085]: Connection reset by invalid user user 45.140.17.124 port 50606 [preauth] Oct 31 01:46:07.063554 systemd[1]: sshd@13-10.230.44.66:22-45.140.17.124:50606.service: Deactivated successfully. Oct 31 01:46:07.579193 containerd[1499]: time="2025-10-31T01:46:07.578903499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:46:07.887244 containerd[1499]: time="2025-10-31T01:46:07.887104099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:07.888864 containerd[1499]: time="2025-10-31T01:46:07.888811703Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:46:07.889068 containerd[1499]: time="2025-10-31T01:46:07.888981263Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:07.890864 kubelet[2672]: E1031 01:46:07.889410 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:07.890864 kubelet[2672]: E1031 01:46:07.889509 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:07.890864 kubelet[2672]: E1031 01:46:07.889748 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:07.890864 kubelet[2672]: E1031 01:46:07.889839 2672 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:46:10.580268 containerd[1499]: time="2025-10-31T01:46:10.579756550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:46:10.902931 containerd[1499]: time="2025-10-31T01:46:10.902849292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:10.904306 containerd[1499]: time="2025-10-31T01:46:10.904183194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:46:10.904306 containerd[1499]: time="2025-10-31T01:46:10.904229178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:46:10.904795 kubelet[2672]: E1031 01:46:10.904702 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:46:10.905321 kubelet[2672]: E1031 01:46:10.904803 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:46:10.905321 kubelet[2672]: E1031 01:46:10.904964 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:10.906620 containerd[1499]: time="2025-10-31T01:46:10.906569416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:46:11.225046 containerd[1499]: time="2025-10-31T01:46:11.224856095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:11.226285 containerd[1499]: time="2025-10-31T01:46:11.226233190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:46:11.227307 containerd[1499]: time="2025-10-31T01:46:11.226352030Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:46:11.227402 kubelet[2672]: E1031 01:46:11.226549 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:46:11.227402 kubelet[2672]: E1031 01:46:11.226649 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:46:11.227402 kubelet[2672]: E1031 01:46:11.226772 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:11.227564 kubelet[2672]: E1031 01:46:11.226844 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:46:11.579376 containerd[1499]: time="2025-10-31T01:46:11.579225692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:46:11.900475 containerd[1499]: time="2025-10-31T01:46:11.900382842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:11.901697 containerd[1499]: time="2025-10-31T01:46:11.901570609Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:46:11.901697 containerd[1499]: time="2025-10-31T01:46:11.901631916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:11.902289 kubelet[2672]: E1031 01:46:11.901945 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:11.902289 kubelet[2672]: E1031 01:46:11.902046 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:11.902458 kubelet[2672]: E1031 01:46:11.902308 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:11.902458 kubelet[2672]: E1031 01:46:11.902388 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:46:11.903420 containerd[1499]: time="2025-10-31T01:46:11.903355883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:46:12.221274 containerd[1499]: time="2025-10-31T01:46:12.220947434Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:12.222737 containerd[1499]: time="2025-10-31T01:46:12.222470273Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:46:12.222737 containerd[1499]: time="2025-10-31T01:46:12.222519235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:12.222892 kubelet[2672]: E1031 01:46:12.222825 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:46:12.223373 kubelet[2672]: E1031 01:46:12.222912 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:46:12.223373 kubelet[2672]: E1031 01:46:12.223046 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:12.223373 kubelet[2672]: E1031 01:46:12.223102 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:46:12.580695 containerd[1499]: time="2025-10-31T01:46:12.579173419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:46:12.912645 containerd[1499]: time="2025-10-31T01:46:12.912530396Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:12.914147 containerd[1499]: time="2025-10-31T01:46:12.914086633Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:46:12.914307 containerd[1499]: time="2025-10-31T01:46:12.914124647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:46:12.914457 kubelet[2672]: E1031 01:46:12.914395 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:46:12.914567 kubelet[2672]: E1031 01:46:12.914471 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:46:12.914690 kubelet[2672]: E1031 01:46:12.914659 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:12.914817 kubelet[2672]: E1031 01:46:12.914715 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:46:15.579786 containerd[1499]: time="2025-10-31T01:46:15.579737182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:46:15.891480 containerd[1499]: time="2025-10-31T01:46:15.891227386Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:15.892632 containerd[1499]: time="2025-10-31T01:46:15.892429843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:46:15.892632 containerd[1499]: time="2025-10-31T01:46:15.892488631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:46:15.893053 kubelet[2672]: E1031 01:46:15.892988 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:46:15.893568 kubelet[2672]: E1031 01:46:15.893061 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:46:15.893568 kubelet[2672]: E1031 01:46:15.893168 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:15.894949 containerd[1499]: time="2025-10-31T01:46:15.894548159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:46:16.216657 containerd[1499]: time="2025-10-31T01:46:16.216388858Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:16.217674 containerd[1499]: time="2025-10-31T01:46:16.217625950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:46:16.217863 containerd[1499]: time="2025-10-31T01:46:16.217729698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:46:16.218023 kubelet[2672]: E1031 01:46:16.217964 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:46:16.218097 kubelet[2672]: E1031 01:46:16.218038 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:46:16.218183 kubelet[2672]: E1031 01:46:16.218150 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:16.218594 kubelet[2672]: E1031 01:46:16.218217 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:21.578934 kubelet[2672]: E1031 01:46:21.578536 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:46:22.582389 kubelet[2672]: E1031 01:46:22.580157 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:46:22.591652 kubelet[2672]: E1031 01:46:22.591561 2672 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:46:25.579521 kubelet[2672]: E1031 01:46:25.579381 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:46:27.580608 kubelet[2672]: E1031 01:46:27.579711 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:46:28.581118 kubelet[2672]: E1031 01:46:28.580946 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:32.580713 containerd[1499]: time="2025-10-31T01:46:32.580663895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:46:32.905440 containerd[1499]: time="2025-10-31T01:46:32.904963710Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:32.906652 containerd[1499]: time="2025-10-31T01:46:32.906567013Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:46:32.908392 containerd[1499]: time="2025-10-31T01:46:32.906627004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:32.908798 kubelet[2672]: E1031 01:46:32.907139 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:32.908798 kubelet[2672]: E1031 01:46:32.907229 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:32.908798 kubelet[2672]: E1031 01:46:32.907377 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:32.908798 kubelet[2672]: E1031 01:46:32.907434 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:46:36.585965 containerd[1499]: time="2025-10-31T01:46:36.585199812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:46:36.896778 containerd[1499]: time="2025-10-31T01:46:36.896418117Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:36.898134 containerd[1499]: time="2025-10-31T01:46:36.898084130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:46:36.898240 containerd[1499]: time="2025-10-31T01:46:36.898187692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:36.899301 kubelet[2672]: E1031 01:46:36.898539 2672 log.go:32] "PullImage from 
image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:36.899301 kubelet[2672]: E1031 01:46:36.898648 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:46:36.899301 kubelet[2672]: E1031 01:46:36.898770 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:36.899301 kubelet[2672]: E1031 01:46:36.898819 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:46:37.580288 containerd[1499]: time="2025-10-31T01:46:37.580179148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:46:37.899333 containerd[1499]: time="2025-10-31T01:46:37.899234124Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:37.901455 containerd[1499]: time="2025-10-31T01:46:37.900329916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:46:37.901455 containerd[1499]: time="2025-10-31T01:46:37.900420813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:46:37.901830 kubelet[2672]: E1031 01:46:37.900644 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:46:37.901830 kubelet[2672]: E1031 01:46:37.900720 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:46:37.904373 kubelet[2672]: E1031 
01:46:37.904173 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:37.904373 kubelet[2672]: E1031 01:46:37.904262 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:46:39.579869 containerd[1499]: time="2025-10-31T01:46:39.579527446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:46:39.887449 containerd[1499]: time="2025-10-31T01:46:39.887302988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:39.889069 containerd[1499]: time="2025-10-31T01:46:39.888892527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:46:39.889069 containerd[1499]: time="2025-10-31T01:46:39.888937189Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:46:39.889345 kubelet[2672]: E1031 01:46:39.889256 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:46:39.890231 kubelet[2672]: E1031 01:46:39.889341 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:46:39.890231 kubelet[2672]: E1031 01:46:39.889475 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:39.893097 containerd[1499]: time="2025-10-31T01:46:39.892689456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:46:40.214271 containerd[1499]: time="2025-10-31T01:46:40.214030178Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:40.218887 containerd[1499]: 
time="2025-10-31T01:46:40.218807275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 01:46:40.219041 containerd[1499]: time="2025-10-31T01:46:40.218818594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:46:40.220311 kubelet[2672]: E1031 01:46:40.219291 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:46:40.220311 kubelet[2672]: E1031 01:46:40.219400 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:46:40.220311 kubelet[2672]: E1031 01:46:40.219594 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:40.220526 kubelet[2672]: E1031 01:46:40.219669 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:46:41.274113 systemd[1]: Started sshd@14-10.230.44.66:22-147.75.109.163:34050.service - OpenSSH per-connection server daemon (147.75.109.163:34050). Oct 31 01:46:42.234610 sshd[5381]: Accepted publickey for core from 147.75.109.163 port 34050 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:46:42.236516 sshd[5381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:46:42.257283 systemd-logind[1485]: New session 12 of user core. Oct 31 01:46:42.266242 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 31 01:46:42.581714 containerd[1499]: time="2025-10-31T01:46:42.580364761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:46:42.901530 containerd[1499]: time="2025-10-31T01:46:42.901462687Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:42.902874 containerd[1499]: time="2025-10-31T01:46:42.902432884Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:46:42.902874 containerd[1499]: time="2025-10-31T01:46:42.902536193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:46:42.903641 kubelet[2672]: E1031 01:46:42.903185 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:46:42.903641 kubelet[2672]: E1031 01:46:42.903257 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:46:42.906188 kubelet[2672]: E1031 01:46:42.904083 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:42.906503 containerd[1499]: time="2025-10-31T01:46:42.903724139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:46:43.228101 containerd[1499]: time="2025-10-31T01:46:43.227812408Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:43.229432 containerd[1499]: time="2025-10-31T01:46:43.229382228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:46:43.230461 containerd[1499]: time="2025-10-31T01:46:43.229567397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:46:43.232215 kubelet[2672]: E1031 01:46:43.231904 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:46:43.232215 kubelet[2672]: 
E1031 01:46:43.231976 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:46:43.233808 kubelet[2672]: E1031 01:46:43.232227 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:43.233808 kubelet[2672]: E1031 01:46:43.233732 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:46:43.234059 containerd[1499]: time="2025-10-31T01:46:43.232487895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:46:43.546860 containerd[1499]: time="2025-10-31T01:46:43.546688728Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:46:43.548020 containerd[1499]: time="2025-10-31T01:46:43.547960360Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:46:43.561563 containerd[1499]: time="2025-10-31T01:46:43.561479546Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:46:43.561960 kubelet[2672]: E1031 01:46:43.561905 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:46:43.562097 kubelet[2672]: E1031 01:46:43.561977 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:46:43.562180 kubelet[2672]: E1031 01:46:43.562097 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
csi-node-driver-registrar start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:46:43.562291 kubelet[2672]: E1031 01:46:43.562173 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:43.629844 sshd[5381]: pam_unix(sshd:session): session closed for user core Oct 31 01:46:43.637750 systemd[1]: sshd@14-10.230.44.66:22-147.75.109.163:34050.service: Deactivated successfully. Oct 31 01:46:43.641563 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 01:46:43.642580 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Oct 31 01:46:43.644943 systemd-logind[1485]: Removed session 12. Oct 31 01:46:48.585261 kubelet[2672]: E1031 01:46:48.584954 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:46:48.586302 kubelet[2672]: E1031 01:46:48.586071 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:46:48.790945 systemd[1]: Started sshd@15-10.230.44.66:22-147.75.109.163:34052.service - OpenSSH per-connection server daemon (147.75.109.163:34052). Oct 31 01:46:49.718103 sshd[5395]: Accepted publickey for core from 147.75.109.163 port 34052 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:46:49.720420 sshd[5395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:46:49.731633 systemd-logind[1485]: New session 13 of user core. 
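From 01:46:21 onward the kubelet alternates between fresh ErrImagePull attempts and ImagePullBackOff, i.e. it retries each image on an exponential backoff instead of hammering the registry. A rough sketch of that cadence; the 10-second initial delay and 5-minute cap are commonly documented kubelet defaults and are assumptions here, not values read from this node's configuration:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: image pull backoff starts at 10s, doubles per failure,
	// and is capped at 5m; actual values depend on kubelet version and config.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute

	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed -> back off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

That growing delay is why the retry bursts in the log space out over time while the error text stays identical.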
Oct 31 01:46:49.737212 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 01:46:50.468379 sshd[5395]: pam_unix(sshd:session): session closed for user core Oct 31 01:46:50.473326 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Oct 31 01:46:50.475376 systemd[1]: sshd@15-10.230.44.66:22-147.75.109.163:34052.service: Deactivated successfully. Oct 31 01:46:50.480129 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 01:46:50.484739 systemd-logind[1485]: Removed session 13. Oct 31 01:46:50.580287 kubelet[2672]: E1031 01:46:50.579800 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:46:51.579600 kubelet[2672]: E1031 01:46:51.579476 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:46:55.625961 systemd[1]: Started sshd@16-10.230.44.66:22-147.75.109.163:47110.service - OpenSSH per-connection server daemon (147.75.109.163:47110). Oct 31 01:46:56.527643 sshd[5409]: Accepted publickey for core from 147.75.109.163 port 47110 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:46:56.539684 sshd[5409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:46:56.548420 systemd-logind[1485]: New session 14 of user core. Oct 31 01:46:56.553824 systemd[1]: Started session-14.scope - Session 14 of User core. 
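Interleaved with the pull retries, sshd is handling two very different kinds of traffic: the rejected "Invalid user user" probe from 45.140.17.124 earlier in the log, and the legitimate publickey sessions 12 through 16 for the core user from 147.75.109.163. A small sketch that separates the two from journal text on stdin; the regular expressions are assumptions modelled on the lines above, not a general sshd parser:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Patterns modelled on the sshd lines in this log; adjust for other formats.
	accepted := regexp.MustCompile(`Accepted publickey for (\S+) from (\S+) port (\d+)`)
	invalid := regexp.MustCompile(`Invalid user (\S+) from (\S+) port (\d+)`)

	okCount, badCount := 0, 0
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if m := accepted.FindStringSubmatch(line); m != nil {
			okCount++
			fmt.Printf("accepted: user=%s src=%s:%s\n", m[1], m[2], m[3])
		} else if m := invalid.FindStringSubmatch(line); m != nil {
			badCount++
			fmt.Printf("rejected: user=%s src=%s:%s\n", m[1], m[2], m[3])
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("accepted=%d invalid=%d\n", okCount, badCount)
}
```

Something like `journalctl -t sshd | go run sshd_summary.go` would feed it (the file name is whatever the sketch above is saved as).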
Oct 31 01:46:56.581422 kubelet[2672]: E1031 01:46:56.580989 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:46:56.583739 kubelet[2672]: E1031 01:46:56.583342 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:46:57.370861 sshd[5409]: pam_unix(sshd:session): session closed for user core Oct 31 01:46:57.375035 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Oct 31 01:46:57.375553 systemd[1]: sshd@16-10.230.44.66:22-147.75.109.163:47110.service: Deactivated successfully. Oct 31 01:46:57.380486 systemd[1]: session-14.scope: Deactivated successfully. Oct 31 01:46:57.385144 systemd-logind[1485]: Removed session 14. Oct 31 01:46:57.538770 systemd[1]: Started sshd@17-10.230.44.66:22-147.75.109.163:47124.service - OpenSSH per-connection server daemon (147.75.109.163:47124). Oct 31 01:46:58.470390 sshd[5444]: Accepted publickey for core from 147.75.109.163 port 47124 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:46:58.472947 sshd[5444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:46:58.483818 systemd-logind[1485]: New session 15 of user core. Oct 31 01:46:58.489113 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 31 01:46:59.306250 sshd[5444]: pam_unix(sshd:session): session closed for user core Oct 31 01:46:59.312461 systemd[1]: sshd@17-10.230.44.66:22-147.75.109.163:47124.service: Deactivated successfully. Oct 31 01:46:59.316833 systemd[1]: session-15.scope: Deactivated successfully. Oct 31 01:46:59.318514 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Oct 31 01:46:59.320328 systemd-logind[1485]: Removed session 15. Oct 31 01:46:59.467953 systemd[1]: Started sshd@18-10.230.44.66:22-147.75.109.163:47132.service - OpenSSH per-connection server daemon (147.75.109.163:47132). 
Oct 31 01:46:59.586610 kubelet[2672]: E1031 01:46:59.585406 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:47:00.385251 sshd[5456]: Accepted publickey for core from 147.75.109.163 port 47132 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:00.388060 sshd[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:00.395816 systemd-logind[1485]: New session 16 of user core. Oct 31 01:47:00.400789 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 31 01:47:00.583080 kubelet[2672]: E1031 01:47:00.582886 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:47:01.137929 sshd[5456]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:01.143804 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Oct 31 01:47:01.145434 systemd[1]: sshd@18-10.230.44.66:22-147.75.109.163:47132.service: Deactivated successfully. Oct 31 01:47:01.150592 systemd[1]: session-16.scope: Deactivated successfully. Oct 31 01:47:01.156712 systemd-logind[1485]: Removed session 16. 
Oct 31 01:47:02.581826 kubelet[2672]: E1031 01:47:02.580472 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:47:02.948391 containerd[1499]: time="2025-10-31T01:47:02.948053823Z" level=info msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.079 [WARNING][5481] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855", Pod:"coredns-66bc5c9577-f9xmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacb845be54f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.079 [INFO][5481] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.079 [INFO][5481] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" iface="eth0" netns="" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.079 [INFO][5481] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.079 [INFO][5481] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.147 [INFO][5488] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.148 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.148 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.161 [WARNING][5488] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.161 [INFO][5488] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.163 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:47:03.172955 containerd[1499]: 2025-10-31 01:47:03.166 [INFO][5481] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.172955 containerd[1499]: time="2025-10-31T01:47:03.172499209Z" level=info msg="TearDown network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" successfully" Oct 31 01:47:03.172955 containerd[1499]: time="2025-10-31T01:47:03.172535381Z" level=info msg="StopPodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" returns successfully" Oct 31 01:47:03.173954 containerd[1499]: time="2025-10-31T01:47:03.173347604Z" level=info msg="RemovePodSandbox for \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" Oct 31 01:47:03.173954 containerd[1499]: time="2025-10-31T01:47:03.173432751Z" level=info msg="Forcibly stopping sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\"" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.261 [WARNING][5502] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e995549a-d4b6-43b7-9c52-4c9c14a4dcdf", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 1, 45, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-n5tpq.gb1.brightbox.com", ContainerID:"ff994504f0ab6ee2c4f8e69384a4e019003a458d9a54d8d9e490d8e9fb949855", Pod:"coredns-66bc5c9577-f9xmz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliacb845be54f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.261 [INFO][5502] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.340027 containerd[1499]: 
2025-10-31 01:47:03.261 [INFO][5502] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" iface="eth0" netns="" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.261 [INFO][5502] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.261 [INFO][5502] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.311 [INFO][5509] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.311 [INFO][5509] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.311 [INFO][5509] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.326 [WARNING][5509] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.326 [INFO][5509] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" HandleID="k8s-pod-network.a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Workload="srv--n5tpq.gb1.brightbox.com-k8s-coredns--66bc5c9577--f9xmz-eth0" Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.329 [INFO][5509] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 01:47:03.340027 containerd[1499]: 2025-10-31 01:47:03.334 [INFO][5502] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43" Oct 31 01:47:03.340027 containerd[1499]: time="2025-10-31T01:47:03.339903542Z" level=info msg="TearDown network for sandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" successfully" Oct 31 01:47:03.344966 containerd[1499]: time="2025-10-31T01:47:03.344929893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Oct 31 01:47:03.345330 containerd[1499]: time="2025-10-31T01:47:03.345154132Z" level=info msg="RemovePodSandbox \"a5b32f0fb504fd45207349559dcd0e8784f0d0bc4783ac091f63613dc1b9af43\" returns successfully" Oct 31 01:47:04.584893 kubelet[2672]: E1031 01:47:04.584806 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:47:06.303027 systemd[1]: Started sshd@19-10.230.44.66:22-147.75.109.163:44576.service - OpenSSH per-connection server daemon (147.75.109.163:44576). Oct 31 01:47:07.253620 sshd[5516]: Accepted publickey for core from 147.75.109.163 port 44576 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:07.257174 sshd[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:07.269433 systemd-logind[1485]: New session 17 of user core. Oct 31 01:47:07.276539 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 31 01:47:08.015877 sshd[5516]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:08.020210 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Oct 31 01:47:08.022761 systemd[1]: sshd@19-10.230.44.66:22-147.75.109.163:44576.service: Deactivated successfully. Oct 31 01:47:08.028099 systemd[1]: session-17.scope: Deactivated successfully. Oct 31 01:47:08.032235 systemd-logind[1485]: Removed session 17. 
Oct 31 01:47:10.588536 kubelet[2672]: E1031 01:47:10.588439 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:47:10.592757 kubelet[2672]: E1031 01:47:10.591363 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:47:12.582733 kubelet[2672]: E1031 01:47:12.582678 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:47:13.179570 systemd[1]: Started sshd@20-10.230.44.66:22-147.75.109.163:48154.service - OpenSSH per-connection server daemon (147.75.109.163:48154). Oct 31 01:47:14.087616 sshd[5537]: Accepted publickey for core from 147.75.109.163 port 48154 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:14.089515 sshd[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:14.099446 systemd-logind[1485]: New session 18 of user core. Oct 31 01:47:14.106316 systemd[1]: Started session-18.scope - Session 18 of User core. 
Oct 31 01:47:14.581798 kubelet[2672]: E1031 01:47:14.580337 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:47:14.852301 sshd[5537]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:14.859138 systemd[1]: sshd@20-10.230.44.66:22-147.75.109.163:48154.service: Deactivated successfully. Oct 31 01:47:14.859337 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Oct 31 01:47:14.863215 systemd[1]: session-18.scope: Deactivated successfully. Oct 31 01:47:14.869345 systemd-logind[1485]: Removed session 18. Oct 31 01:47:17.581420 kubelet[2672]: E1031 01:47:17.581319 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:47:19.582341 containerd[1499]: time="2025-10-31T01:47:19.581559338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 01:47:19.900016 containerd[1499]: time="2025-10-31T01:47:19.899954128Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:19.901264 containerd[1499]: time="2025-10-31T01:47:19.901189182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 01:47:19.901429 containerd[1499]: time="2025-10-31T01:47:19.901236717Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 01:47:19.903674 kubelet[2672]: E1031 01:47:19.901719 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:47:19.903674 kubelet[2672]: E1031 01:47:19.901802 2672 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 01:47:19.903674 kubelet[2672]: E1031 01:47:19.901963 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-tgfh4_calico-system(7238a7c0-ff8f-443f-ab69-f2ee0be198c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:19.903674 kubelet[2672]: E1031 01:47:19.902011 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:47:20.002405 systemd[1]: Started sshd@21-10.230.44.66:22-147.75.109.163:48170.service - OpenSSH per-connection server daemon (147.75.109.163:48170). Oct 31 01:47:20.935979 sshd[5556]: Accepted publickey for core from 147.75.109.163 port 48170 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:20.938393 sshd[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:20.949155 systemd-logind[1485]: New session 19 of user core. Oct 31 01:47:20.956823 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 31 01:47:21.584998 kubelet[2672]: E1031 01:47:21.584911 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:47:21.774724 sshd[5556]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:21.784705 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Oct 31 01:47:21.785362 systemd[1]: sshd@21-10.230.44.66:22-147.75.109.163:48170.service: Deactivated successfully. Oct 31 01:47:21.788830 systemd[1]: session-19.scope: Deactivated successfully. Oct 31 01:47:21.791553 systemd-logind[1485]: Removed session 19. 
Oct 31 01:47:21.937961 systemd[1]: Started sshd@22-10.230.44.66:22-147.75.109.163:51660.service - OpenSSH per-connection server daemon (147.75.109.163:51660). Oct 31 01:47:22.854181 sshd[5571]: Accepted publickey for core from 147.75.109.163 port 51660 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:22.856695 sshd[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:22.867552 systemd-logind[1485]: New session 20 of user core. Oct 31 01:47:22.872769 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 31 01:47:23.579818 containerd[1499]: time="2025-10-31T01:47:23.579571123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:47:23.908832 containerd[1499]: time="2025-10-31T01:47:23.908753244Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:23.910042 containerd[1499]: time="2025-10-31T01:47:23.909792831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:47:23.910751 containerd[1499]: time="2025-10-31T01:47:23.910054027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:47:23.911126 kubelet[2672]: E1031 01:47:23.911074 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:47:23.911753 kubelet[2672]: E1031 01:47:23.911143 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:47:23.911753 kubelet[2672]: E1031 01:47:23.911247 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-6wxlj_calico-apiserver(f712c016-8a6e-4625-aab4-a80c982f13bc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:23.911753 kubelet[2672]: E1031 01:47:23.911320 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:47:24.005919 sshd[5571]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:24.015531 systemd-logind[1485]: Session 20 logged out. 
Waiting for processes to exit. Oct 31 01:47:24.017059 systemd[1]: sshd@22-10.230.44.66:22-147.75.109.163:51660.service: Deactivated successfully. Oct 31 01:47:24.025464 systemd[1]: session-20.scope: Deactivated successfully. Oct 31 01:47:24.029649 systemd-logind[1485]: Removed session 20. Oct 31 01:47:24.163174 systemd[1]: Started sshd@23-10.230.44.66:22-147.75.109.163:51672.service - OpenSSH per-connection server daemon (147.75.109.163:51672). Oct 31 01:47:24.581428 containerd[1499]: time="2025-10-31T01:47:24.581276668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 01:47:24.918372 containerd[1499]: time="2025-10-31T01:47:24.918271352Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:24.919468 containerd[1499]: time="2025-10-31T01:47:24.919422558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 01:47:24.919720 containerd[1499]: time="2025-10-31T01:47:24.919569386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 01:47:24.921522 kubelet[2672]: E1031 01:47:24.920785 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:47:24.921522 kubelet[2672]: E1031 01:47:24.920854 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 01:47:24.921522 kubelet[2672]: E1031 01:47:24.920951 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-84c75ff6b-nf76t_calico-system(b0d8846e-dd18-434c-b179-e3c2878ecf3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:24.921522 kubelet[2672]: E1031 01:47:24.921001 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:47:25.094825 sshd[5583]: Accepted publickey for core from 147.75.109.163 port 51672 ssh2: RSA 
SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:25.096716 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:25.106078 systemd-logind[1485]: New session 21 of user core. Oct 31 01:47:25.112821 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 31 01:47:26.034351 systemd[1]: run-containerd-runc-k8s.io-e583f86e7bf9b46c2aa0ddb201feafec74abac7f40303c2fec25b991ce7c8b19-runc.WunQS9.mount: Deactivated successfully. Oct 31 01:47:26.588692 containerd[1499]: time="2025-10-31T01:47:26.587608185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 01:47:26.901607 containerd[1499]: time="2025-10-31T01:47:26.900830001Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:26.902050 containerd[1499]: time="2025-10-31T01:47:26.901836476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 01:47:26.902050 containerd[1499]: time="2025-10-31T01:47:26.901959331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 01:47:26.903127 kubelet[2672]: E1031 01:47:26.902334 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:47:26.903127 kubelet[2672]: E1031 01:47:26.902660 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 01:47:26.915638 kubelet[2672]: E1031 01:47:26.915480 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6b58b8b6b-r89xp_calico-apiserver(f4abe04b-b171-45a2-9c26-8c077d6bf990): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:26.923103 kubelet[2672]: E1031 01:47:26.923033 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:47:27.037341 sshd[5583]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:27.050485 systemd[1]: sshd@23-10.230.44.66:22-147.75.109.163:51672.service: Deactivated successfully. 
Oct 31 01:47:27.056240 systemd[1]: session-21.scope: Deactivated successfully. Oct 31 01:47:27.060212 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Oct 31 01:47:27.063292 systemd-logind[1485]: Removed session 21. Oct 31 01:47:27.202025 systemd[1]: Started sshd@24-10.230.44.66:22-147.75.109.163:51678.service - OpenSSH per-connection server daemon (147.75.109.163:51678). Oct 31 01:47:28.169454 sshd[5623]: Accepted publickey for core from 147.75.109.163 port 51678 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:28.172122 sshd[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:28.182699 systemd-logind[1485]: New session 22 of user core. Oct 31 01:47:28.186096 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 31 01:47:29.240224 sshd[5623]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:29.247384 systemd[1]: sshd@24-10.230.44.66:22-147.75.109.163:51678.service: Deactivated successfully. Oct 31 01:47:29.252889 systemd[1]: session-22.scope: Deactivated successfully. Oct 31 01:47:29.258158 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Oct 31 01:47:29.260564 systemd-logind[1485]: Removed session 22. Oct 31 01:47:29.397906 systemd[1]: Started sshd@25-10.230.44.66:22-147.75.109.163:51688.service - OpenSSH per-connection server daemon (147.75.109.163:51688). Oct 31 01:47:30.324658 sshd[5636]: Accepted publickey for core from 147.75.109.163 port 51688 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:30.327581 sshd[5636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:30.336049 systemd-logind[1485]: New session 23 of user core. Oct 31 01:47:30.341792 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 31 01:47:31.077959 sshd[5636]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:31.082145 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Oct 31 01:47:31.086378 systemd[1]: sshd@25-10.230.44.66:22-147.75.109.163:51688.service: Deactivated successfully. Oct 31 01:47:31.090005 systemd[1]: session-23.scope: Deactivated successfully. Oct 31 01:47:31.092088 systemd-logind[1485]: Removed session 23. 
Oct 31 01:47:32.582427 kubelet[2672]: E1031 01:47:32.582308 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:47:32.605522 containerd[1499]: time="2025-10-31T01:47:32.605466553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 01:47:32.930090 containerd[1499]: time="2025-10-31T01:47:32.929993204Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:32.931503 containerd[1499]: time="2025-10-31T01:47:32.931375102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 01:47:32.931647 containerd[1499]: time="2025-10-31T01:47:32.931501830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 01:47:32.933150 kubelet[2672]: E1031 01:47:32.932032 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:47:32.933150 kubelet[2672]: E1031 01:47:32.932134 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 01:47:32.933150 kubelet[2672]: E1031 01:47:32.932299 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:32.935140 containerd[1499]: time="2025-10-31T01:47:32.934248534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 01:47:33.256478 containerd[1499]: time="2025-10-31T01:47:33.255699967Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:33.257801 containerd[1499]: time="2025-10-31T01:47:33.257274396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" Oct 31 01:47:33.257801 containerd[1499]: time="2025-10-31T01:47:33.257353165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 01:47:33.257994 kubelet[2672]: E1031 01:47:33.257671 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:47:33.257994 kubelet[2672]: E1031 01:47:33.257753 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 01:47:33.257994 kubelet[2672]: E1031 01:47:33.257890 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-67bdd86bbf-pj5dz_calico-system(9911312f-9ce5-498d-99df-b48b4eafeab7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:33.258446 kubelet[2672]: E1031 01:47:33.257993 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:47:34.607416 kubelet[2672]: E1031 01:47:34.606940 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:47:35.580334 containerd[1499]: time="2025-10-31T01:47:35.580114179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 01:47:35.615234 kubelet[2672]: E1031 01:47:35.615131 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-84c75ff6b-nf76t" podUID="b0d8846e-dd18-434c-b179-e3c2878ecf3f" Oct 31 01:47:35.963253 containerd[1499]: time="2025-10-31T01:47:35.963016674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:35.964425 containerd[1499]: time="2025-10-31T01:47:35.964269525Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 01:47:35.964425 containerd[1499]: time="2025-10-31T01:47:35.964291060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 01:47:35.965106 kubelet[2672]: E1031 01:47:35.964825 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:47:35.965106 kubelet[2672]: E1031 01:47:35.964940 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 01:47:35.965421 kubelet[2672]: E1031 01:47:35.965080 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:35.967014 containerd[1499]: time="2025-10-31T01:47:35.966451352Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 01:47:36.241001 systemd[1]: Started sshd@26-10.230.44.66:22-147.75.109.163:52648.service - OpenSSH per-connection server daemon (147.75.109.163:52648). 
Oct 31 01:47:36.275843 containerd[1499]: time="2025-10-31T01:47:36.275759463Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Oct 31 01:47:36.281240 containerd[1499]: time="2025-10-31T01:47:36.281189208Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 01:47:36.281655 containerd[1499]: time="2025-10-31T01:47:36.281200998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 01:47:36.282133 kubelet[2672]: E1031 01:47:36.281989 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:47:36.282133 kubelet[2672]: E1031 01:47:36.282088 2672 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 01:47:36.282784 kubelet[2672]: E1031 01:47:36.282411 2672 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-lvbwj_calico-system(b91990ed-b519-4003-921b-695c5958edac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 01:47:36.282784 kubelet[2672]: E1031 01:47:36.282544 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac" Oct 31 01:47:37.148624 sshd[5669]: Accepted publickey for core from 147.75.109.163 port 52648 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:37.150306 sshd[5669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:37.160348 systemd-logind[1485]: New session 24 of user core. 
Oct 31 01:47:37.168875 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 31 01:47:37.915277 sshd[5669]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:37.920316 systemd[1]: sshd@26-10.230.44.66:22-147.75.109.163:52648.service: Deactivated successfully. Oct 31 01:47:37.920842 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit. Oct 31 01:47:37.926155 systemd[1]: session-24.scope: Deactivated successfully. Oct 31 01:47:37.932219 systemd-logind[1485]: Removed session 24. Oct 31 01:47:38.585614 kubelet[2672]: E1031 01:47:38.584814 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-r89xp" podUID="f4abe04b-b171-45a2-9c26-8c077d6bf990" Oct 31 01:47:43.081975 systemd[1]: Started sshd@27-10.230.44.66:22-147.75.109.163:57950.service - OpenSSH per-connection server daemon (147.75.109.163:57950). Oct 31 01:47:44.053676 sshd[5686]: Accepted publickey for core from 147.75.109.163 port 57950 ssh2: RSA SHA256:d+nLrY8Dsc9/yJeymnhT6SHXxGEkOkD6rfqu967eLjU Oct 31 01:47:44.058648 sshd[5686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 01:47:44.074490 systemd-logind[1485]: New session 25 of user core. Oct 31 01:47:44.080827 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 31 01:47:44.882838 sshd[5686]: pam_unix(sshd:session): session closed for user core Oct 31 01:47:44.888284 systemd[1]: sshd@27-10.230.44.66:22-147.75.109.163:57950.service: Deactivated successfully. Oct 31 01:47:44.891375 systemd[1]: session-25.scope: Deactivated successfully. Oct 31 01:47:44.893922 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit. Oct 31 01:47:44.895555 systemd-logind[1485]: Removed session 25. 
Oct 31 01:47:45.600252 kubelet[2672]: E1031 01:47:45.600173 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6b58b8b6b-6wxlj" podUID="f712c016-8a6e-4625-aab4-a80c982f13bc" Oct 31 01:47:45.602038 kubelet[2672]: E1031 01:47:45.600556 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-67bdd86bbf-pj5dz" podUID="9911312f-9ce5-498d-99df-b48b4eafeab7" Oct 31 01:47:45.602038 kubelet[2672]: E1031 01:47:45.601307 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-tgfh4" podUID="7238a7c0-ff8f-443f-ab69-f2ee0be198c2" Oct 31 01:47:47.580806 kubelet[2672]: E1031 01:47:47.580718 2672 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lvbwj" podUID="b91990ed-b519-4003-921b-695c5958edac"