Sep 13 01:35:29.013698 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025 Sep 13 01:35:29.013737 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 01:35:29.013761 kernel: BIOS-provided physical RAM map: Sep 13 01:35:29.013784 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 13 01:35:29.013793 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 13 01:35:29.013803 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 13 01:35:29.013814 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Sep 13 01:35:29.013825 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Sep 13 01:35:29.013834 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 13 01:35:29.013844 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 13 01:35:29.013871 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 13 01:35:29.013883 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 13 01:35:29.013899 kernel: NX (Execute Disable) protection: active Sep 13 01:35:29.013909 kernel: APIC: Static calls initialized Sep 13 01:35:29.013921 kernel: SMBIOS 2.8 present. Sep 13 01:35:29.013932 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Sep 13 01:35:29.013943 kernel: Hypervisor detected: KVM Sep 13 01:35:29.013958 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 13 01:35:29.013969 kernel: kvm-clock: using sched offset of 4398023124 cycles Sep 13 01:35:29.013981 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 13 01:35:29.013992 kernel: tsc: Detected 2799.998 MHz processor Sep 13 01:35:29.014003 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 13 01:35:29.014014 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 13 01:35:29.014025 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Sep 13 01:35:29.014036 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 13 01:35:29.014048 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 13 01:35:29.014063 kernel: Using GB pages for direct mapping Sep 13 01:35:29.014074 kernel: ACPI: Early table checksum verification disabled Sep 13 01:35:29.014085 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Sep 13 01:35:29.014096 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014107 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014118 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014129 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Sep 13 01:35:29.014140 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014151 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Sep 13 01:35:29.014166 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014177 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 01:35:29.014188 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Sep 13 01:35:29.014199 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Sep 13 01:35:29.014222 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Sep 13 01:35:29.014238 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Sep 13 01:35:29.014250 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Sep 13 01:35:29.014265 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Sep 13 01:35:29.014276 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Sep 13 01:35:29.014287 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 13 01:35:29.014298 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 13 01:35:29.014322 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 13 01:35:29.014332 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Sep 13 01:35:29.014343 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 13 01:35:29.014357 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Sep 13 01:35:29.014368 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 13 01:35:29.014391 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Sep 13 01:35:29.014402 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 13 01:35:29.014412 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Sep 13 01:35:29.014431 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 13 01:35:29.014442 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Sep 13 01:35:29.014453 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 13 01:35:29.014464 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Sep 13 01:35:29.014474 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 13 01:35:29.014502 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Sep 13 01:35:29.014513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 13 01:35:29.014525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 13 01:35:29.014536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Sep 13 01:35:29.014548 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Sep 13 01:35:29.014563 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Sep 13 01:35:29.014575 kernel: Zone ranges: Sep 13 01:35:29.014586 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 13 01:35:29.014597 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Sep 13 01:35:29.014613 kernel: Normal empty Sep 13 01:35:29.014624 kernel: Movable zone start for each node Sep 13 01:35:29.014636 kernel: Early memory node ranges Sep 13 01:35:29.014647 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 13 01:35:29.014658 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Sep 13 01:35:29.014670 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Sep 13 01:35:29.014681 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 13 01:35:29.014693 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 13 01:35:29.014704 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Sep 13 01:35:29.014715 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 13 01:35:29.014731 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 13 01:35:29.014742 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Sep 13 01:35:29.014770 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Sep 13 01:35:29.014782 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 13 01:35:29.014794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 13 01:35:29.014805 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 13 01:35:29.014816 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 13 01:35:29.014828 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 13 01:35:29.014839 kernel: TSC deadline timer available Sep 13 01:35:29.014867 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Sep 13 01:35:29.014880 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 13 01:35:29.014892 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 13 01:35:29.014903 kernel: Booting paravirtualized kernel on KVM Sep 13 01:35:29.014915 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 13 01:35:29.014927 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 13 01:35:29.014939 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 13 01:35:29.014950 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 13 01:35:29.014961 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 13 01:35:29.014979 kernel: kvm-guest: PV spinlocks enabled Sep 13 01:35:29.014990 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 13 01:35:29.015003 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 01:35:29.015015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 13 01:35:29.015027 kernel: random: crng init done Sep 13 01:35:29.015038 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 01:35:29.015050 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 13 01:35:29.015061 kernel: Fallback order for Node 0: 0 Sep 13 01:35:29.015077 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Sep 13 01:35:29.015089 kernel: Policy zone: DMA32 Sep 13 01:35:29.015100 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 01:35:29.015112 kernel: software IO TLB: area num 16. Sep 13 01:35:29.015124 kernel: Memory: 1901540K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 194816K reserved, 0K cma-reserved) Sep 13 01:35:29.015135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 13 01:35:29.015147 kernel: Kernel/User page tables isolation: enabled Sep 13 01:35:29.015158 kernel: ftrace: allocating 37974 entries in 149 pages Sep 13 01:35:29.015169 kernel: ftrace: allocated 149 pages with 4 groups Sep 13 01:35:29.015185 kernel: Dynamic Preempt: voluntary Sep 13 01:35:29.015197 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 01:35:29.015213 kernel: rcu: RCU event tracing is enabled. 
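
[Editor's aside, not part of the boot log] The command line recorded above (and repeated once the kernel re-parses the bootloader hand-off) carries the parameters that drive the rest of this boot: flatcar.first_boot, flatcar.oem.id=openstack, root=LABEL=ROOT and verity.usrhash. A minimal Python sketch, assuming only a Linux system exposing /proc/cmdline, of splitting such a command line into bare flags and key=value pairs, which is the shape these parameters take:

# Minimal sketch: split the live kernel command line into flags and key=value
# parameters. Duplicate keys (the two console= entries above, for example)
# keep only the last value in this simplified version.

def parse_cmdline(path="/proc/cmdline"):
    with open(path) as f:
        tokens = f.read().split()
    flags, params = [], {}
    for tok in tokens:
        if "=" in tok:
            key, value = tok.split("=", 1)
            params[key] = value
        else:
            flags.append(tok)
    return flags, params

if __name__ == "__main__":
    flags, params = parse_cmdline()
    print("flags:", flags)
    print("root:", params.get("root"))
    print("usr verity hash:", params.get("verity.usrhash"))
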
Sep 13 01:35:29.015225 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 13 01:35:29.015237 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 01:35:29.015260 kernel: Rude variant of Tasks RCU enabled. Sep 13 01:35:29.015276 kernel: Tracing variant of Tasks RCU enabled. Sep 13 01:35:29.015289 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 13 01:35:29.015301 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 13 01:35:29.015313 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Sep 13 01:35:29.015325 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 01:35:29.015337 kernel: Console: colour VGA+ 80x25 Sep 13 01:35:29.015353 kernel: printk: console [tty0] enabled Sep 13 01:35:29.015365 kernel: printk: console [ttyS0] enabled Sep 13 01:35:29.015377 kernel: ACPI: Core revision 20230628 Sep 13 01:35:29.015389 kernel: APIC: Switch to symmetric I/O mode setup Sep 13 01:35:29.015401 kernel: x2apic enabled Sep 13 01:35:29.015417 kernel: APIC: Switched APIC routing to: physical x2apic Sep 13 01:35:29.015430 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 13 01:35:29.015443 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998) Sep 13 01:35:29.015455 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 13 01:35:29.015467 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 13 01:35:29.015479 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 13 01:35:29.015491 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 13 01:35:29.015502 kernel: Spectre V2 : Mitigation: Retpolines Sep 13 01:35:29.015514 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 13 01:35:29.015531 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 13 01:35:29.015543 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 13 01:35:29.015555 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 13 01:35:29.015567 kernel: MDS: Mitigation: Clear CPU buffers Sep 13 01:35:29.015579 kernel: MMIO Stale Data: Unknown: No mitigations Sep 13 01:35:29.015590 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 13 01:35:29.015602 kernel: active return thunk: its_return_thunk Sep 13 01:35:29.015614 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 13 01:35:29.015626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 13 01:35:29.015638 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 13 01:35:29.015650 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 13 01:35:29.015666 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 13 01:35:29.015678 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 13 01:35:29.015690 kernel: Freeing SMP alternatives memory: 32K Sep 13 01:35:29.015702 kernel: pid_max: default: 32768 minimum: 301 Sep 13 01:35:29.015714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 01:35:29.015726 kernel: landlock: Up and running. Sep 13 01:35:29.015738 kernel: SELinux: Initializing. 
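
[Editor's aside, not part of the boot log] The mitigation lines above (Spectre V1/V2, Speculative Store Bypass, MDS, MMIO Stale Data, SRBDS, ITS) are summarised after boot under sysfs. A hedged illustration, assuming nothing beyond the standard /sys/devices/system/cpu/vulnerabilities directory, that reads the same status strings back:

# Illustrative sketch: list the CPU vulnerability/mitigation summary the kernel
# exposes under sysfs (same facts as the Spectre/MDS lines in the log above).
import os

VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

def read_mitigations(path=VULN_DIR):
    result = {}
    if not os.path.isdir(path):
        return result  # very old kernels lack this directory
    for name in sorted(os.listdir(path)):
        with open(os.path.join(path, name)) as f:
            result[name] = f.read().strip()
    return result

if __name__ == "__main__":
    for vuln, status in read_mitigations().items():
        print(f"{vuln}: {status}")
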
Sep 13 01:35:29.015776 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 01:35:29.015790 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 13 01:35:29.015802 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Sep 13 01:35:29.015814 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 01:35:29.015832 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 01:35:29.015844 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 13 01:35:29.015935 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Sep 13 01:35:29.015951 kernel: signal: max sigframe size: 1776 Sep 13 01:35:29.015963 kernel: rcu: Hierarchical SRCU implementation. Sep 13 01:35:29.015976 kernel: rcu: Max phase no-delay instances is 400. Sep 13 01:35:29.015988 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 13 01:35:29.016000 kernel: smp: Bringing up secondary CPUs ... Sep 13 01:35:29.016012 kernel: smpboot: x86: Booting SMP configuration: Sep 13 01:35:29.016030 kernel: .... node #0, CPUs: #1 Sep 13 01:35:29.016042 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 13 01:35:29.016055 kernel: smp: Brought up 1 node, 2 CPUs Sep 13 01:35:29.016067 kernel: smpboot: Max logical packages: 16 Sep 13 01:35:29.016079 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS) Sep 13 01:35:29.016104 kernel: devtmpfs: initialized Sep 13 01:35:29.016115 kernel: x86/mm: Memory block size: 128MB Sep 13 01:35:29.016127 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 01:35:29.016139 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 13 01:35:29.016167 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 01:35:29.016178 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 01:35:29.016190 kernel: audit: initializing netlink subsys (disabled) Sep 13 01:35:29.016201 kernel: audit: type=2000 audit(1757727327.689:1): state=initialized audit_enabled=0 res=1 Sep 13 01:35:29.016212 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 01:35:29.016236 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 13 01:35:29.016247 kernel: cpuidle: using governor menu Sep 13 01:35:29.016258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 01:35:29.016272 kernel: dca service started, version 1.12.1 Sep 13 01:35:29.016299 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 13 01:35:29.016312 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 13 01:35:29.016323 kernel: PCI: Using configuration type 1 for base access Sep 13 01:35:29.016335 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 13 01:35:29.016360 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 01:35:29.016372 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 01:35:29.016384 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 01:35:29.016396 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 01:35:29.016408 kernel: ACPI: Added _OSI(Module Device) Sep 13 01:35:29.016424 kernel: ACPI: Added _OSI(Processor Device) Sep 13 01:35:29.016436 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 01:35:29.016449 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 01:35:29.016461 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 13 01:35:29.016473 kernel: ACPI: Interpreter enabled Sep 13 01:35:29.016485 kernel: ACPI: PM: (supports S0 S5) Sep 13 01:35:29.016497 kernel: ACPI: Using IOAPIC for interrupt routing Sep 13 01:35:29.016509 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 13 01:35:29.016521 kernel: PCI: Using E820 reservations for host bridge windows Sep 13 01:35:29.016557 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 13 01:35:29.016571 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 01:35:29.016842 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 01:35:29.017057 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 01:35:29.017222 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 01:35:29.017241 kernel: PCI host bridge to bus 0000:00 Sep 13 01:35:29.017451 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 13 01:35:29.017613 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 13 01:35:29.017773 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 13 01:35:29.017938 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Sep 13 01:35:29.018086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 13 01:35:29.018233 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Sep 13 01:35:29.018380 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 01:35:29.018572 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 13 01:35:29.018797 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Sep 13 01:35:29.020120 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Sep 13 01:35:29.020288 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Sep 13 01:35:29.021483 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Sep 13 01:35:29.021660 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 13 01:35:29.021910 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.022089 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Sep 13 01:35:29.022280 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.022457 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Sep 13 01:35:29.022646 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.022827 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Sep 13 01:35:29.025097 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.025272 kernel: pci 0000:00:02.3: reg 0x10: [mem 
0xfea54000-0xfea54fff] Sep 13 01:35:29.025477 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.025647 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Sep 13 01:35:29.026927 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.027117 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Sep 13 01:35:29.027320 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.027518 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Sep 13 01:35:29.027716 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 13 01:35:29.028439 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Sep 13 01:35:29.028686 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Sep 13 01:35:29.028894 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 13 01:35:29.029095 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Sep 13 01:35:29.029289 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Sep 13 01:35:29.029460 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Sep 13 01:35:29.029653 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 13 01:35:29.029836 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 13 01:35:29.032098 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Sep 13 01:35:29.032271 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Sep 13 01:35:29.032577 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 13 01:35:29.032811 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 13 01:35:29.034040 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 13 01:35:29.034231 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Sep 13 01:35:29.034406 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Sep 13 01:35:29.034605 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 13 01:35:29.034784 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 13 01:35:29.036041 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Sep 13 01:35:29.036254 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Sep 13 01:35:29.036436 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 13 01:35:29.036989 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 13 01:35:29.037158 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 01:35:29.037344 kernel: pci_bus 0000:02: extended config space not accessible Sep 13 01:35:29.037555 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Sep 13 01:35:29.037740 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Sep 13 01:35:29.038986 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 13 01:35:29.039171 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 13 01:35:29.039388 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 13 01:35:29.039562 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Sep 13 01:35:29.039727 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 13 01:35:29.040949 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 13 01:35:29.041155 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 01:35:29.041366 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 13 01:35:29.041539 kernel: pci 0000:04:00.0: reg 0x20: [mem 
0xfca00000-0xfca03fff 64bit pref] Sep 13 01:35:29.041705 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 13 01:35:29.043442 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 13 01:35:29.043612 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 01:35:29.043815 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 13 01:35:29.045037 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 13 01:35:29.045230 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 01:35:29.045425 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 13 01:35:29.045585 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 13 01:35:29.045756 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 01:35:29.045942 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 13 01:35:29.046142 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 13 01:35:29.046327 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 01:35:29.046550 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 13 01:35:29.046724 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 13 01:35:29.048935 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 01:35:29.049107 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 13 01:35:29.049278 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 13 01:35:29.049447 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 01:35:29.049466 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 13 01:35:29.049479 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 13 01:35:29.049503 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 13 01:35:29.049515 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 13 01:35:29.049535 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 13 01:35:29.049560 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 13 01:35:29.049572 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 13 01:35:29.049584 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 13 01:35:29.049596 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 13 01:35:29.049607 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 13 01:35:29.049619 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 13 01:35:29.049631 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 13 01:35:29.049643 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 13 01:35:29.049659 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 13 01:35:29.049671 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 13 01:35:29.049683 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 13 01:35:29.049695 kernel: iommu: Default domain type: Translated Sep 13 01:35:29.049707 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 13 01:35:29.049719 kernel: PCI: Using ACPI for IRQ routing Sep 13 01:35:29.049753 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 13 01:35:29.049768 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 13 01:35:29.049781 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Sep 13 01:35:29.051982 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 13 01:35:29.052172 kernel: pci 
0000:00:01.0: vgaarb: bridge control possible Sep 13 01:35:29.052339 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Sep 13 01:35:29.052358 kernel: vgaarb: loaded Sep 13 01:35:29.052371 kernel: clocksource: Switched to clocksource kvm-clock Sep 13 01:35:29.052384 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 01:35:29.052396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 01:35:29.052408 kernel: pnp: PnP ACPI init Sep 13 01:35:29.052634 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 13 01:35:29.052656 kernel: pnp: PnP ACPI: found 5 devices Sep 13 01:35:29.052675 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 13 01:35:29.052688 kernel: NET: Registered PF_INET protocol family Sep 13 01:35:29.052700 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 01:35:29.052713 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 13 01:35:29.052725 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 01:35:29.052740 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 13 01:35:29.052771 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 13 01:35:29.052784 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 13 01:35:29.052796 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:35:29.052808 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 13 01:35:29.052821 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 01:35:29.052833 kernel: NET: Registered PF_XDP protocol family Sep 13 01:35:29.053015 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Sep 13 01:35:29.053179 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 13 01:35:29.053357 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 13 01:35:29.053518 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 13 01:35:29.053677 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 13 01:35:29.053873 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 13 01:35:29.054050 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 13 01:35:29.054231 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 13 01:35:29.054410 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 13 01:35:29.054589 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 13 01:35:29.054791 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 13 01:35:29.056998 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 13 01:35:29.057163 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 13 01:35:29.057332 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 13 01:35:29.057503 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 13 01:35:29.057678 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 13 01:35:29.058941 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 13 01:35:29.059123 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 13 01:35:29.059288 kernel: pci 0000:00:02.0: PCI 
bridge to [bus 01-02] Sep 13 01:35:29.059450 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 13 01:35:29.059625 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 13 01:35:29.059809 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 01:35:29.061033 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 13 01:35:29.061198 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 13 01:35:29.061372 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 13 01:35:29.061541 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 01:35:29.061711 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 13 01:35:29.063578 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 13 01:35:29.063785 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 13 01:35:29.063996 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 01:35:29.064192 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 13 01:35:29.064364 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 13 01:35:29.064556 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 13 01:35:29.064728 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 01:35:29.064929 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 13 01:35:29.065093 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 13 01:35:29.065255 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 13 01:35:29.065430 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 01:35:29.065619 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 13 01:35:29.065803 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 13 01:35:29.067996 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 13 01:35:29.068164 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 01:35:29.068331 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 13 01:35:29.068518 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 13 01:35:29.068705 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 13 01:35:29.068923 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 01:35:29.069092 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 13 01:35:29.069258 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 13 01:35:29.069422 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 13 01:35:29.069610 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 01:35:29.069780 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 13 01:35:29.071972 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 13 01:35:29.072156 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 13 01:35:29.072389 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Sep 13 01:35:29.072565 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 13 01:35:29.072727 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Sep 13 01:35:29.072940 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 13 01:35:29.073103 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Sep 13 01:35:29.073271 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 13 
01:35:29.073476 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 13 01:35:29.073710 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Sep 13 01:35:29.075973 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Sep 13 01:35:29.076174 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 13 01:35:29.076381 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Sep 13 01:35:29.076562 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 13 01:35:29.076792 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 13 01:35:29.077027 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 13 01:35:29.077182 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 13 01:35:29.077335 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 13 01:35:29.077526 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 13 01:35:29.077702 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 13 01:35:29.078976 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 13 01:35:29.079160 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Sep 13 01:35:29.079330 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 13 01:35:29.079483 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 13 01:35:29.079664 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 13 01:35:29.079883 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 13 01:35:29.080058 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 13 01:35:29.080231 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 13 01:35:29.080385 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 13 01:35:29.080543 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 13 01:35:29.080563 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 13 01:35:29.080589 kernel: PCI: CLS 0 bytes, default 64 Sep 13 01:35:29.080602 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 13 01:35:29.080621 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 13 01:35:29.080647 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 13 01:35:29.080661 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 13 01:35:29.080674 kernel: Initialise system trusted keyrings Sep 13 01:35:29.080691 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 13 01:35:29.080704 kernel: Key type asymmetric registered Sep 13 01:35:29.080717 kernel: Asymmetric key parser 'x509' registered Sep 13 01:35:29.080730 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 13 01:35:29.080773 kernel: io scheduler mq-deadline registered Sep 13 01:35:29.080792 kernel: io scheduler kyber registered Sep 13 01:35:29.080805 kernel: io scheduler bfq registered Sep 13 01:35:29.080996 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 13 01:35:29.081163 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 13 01:35:29.081334 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.081498 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 13 01:35:29.081662 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 
13 01:35:29.081839 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.082090 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 13 01:35:29.082289 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 13 01:35:29.082459 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.082634 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 13 01:35:29.082838 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 13 01:35:29.083045 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.083247 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 13 01:35:29.083417 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 13 01:35:29.083586 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.083785 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 13 01:35:29.083987 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 13 01:35:29.084202 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.084397 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 13 01:35:29.084557 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 13 01:35:29.084725 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.084927 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 13 01:35:29.085122 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 13 01:35:29.085284 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 13 01:35:29.085305 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 13 01:35:29.085319 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 13 01:35:29.085340 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 13 01:35:29.085353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 01:35:29.085366 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 13 01:35:29.085387 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 13 01:35:29.085401 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 13 01:35:29.085414 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 13 01:35:29.085652 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 13 01:35:29.085674 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 13 01:35:29.085848 kernel: rtc_cmos 00:03: registered as rtc0 Sep 13 01:35:29.086030 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T01:35:28 UTC (1757727328) Sep 13 01:35:29.086194 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 13 01:35:29.086214 kernel: intel_pstate: CPU model not supported Sep 13 01:35:29.086227 kernel: NET: Registered PF_INET6 protocol family Sep 13 01:35:29.086240 kernel: Segment Routing with IPv6 Sep 13 01:35:29.086253 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 01:35:29.086275 kernel: NET: Registered 
PF_PACKET protocol family Sep 13 01:35:29.086288 kernel: Key type dns_resolver registered Sep 13 01:35:29.086307 kernel: IPI shorthand broadcast: enabled Sep 13 01:35:29.086320 kernel: sched_clock: Marking stable (1137014795, 221207673)->(1598789450, -240566982) Sep 13 01:35:29.086333 kernel: registered taskstats version 1 Sep 13 01:35:29.086346 kernel: Loading compiled-in X.509 certificates Sep 13 01:35:29.086359 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6' Sep 13 01:35:29.086376 kernel: Key type .fscrypt registered Sep 13 01:35:29.086389 kernel: Key type fscrypt-provisioning registered Sep 13 01:35:29.086407 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 13 01:35:29.086420 kernel: ima: Allocated hash algorithm: sha1 Sep 13 01:35:29.086437 kernel: ima: No architecture policies found Sep 13 01:35:29.086450 kernel: clk: Disabling unused clocks Sep 13 01:35:29.086464 kernel: Freeing unused kernel image (initmem) memory: 42884K Sep 13 01:35:29.086476 kernel: Write protecting the kernel read-only data: 36864k Sep 13 01:35:29.086502 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 13 01:35:29.086515 kernel: Run /init as init process Sep 13 01:35:29.086537 kernel: with arguments: Sep 13 01:35:29.086550 kernel: /init Sep 13 01:35:29.086601 kernel: with environment: Sep 13 01:35:29.086621 kernel: HOME=/ Sep 13 01:35:29.086633 kernel: TERM=linux Sep 13 01:35:29.086646 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 01:35:29.086662 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 01:35:29.086677 systemd[1]: Detected virtualization kvm. Sep 13 01:35:29.086691 systemd[1]: Detected architecture x86-64. Sep 13 01:35:29.086704 systemd[1]: Running in initrd. Sep 13 01:35:29.086718 systemd[1]: No hostname configured, using default hostname. Sep 13 01:35:29.086736 systemd[1]: Hostname set to . Sep 13 01:35:29.086762 systemd[1]: Initializing machine ID from VM UUID. Sep 13 01:35:29.086776 systemd[1]: Queued start job for default target initrd.target. Sep 13 01:35:29.086790 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 01:35:29.086804 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 01:35:29.086818 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 13 01:35:29.086832 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 01:35:29.086845 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 01:35:29.086889 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 01:35:29.086905 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 01:35:29.086919 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 01:35:29.086932 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
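
[Editor's aside, not part of the boot log] The initrd units above block until udev publishes the EFI-SYSTEM, OEM and ROOT labels, the USR-A partition label and the USR-A partuuid. Purely as an illustration (the alias list is copied from the log; the resolution logic is assumed and is not Flatcar's own tooling), resolving those /dev/disk/by-* symlinks to their backing block devices looks like this:

# Rough sketch: resolve the /dev/disk/by-* aliases the initrd units are waiting
# for, mapping each one to its backing block device once udev has created it.
import os

ALIASES = [
    "/dev/disk/by-label/ROOT",
    "/dev/disk/by-label/OEM",
    "/dev/disk/by-label/EFI-SYSTEM",
    "/dev/disk/by-partlabel/USR-A",
    "/dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132",
]

def resolve(alias):
    # os.path.realpath follows the udev-created symlink to the real device node
    return os.path.realpath(alias) if os.path.lexists(alias) else None

if __name__ == "__main__":
    for alias in ALIASES:
        print(f"{alias} -> {resolve(alias) or 'not present yet'}")
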
Sep 13 01:35:29.086946 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 01:35:29.086960 systemd[1]: Reached target paths.target - Path Units. Sep 13 01:35:29.086979 systemd[1]: Reached target slices.target - Slice Units. Sep 13 01:35:29.086993 systemd[1]: Reached target swap.target - Swaps. Sep 13 01:35:29.087007 systemd[1]: Reached target timers.target - Timer Units. Sep 13 01:35:29.087020 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 01:35:29.087034 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 01:35:29.087047 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 01:35:29.087061 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 01:35:29.087075 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 01:35:29.087089 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 01:35:29.087113 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 01:35:29.087127 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 01:35:29.087141 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 01:35:29.087154 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 01:35:29.087168 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 01:35:29.087191 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 01:35:29.087204 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 01:35:29.087218 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 01:35:29.087232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:35:29.087257 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 01:35:29.087318 systemd-journald[202]: Collecting audit messages is disabled. Sep 13 01:35:29.087349 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 01:35:29.087364 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 01:35:29.087389 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 01:35:29.087403 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 13 01:35:29.087416 kernel: Bridge firewalling registered Sep 13 01:35:29.087437 systemd-journald[202]: Journal started Sep 13 01:35:29.087467 systemd-journald[202]: Runtime Journal (/run/log/journal/230fb8ba9bdf496781317a98bc7e4c71) is 4.7M, max 38.0M, 33.2M free. Sep 13 01:35:29.030991 systemd-modules-load[203]: Inserted module 'overlay' Sep 13 01:35:29.077638 systemd-modules-load[203]: Inserted module 'br_netfilter' Sep 13 01:35:29.131065 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 01:35:29.132339 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 01:35:29.133312 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:35:29.134808 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 01:35:29.147119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:35:29.148838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Sep 13 01:35:29.156023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 01:35:29.159020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 01:35:29.176663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:35:29.182841 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 01:35:29.186167 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 01:35:29.187255 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:35:29.195148 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 01:35:29.199022 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 01:35:29.212423 dracut-cmdline[237]: dracut-dracut-053 Sep 13 01:35:29.219141 dracut-cmdline[237]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534 Sep 13 01:35:29.243397 systemd-resolved[239]: Positive Trust Anchors: Sep 13 01:35:29.243416 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:35:29.243464 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 01:35:29.248674 systemd-resolved[239]: Defaulting to hostname 'linux'. Sep 13 01:35:29.251085 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 01:35:29.253041 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 01:35:29.330936 kernel: SCSI subsystem initialized Sep 13 01:35:29.342896 kernel: Loading iSCSI transport class v2.0-870. Sep 13 01:35:29.354886 kernel: iscsi: registered transport (tcp) Sep 13 01:35:29.380041 kernel: iscsi: registered transport (qla4xxx) Sep 13 01:35:29.380110 kernel: QLogic iSCSI HBA Driver Sep 13 01:35:29.434404 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 01:35:29.441053 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 01:35:29.472902 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 01:35:29.475154 kernel: device-mapper: uevent: version 1.0.3 Sep 13 01:35:29.475197 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 01:35:29.523898 kernel: raid6: sse2x4 gen() 13976 MB/s Sep 13 01:35:29.540904 kernel: raid6: sse2x2 gen() 9565 MB/s Sep 13 01:35:29.559445 kernel: raid6: sse2x1 gen() 9868 MB/s Sep 13 01:35:29.559529 kernel: raid6: using algorithm sse2x4 gen() 13976 MB/s Sep 13 01:35:29.578512 kernel: raid6: .... xor() 7692 MB/s, rmw enabled Sep 13 01:35:29.578605 kernel: raid6: using ssse3x2 recovery algorithm Sep 13 01:35:29.603882 kernel: xor: automatically using best checksumming function avx Sep 13 01:35:29.791889 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 01:35:29.805846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 01:35:29.813046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 01:35:29.839833 systemd-udevd[422]: Using default interface naming scheme 'v255'. Sep 13 01:35:29.846951 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 01:35:29.855109 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 01:35:29.876982 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Sep 13 01:35:29.917487 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 01:35:29.924061 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 01:35:30.038107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 01:35:30.046255 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 13 01:35:30.083087 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 13 01:35:30.086561 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 01:35:30.088500 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 01:35:30.090164 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 01:35:30.097044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 13 01:35:30.125789 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 13 01:35:30.170221 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 13 01:35:30.180878 kernel: cryptd: max_cpu_qlen set to 1000 Sep 13 01:35:30.193877 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 13 01:35:30.202909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:35:30.203185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:35:30.204551 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:35:30.225678 kernel: libata version 3.00 loaded. Sep 13 01:35:30.225742 kernel: AVX version of gcm_enc/dec engaged. Sep 13 01:35:30.205322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:35:30.232523 kernel: AES CTR mode by8 optimization enabled Sep 13 01:35:30.205494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 13 01:35:30.266231 kernel: ahci 0000:00:1f.2: version 3.0 Sep 13 01:35:30.266549 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 13 01:35:30.266591 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 13 01:35:30.267504 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 13 01:35:30.267679 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 13 01:35:30.267697 kernel: GPT:17805311 != 125829119 Sep 13 01:35:30.267738 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 13 01:35:30.267757 kernel: GPT:17805311 != 125829119 Sep 13 01:35:30.267773 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 13 01:35:30.267789 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:35:30.267813 kernel: scsi host0: ahci Sep 13 01:35:30.268043 kernel: scsi host1: ahci Sep 13 01:35:30.268235 kernel: scsi host2: ahci Sep 13 01:35:30.268456 kernel: scsi host3: ahci Sep 13 01:35:30.268668 kernel: scsi host4: ahci Sep 13 01:35:30.270531 kernel: scsi host5: ahci Sep 13 01:35:30.271216 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Sep 13 01:35:30.271237 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Sep 13 01:35:30.271264 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Sep 13 01:35:30.208126 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:35:30.283209 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Sep 13 01:35:30.283236 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Sep 13 01:35:30.283255 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Sep 13 01:35:30.226550 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:35:30.294482 kernel: ACPI: bus type USB registered Sep 13 01:35:30.294527 kernel: usbcore: registered new interface driver usbfs Sep 13 01:35:30.296559 kernel: usbcore: registered new interface driver hub Sep 13 01:35:30.296595 kernel: usbcore: registered new device driver usb Sep 13 01:35:30.313893 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (468) Sep 13 01:35:30.329787 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 13 01:35:30.389430 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Sep 13 01:35:30.390584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:35:30.397142 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 13 01:35:30.397996 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 13 01:35:30.421166 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 13 01:35:30.427983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 01:35:30.434039 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 13 01:35:30.436375 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 01:35:30.446089 disk-uuid[559]: Primary Header is updated. Sep 13 01:35:30.446089 disk-uuid[559]: Secondary Entries is updated. 
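
[Editor's aside, not part of the boot log] The GPT warnings above are the usual sign of an image built for a smaller disk now sitting on a 64 GiB virtual volume: the backup header is still at sector 17805311 while the device ends at sector 125829119. Flatcar's own first-boot tooling normally takes care of the partition table; what follows is only a hedged sketch of a manual check and repair with sgdisk (the device name /dev/vda is taken from the log, everything else is an assumption):

# Hedged sketch: inspect and, if needed, relocate the backup GPT header that the
# kernel complains about above, using sgdisk from gptfdisk. Flatcar normally
# repairs and grows the partition table itself on first boot; this is only an
# illustration of the manual equivalent.
import subprocess

DISK = "/dev/vda"  # device name taken from the log above

def report(disk=DISK):
    # 'sgdisk -v' verifies the partition table and reports problems, read-only
    subprocess.run(["sgdisk", "-v", disk])

def move_backup_header(disk=DISK):
    # 'sgdisk -e' relocates the backup GPT data structures to the end of the
    # disk, clearing the "Alt. header is not at the end of the disk" warning
    subprocess.run(["sgdisk", "-e", disk], check=True)

if __name__ == "__main__":
    report()
    # uncomment after reviewing the report:
    # move_backup_header()
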
Sep 13 01:35:30.446089 disk-uuid[559]: Secondary Header is updated. Sep 13 01:35:30.453873 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:35:30.464876 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:35:30.470666 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:35:30.589562 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.589626 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.589667 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.589690 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.591409 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.593088 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 13 01:35:30.641879 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 01:35:30.644965 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 13 01:35:30.647884 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 13 01:35:30.652000 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 13 01:35:30.652279 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 13 01:35:30.652496 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 13 01:35:30.656284 kernel: hub 1-0:1.0: USB hub found Sep 13 01:35:30.656557 kernel: hub 1-0:1.0: 4 ports detected Sep 13 01:35:30.656818 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 13 01:35:30.660193 kernel: hub 2-0:1.0: USB hub found Sep 13 01:35:30.660494 kernel: hub 2-0:1.0: 4 ports detected Sep 13 01:35:30.900986 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 13 01:35:31.041903 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 01:35:31.048001 kernel: usbcore: registered new interface driver usbhid Sep 13 01:35:31.048069 kernel: usbhid: USB HID core driver Sep 13 01:35:31.055164 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 13 01:35:31.055212 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 13 01:35:31.466217 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 13 01:35:31.467998 disk-uuid[560]: The operation has completed successfully. Sep 13 01:35:31.517232 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 13 01:35:31.517411 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 13 01:35:31.540091 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 13 01:35:31.553127 sh[585]: Success Sep 13 01:35:31.570898 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 13 01:35:31.630368 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 13 01:35:31.647022 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 13 01:35:31.648826 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
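The GPT warnings above ("GPT:17805311 != 125829119") mean the backup GPT header baked into the image sits at LBA 17805311, while the 64.4 GB virtual disk actually ends at LBA 125829119; disk-uuid.service then rewrites the headers, which is why the partition table is re-read and "The operation has completed successfully" follows. A rough sketch of that consistency check, assuming the 512-byte logical blocks reported by virtio_blk above:

    # Check whether a GPT alternate (backup) header sits in the disk's last LBA,
    # the test behind "Alternate GPT header not at the end of the disk" above.
    SECTOR_SIZE = 512   # logical block size reported by virtio_blk

    def alternate_header_ok(alternate_lba: int, disk_bytes: int) -> bool:
        last_lba = disk_bytes // SECTOR_SIZE - 1   # backup header must occupy the last LBA
        return alternate_lba == last_lba

    # Values from the log: backup header at LBA 17805311 on a 125829120-sector disk.
    print(alternate_header_ok(17805311, 125829120 * SECTOR_SIZE))   # -> False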
Sep 13 01:35:31.669890 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa Sep 13 01:35:31.669948 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:35:31.671953 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 13 01:35:31.674170 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 13 01:35:31.676784 kernel: BTRFS info (device dm-0): using free space tree Sep 13 01:35:31.687002 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 13 01:35:31.688515 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 13 01:35:31.694040 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 13 01:35:31.699207 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 13 01:35:31.710283 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:35:31.710330 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:35:31.712022 kernel: BTRFS info (device vda6): using free space tree Sep 13 01:35:31.719030 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 01:35:31.730308 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 13 01:35:31.733168 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:35:31.739876 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 13 01:35:31.748776 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 13 01:35:31.832196 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 01:35:31.843096 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 01:35:31.882833 systemd-networkd[767]: lo: Link UP Sep 13 01:35:31.882847 systemd-networkd[767]: lo: Gained carrier Sep 13 01:35:31.886063 systemd-networkd[767]: Enumeration completed Sep 13 01:35:31.886199 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 01:35:31.888107 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:35:31.888119 systemd-networkd[767]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:35:31.889502 systemd[1]: Reached target network.target - Network. Sep 13 01:35:31.891148 systemd-networkd[767]: eth0: Link UP Sep 13 01:35:31.891154 systemd-networkd[767]: eth0: Gained carrier Sep 13 01:35:31.891175 systemd-networkd[767]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Sep 13 01:35:31.904972 systemd-networkd[767]: eth0: DHCPv4 address 10.230.67.162/30, gateway 10.230.67.161 acquired from 10.230.67.161 Sep 13 01:35:31.910778 ignition[675]: Ignition 2.19.0 Sep 13 01:35:31.911735 ignition[675]: Stage: fetch-offline Sep 13 01:35:31.912489 ignition[675]: no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:31.913267 ignition[675]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:31.914117 ignition[675]: parsed url from cmdline: "" Sep 13 01:35:31.914124 ignition[675]: no config URL provided Sep 13 01:35:31.914135 ignition[675]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:35:31.914151 ignition[675]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:35:31.916017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 01:35:31.914159 ignition[675]: failed to fetch config: resource requires networking Sep 13 01:35:31.914743 ignition[675]: Ignition finished successfully Sep 13 01:35:31.930008 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Sep 13 01:35:31.950288 ignition[777]: Ignition 2.19.0 Sep 13 01:35:31.950308 ignition[777]: Stage: fetch Sep 13 01:35:31.950568 ignition[777]: no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:31.950588 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:31.950757 ignition[777]: parsed url from cmdline: "" Sep 13 01:35:31.950764 ignition[777]: no config URL provided Sep 13 01:35:31.950773 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Sep 13 01:35:31.950789 ignition[777]: no config at "/usr/lib/ignition/user.ign" Sep 13 01:35:31.950984 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 13 01:35:31.951013 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 13 01:35:31.951026 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Sep 13 01:35:31.968662 ignition[777]: GET result: OK Sep 13 01:35:31.969667 ignition[777]: parsing config with SHA512: 14819219d5e22846b4b7df9f537549defe3e279e64db5b132be8db7742536769d6024876377bf83d7e05ef9270dee51115ff5975278c4c22a9cc5eac70fe848f Sep 13 01:35:31.978192 unknown[777]: fetched base config from "system" Sep 13 01:35:31.979374 ignition[777]: fetch: fetch complete Sep 13 01:35:31.978209 unknown[777]: fetched base config from "system" Sep 13 01:35:31.979385 ignition[777]: fetch: fetch passed Sep 13 01:35:31.978241 unknown[777]: fetched user config from "openstack" Sep 13 01:35:31.979539 ignition[777]: Ignition finished successfully Sep 13 01:35:31.983899 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 13 01:35:31.992069 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 13 01:35:32.016765 ignition[784]: Ignition 2.19.0 Sep 13 01:35:32.016787 ignition[784]: Stage: kargs Sep 13 01:35:32.017143 ignition[784]: no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:32.017162 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:32.021055 ignition[784]: kargs: kargs passed Sep 13 01:35:32.022475 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 13 01:35:32.021128 ignition[784]: Ignition finished successfully Sep 13 01:35:32.030051 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
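In the fetch stage above, Ignition finds neither a config drive nor a kernel-command-line URL, so it queries the OpenStack metadata service and logs the SHA-512 of the user data it received. A hedged sketch of that fetch-and-digest step; the URL and digest wording come from the log, but the code is only an illustration (Ignition itself is a Go binary):

    # Fetch user data from the OpenStack metadata service and print its SHA-512,
    # mirroring the GET and "parsing config with SHA512: ..." lines above.
    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def fetch_user_data(url: str = USER_DATA_URL) -> bytes:
        # The link-local address is only reachable from inside the instance.
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read()

    data = fetch_user_data()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())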
Sep 13 01:35:32.050169 ignition[790]: Ignition 2.19.0 Sep 13 01:35:32.050187 ignition[790]: Stage: disks Sep 13 01:35:32.050425 ignition[790]: no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:32.050444 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:32.055377 ignition[790]: disks: disks passed Sep 13 01:35:32.055445 ignition[790]: Ignition finished successfully Sep 13 01:35:32.056738 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 13 01:35:32.058362 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 13 01:35:32.059383 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 13 01:35:32.061154 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 01:35:32.062750 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 01:35:32.064151 systemd[1]: Reached target basic.target - Basic System. Sep 13 01:35:32.071052 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 13 01:35:32.091050 systemd-fsck[798]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 13 01:35:32.094236 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 13 01:35:32.099984 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 13 01:35:32.222892 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none. Sep 13 01:35:32.224333 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 13 01:35:32.225662 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 13 01:35:32.233978 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 01:35:32.237977 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 13 01:35:32.239111 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 13 01:35:32.240821 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Sep 13 01:35:32.242942 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 13 01:35:32.242978 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 01:35:32.252221 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 13 01:35:32.268568 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (806) Sep 13 01:35:32.268601 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:35:32.268627 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:35:32.268646 kernel: BTRFS info (device vda6): using free space tree Sep 13 01:35:32.268663 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 01:35:32.266358 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 13 01:35:32.279088 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 13 01:35:32.334881 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Sep 13 01:35:32.343775 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Sep 13 01:35:32.351886 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Sep 13 01:35:32.363239 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Sep 13 01:35:32.465071 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 13 01:35:32.476054 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 13 01:35:32.479052 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 13 01:35:32.489898 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:35:32.522968 ignition[924]: INFO : Ignition 2.19.0 Sep 13 01:35:32.524993 ignition[924]: INFO : Stage: mount Sep 13 01:35:32.524993 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:32.524993 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:32.526064 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 13 01:35:32.530427 ignition[924]: INFO : mount: mount passed Sep 13 01:35:32.530427 ignition[924]: INFO : Ignition finished successfully Sep 13 01:35:32.528937 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 13 01:35:32.668653 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 13 01:35:33.497846 systemd-networkd[767]: eth0: Gained IPv6LL Sep 13 01:35:35.005567 systemd-networkd[767]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90e8:24:19ff:fee6:43a2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90e8:24:19ff:fee6:43a2/64 assigned by NDisc. Sep 13 01:35:35.005583 systemd-networkd[767]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 13 01:35:39.415266 coreos-metadata[808]: Sep 13 01:35:39.415 WARN failed to locate config-drive, using the metadata service API instead Sep 13 01:35:39.439562 coreos-metadata[808]: Sep 13 01:35:39.439 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 13 01:35:39.462415 coreos-metadata[808]: Sep 13 01:35:39.462 INFO Fetch successful Sep 13 01:35:39.463323 coreos-metadata[808]: Sep 13 01:35:39.462 INFO wrote hostname srv-bbx8z.gb1.brightbox.com to /sysroot/etc/hostname Sep 13 01:35:39.466043 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 13 01:35:39.466274 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Sep 13 01:35:39.474016 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 13 01:35:39.500097 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 13 01:35:39.511879 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Sep 13 01:35:39.515151 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417 Sep 13 01:35:39.515184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 13 01:35:39.516987 kernel: BTRFS info (device vda6): using free space tree Sep 13 01:35:39.523901 kernel: BTRFS info (device vda6): auto enabling async discard Sep 13 01:35:39.525257 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
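flatcar-openstack-hostname above cannot locate a config drive either, so it falls back to the metadata API, fetches the hostname, and writes it to /sysroot/etc/hostname. A minimal sketch of those two steps; the URL and target path are taken from the log, while the helper itself is hypothetical and not the coreos-metadata binary:

    # Fetch the instance hostname and persist it under the not-yet-switched root,
    # as the coreos-metadata lines above describe. Illustrative only.
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"
    TARGET = "/sysroot/etc/hostname"   # path reported in the log

    def write_hostname(url: str = HOSTNAME_URL, target: str = TARGET) -> str:
        with urllib.request.urlopen(url, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(target, "w") as f:   # requires /sysroot to be mounted read-write
            f.write(hostname + "\n")
        return hostname

    print("wrote hostname", write_hostname(), "to", TARGET)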
Sep 13 01:35:39.562263 ignition[958]: INFO : Ignition 2.19.0 Sep 13 01:35:39.562263 ignition[958]: INFO : Stage: files Sep 13 01:35:39.564152 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:39.564152 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:39.564152 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Sep 13 01:35:39.566847 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 13 01:35:39.566847 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 13 01:35:39.568805 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 13 01:35:39.570095 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 13 01:35:39.571253 unknown[958]: wrote ssh authorized keys file for user: core Sep 13 01:35:39.572288 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 13 01:35:39.574008 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 01:35:39.575311 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Sep 13 01:35:39.892922 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 13 01:35:40.321810 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Sep 13 01:35:40.321810 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:35:40.321810 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 13 01:35:40.535608 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 13 01:35:41.090606 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 13 01:35:41.093058 
ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:35:41.093058 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 13 01:35:41.335732 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 13 01:35:43.290403 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 13 01:35:43.290403 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 13 01:35:43.295988 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:35:43.297294 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 13 01:35:43.297294 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 13 01:35:43.297294 ignition[958]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 13 01:35:43.297294 ignition[958]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 13 01:35:43.297294 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:35:43.297294 ignition[958]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 13 01:35:43.297294 ignition[958]: INFO : files: files passed Sep 13 01:35:43.309317 ignition[958]: INFO : Ignition finished successfully Sep 13 01:35:43.300424 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 13 01:35:43.311156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 13 01:35:43.317580 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 13 01:35:43.321227 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 13 01:35:43.321389 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
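Besides downloading the Helm, Cilium, and Kubernetes artifacts, the files stage above creates the link /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw so that systemd-sysext will merge the image on the real root (the merge appears later, around 01:35:45). A small sketch of the effect of that link operation, using the exact paths from the log; this is not Ignition's own implementation:

    # Recreate the symlink reported by the Ignition files stage above. Illustrative only.
    import os

    SYSROOT = "/sysroot"
    LINK = os.path.join(SYSROOT, "etc/extensions/kubernetes.raw")
    TARGET = "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"  # relative to the new root

    def write_link(link: str = LINK, target: str = TARGET) -> None:
        os.makedirs(os.path.dirname(link), exist_ok=True)  # ensure etc/extensions exists
        if os.path.lexists(link):                          # drop any stale entry first
            os.remove(link)
        os.symlink(target, link)

    write_link()
    print(LINK, "->", os.readlink(LINK))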
Sep 13 01:35:43.346642 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:35:43.346642 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:35:43.349737 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 13 01:35:43.351211 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 01:35:43.352625 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 13 01:35:43.359112 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 13 01:35:43.388926 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 13 01:35:43.389124 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 13 01:35:43.391202 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 13 01:35:43.392282 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 13 01:35:43.393731 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 13 01:35:43.403518 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 13 01:35:43.421259 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 01:35:43.429143 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 13 01:35:43.442637 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 13 01:35:43.444471 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 01:35:43.446132 systemd[1]: Stopped target timers.target - Timer Units. Sep 13 01:35:43.447177 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 13 01:35:43.447419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 13 01:35:43.449031 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 13 01:35:43.449957 systemd[1]: Stopped target basic.target - Basic System. Sep 13 01:35:43.451447 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 13 01:35:43.452773 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 13 01:35:43.454151 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 13 01:35:43.455666 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 13 01:35:43.457158 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 13 01:35:43.458700 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 13 01:35:43.460182 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 13 01:35:43.461681 systemd[1]: Stopped target swap.target - Swaps. Sep 13 01:35:43.463048 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 13 01:35:43.463235 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 13 01:35:43.464972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 13 01:35:43.465968 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 01:35:43.467422 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 13 01:35:43.467607 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 13 01:35:43.469003 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 13 01:35:43.469237 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 13 01:35:43.471013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 13 01:35:43.471174 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 13 01:35:43.472743 systemd[1]: ignition-files.service: Deactivated successfully. Sep 13 01:35:43.472905 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 13 01:35:43.490234 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 13 01:35:43.492261 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 13 01:35:43.492455 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 01:35:43.498597 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 13 01:35:43.501068 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 13 01:35:43.507470 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 01:35:43.508728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 13 01:35:43.508919 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 13 01:35:43.517284 ignition[1011]: INFO : Ignition 2.19.0 Sep 13 01:35:43.517284 ignition[1011]: INFO : Stage: umount Sep 13 01:35:43.522639 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 13 01:35:43.522639 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 13 01:35:43.519478 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 13 01:35:43.529769 ignition[1011]: INFO : umount: umount passed Sep 13 01:35:43.529769 ignition[1011]: INFO : Ignition finished successfully Sep 13 01:35:43.519655 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 13 01:35:43.525261 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 13 01:35:43.525409 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 13 01:35:43.527209 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 13 01:35:43.527276 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 13 01:35:43.528946 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 13 01:35:43.529025 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 13 01:35:43.530995 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 13 01:35:43.531059 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 13 01:35:43.533098 systemd[1]: Stopped target network.target - Network. Sep 13 01:35:43.533697 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 13 01:35:43.533781 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 13 01:35:43.537128 systemd[1]: Stopped target paths.target - Path Units. Sep 13 01:35:43.538041 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 13 01:35:43.543251 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 01:35:43.544287 systemd[1]: Stopped target slices.target - Slice Units. Sep 13 01:35:43.544902 systemd[1]: Stopped target sockets.target - Socket Units. Sep 13 01:35:43.545629 systemd[1]: iscsid.socket: Deactivated successfully. Sep 13 01:35:43.545707 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Sep 13 01:35:43.547047 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 13 01:35:43.547129 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 01:35:43.548480 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 13 01:35:43.548574 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 13 01:35:43.550142 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 13 01:35:43.550207 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 13 01:35:43.551645 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 13 01:35:43.553191 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 13 01:35:43.556096 systemd-networkd[767]: eth0: DHCPv6 lease lost Sep 13 01:35:43.556102 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 13 01:35:43.556975 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 13 01:35:43.557134 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 13 01:35:43.561430 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 13 01:35:43.561630 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 13 01:35:43.564816 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 13 01:35:43.565291 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 13 01:35:43.571373 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 13 01:35:43.571482 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 13 01:35:43.573187 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 13 01:35:43.573282 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 13 01:35:43.579985 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 13 01:35:43.582210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 13 01:35:43.582291 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 13 01:35:43.583739 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:35:43.583807 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:35:43.587235 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 13 01:35:43.587304 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 13 01:35:43.589095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 13 01:35:43.589168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 01:35:43.590962 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 01:35:43.603643 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 13 01:35:43.604903 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 01:35:43.606067 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 13 01:35:43.606273 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 13 01:35:43.609339 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 13 01:35:43.609442 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 13 01:35:43.611038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 13 01:35:43.611101 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Sep 13 01:35:43.612539 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 13 01:35:43.612611 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 13 01:35:43.614720 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 13 01:35:43.614785 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 13 01:35:43.616109 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 13 01:35:43.616193 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 01:35:43.628166 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 13 01:35:43.630235 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 13 01:35:43.630332 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 01:35:43.631182 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 13 01:35:43.631277 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:35:43.637227 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 13 01:35:43.637366 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 13 01:35:43.638691 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 13 01:35:43.646091 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 13 01:35:43.655008 systemd[1]: Switching root. Sep 13 01:35:43.688886 systemd-journald[202]: Journal stopped Sep 13 01:35:45.085785 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Sep 13 01:35:45.085929 kernel: SELinux: policy capability network_peer_controls=1 Sep 13 01:35:45.085962 kernel: SELinux: policy capability open_perms=1 Sep 13 01:35:45.085982 kernel: SELinux: policy capability extended_socket_class=1 Sep 13 01:35:45.086004 kernel: SELinux: policy capability always_check_network=0 Sep 13 01:35:45.086023 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 13 01:35:45.086054 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 13 01:35:45.086071 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 13 01:35:45.086088 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 13 01:35:45.086129 kernel: audit: type=1403 audit(1757727343.930:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 13 01:35:45.086156 systemd[1]: Successfully loaded SELinux policy in 50.848ms. Sep 13 01:35:45.086188 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.140ms. Sep 13 01:35:45.086214 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 01:35:45.086235 systemd[1]: Detected virtualization kvm. Sep 13 01:35:45.086254 systemd[1]: Detected architecture x86-64. Sep 13 01:35:45.086273 systemd[1]: Detected first boot. Sep 13 01:35:45.086291 systemd[1]: Hostname set to . Sep 13 01:35:45.086326 systemd[1]: Initializing machine ID from VM UUID. Sep 13 01:35:45.086347 zram_generator::config[1053]: No configuration found. Sep 13 01:35:45.086367 systemd[1]: Populated /etc with preset unit settings. Sep 13 01:35:45.086386 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Sep 13 01:35:45.086404 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 13 01:35:45.086435 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 13 01:35:45.086453 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 13 01:35:45.086470 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 13 01:35:45.086516 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 13 01:35:45.086554 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 13 01:35:45.086582 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 13 01:35:45.086608 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 13 01:35:45.086628 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 13 01:35:45.086648 systemd[1]: Created slice user.slice - User and Session Slice. Sep 13 01:35:45.086667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 01:35:45.086686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 01:35:45.086705 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 13 01:35:45.086734 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 13 01:35:45.086756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 13 01:35:45.086776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 01:35:45.086810 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 13 01:35:45.086834 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 01:35:45.086904 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 13 01:35:45.086926 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 13 01:35:45.086960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 13 01:35:45.086980 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 13 01:35:45.086998 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 13 01:35:45.087017 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 13 01:35:45.087035 systemd[1]: Reached target slices.target - Slice Units. Sep 13 01:35:45.087067 systemd[1]: Reached target swap.target - Swaps. Sep 13 01:35:45.087086 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 13 01:35:45.087106 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 13 01:35:45.087153 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 01:35:45.087184 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 01:35:45.087206 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 01:35:45.087234 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 13 01:35:45.087254 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 13 01:35:45.087273 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Sep 13 01:35:45.087291 systemd[1]: Mounting media.mount - External Media Directory... Sep 13 01:35:45.087322 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:45.087343 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 13 01:35:45.087371 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 13 01:35:45.087391 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 13 01:35:45.087411 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 13 01:35:45.087429 systemd[1]: Reached target machines.target - Containers. Sep 13 01:35:45.087448 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 13 01:35:45.087469 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:35:45.087512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 01:35:45.087533 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 13 01:35:45.087553 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 01:35:45.087571 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 01:35:45.087590 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 01:35:45.087610 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 13 01:35:45.087630 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 01:35:45.087668 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 13 01:35:45.087700 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 13 01:35:45.087721 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 13 01:35:45.087739 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 13 01:35:45.087764 systemd[1]: Stopped systemd-fsck-usr.service. Sep 13 01:35:45.087790 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 01:35:45.087810 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 01:35:45.087829 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 13 01:35:45.087848 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 13 01:35:45.087896 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 13 01:35:45.087930 systemd[1]: verity-setup.service: Deactivated successfully. Sep 13 01:35:45.087952 systemd[1]: Stopped verity-setup.service. Sep 13 01:35:45.087972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:45.088003 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 13 01:35:45.088021 kernel: loop: module loaded Sep 13 01:35:45.088039 kernel: fuse: init (API version 7.39) Sep 13 01:35:45.088078 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 13 01:35:45.088099 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 13 01:35:45.088119 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 13 01:35:45.088152 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 13 01:35:45.088173 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 13 01:35:45.088205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 01:35:45.088225 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 13 01:35:45.088251 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 13 01:35:45.088311 systemd-journald[1139]: Collecting audit messages is disabled. Sep 13 01:35:45.088350 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:35:45.088370 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 01:35:45.088389 systemd-journald[1139]: Journal started Sep 13 01:35:45.088438 systemd-journald[1139]: Runtime Journal (/run/log/journal/230fb8ba9bdf496781317a98bc7e4c71) is 4.7M, max 38.0M, 33.2M free. Sep 13 01:35:44.722996 systemd[1]: Queued start job for default target multi-user.target. Sep 13 01:35:45.091587 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 01:35:44.744284 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 13 01:35:44.745069 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 13 01:35:45.093726 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:35:45.094892 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 01:35:45.096620 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 13 01:35:45.097072 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 13 01:35:45.098297 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:35:45.098621 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 01:35:45.099749 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 01:35:45.100988 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 13 01:35:45.102173 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 13 01:35:45.117553 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 13 01:35:45.127974 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 13 01:35:45.134997 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 13 01:35:45.137940 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 13 01:35:45.137992 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 13 01:35:45.142868 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 13 01:35:45.146896 kernel: ACPI: bus type drm_connector registered Sep 13 01:35:45.150546 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 13 01:35:45.153240 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 13 01:35:45.155187 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:35:45.163109 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Sep 13 01:35:45.167586 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 13 01:35:45.168639 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:35:45.178052 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 13 01:35:45.180003 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 01:35:45.187961 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 01:35:45.203119 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 13 01:35:45.209211 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 13 01:35:45.217806 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:35:45.218177 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 01:35:45.220356 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 13 01:35:45.222272 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 13 01:35:45.224572 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 13 01:35:45.249149 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 13 01:35:45.259231 systemd-journald[1139]: Time spent on flushing to /var/log/journal/230fb8ba9bdf496781317a98bc7e4c71 is 111.164ms for 1142 entries. Sep 13 01:35:45.259231 systemd-journald[1139]: System Journal (/var/log/journal/230fb8ba9bdf496781317a98bc7e4c71) is 8.0M, max 584.8M, 576.8M free. Sep 13 01:35:45.397105 systemd-journald[1139]: Received client request to flush runtime journal. Sep 13 01:35:45.397161 kernel: loop0: detected capacity change from 0 to 224512 Sep 13 01:35:45.397186 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 13 01:35:45.397209 kernel: loop1: detected capacity change from 0 to 142488 Sep 13 01:35:45.267347 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 13 01:35:45.269307 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 13 01:35:45.281137 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 13 01:35:45.313389 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:35:45.352361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 13 01:35:45.353862 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 13 01:35:45.384285 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 13 01:35:45.391064 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 01:35:45.404121 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 13 01:35:45.465891 kernel: loop2: detected capacity change from 0 to 8 Sep 13 01:35:45.473946 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 13 01:35:45.473971 systemd-tmpfiles[1200]: ACLs are not supported, ignoring. Sep 13 01:35:45.484897 kernel: loop3: detected capacity change from 0 to 140768 Sep 13 01:35:45.501449 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 13 01:35:45.553894 kernel: loop4: detected capacity change from 0 to 224512 Sep 13 01:35:45.591133 kernel: loop5: detected capacity change from 0 to 142488 Sep 13 01:35:45.609366 kernel: loop6: detected capacity change from 0 to 8 Sep 13 01:35:45.612884 kernel: loop7: detected capacity change from 0 to 140768 Sep 13 01:35:45.629758 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 13 01:35:45.631177 (sd-merge)[1210]: Merged extensions into '/usr'. Sep 13 01:35:45.633384 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 13 01:35:45.643041 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 13 01:35:45.647053 systemd[1]: Reloading requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Sep 13 01:35:45.647079 systemd[1]: Reloading... Sep 13 01:35:45.773884 zram_generator::config[1237]: No configuration found. Sep 13 01:35:45.941077 ldconfig[1179]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 13 01:35:46.073009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:35:46.138323 systemd[1]: Reloading finished in 490 ms. Sep 13 01:35:46.166908 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 13 01:35:46.171217 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 13 01:35:46.184332 systemd[1]: Starting ensure-sysext.service... Sep 13 01:35:46.193038 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 01:35:46.195158 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 13 01:35:46.207293 systemd[1]: Reloading requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)... Sep 13 01:35:46.207317 systemd[1]: Reloading... Sep 13 01:35:46.283769 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 13 01:35:46.284350 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 13 01:35:46.288967 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 13 01:35:46.289368 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Sep 13 01:35:46.289501 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Sep 13 01:35:46.297767 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 01:35:46.297789 systemd-tmpfiles[1294]: Skipping /boot Sep 13 01:35:46.309927 zram_generator::config[1320]: No configuration found. Sep 13 01:35:46.321148 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Sep 13 01:35:46.321170 systemd-tmpfiles[1294]: Skipping /boot Sep 13 01:35:46.517309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:35:46.583346 systemd[1]: Reloading finished in 375 ms. 
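The (sd-merge) lines above show systemd-sysext discovering the containerd-flatcar, docker-flatcar, kubernetes, and oem-openstack images and merging them into /usr, which is what makes the Kubernetes sysext written during the files stage take effect. A tiny sketch of the discovery half, scanning a few of the directories systemd-sysext is documented to search; the directory list and naming rule here are assumptions for illustration only:

    # List sysext extension images by name, roughly what precedes the
    # "Merged extensions into '/usr'" message above. Illustrative only.
    import os

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def discover_extensions() -> list[str]:
        names = set()
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for entry in os.listdir(d):
                names.add(entry.removesuffix(".raw"))  # both dirs and *.raw images count
        return sorted(names)

    print("Using extensions", ", ".join(f"'{n}'" for n in discover_extensions()))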
Sep 13 01:35:46.603468 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 13 01:35:46.608486 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 01:35:46.622062 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 01:35:46.635120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 13 01:35:46.640285 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 13 01:35:46.649659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 01:35:46.653515 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 01:35:46.663184 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 13 01:35:46.671647 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.673004 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:35:46.681298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 13 01:35:46.689314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 13 01:35:46.694161 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 13 01:35:46.695082 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:35:46.695283 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.709223 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 13 01:35:46.713106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.713376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:35:46.713632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:35:46.713768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.719558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.719885 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 13 01:35:46.752303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 13 01:35:46.754595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 13 01:35:46.754789 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 13 01:35:46.756123 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 13 01:35:46.756375 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 13 01:35:46.762261 systemd[1]: Finished ensure-sysext.service. 
Sep 13 01:35:46.764460 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 13 01:35:46.764720 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 13 01:35:46.771520 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 13 01:35:46.775744 systemd-udevd[1384]: Using default interface naming scheme 'v255'. Sep 13 01:35:46.777322 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 13 01:35:46.777591 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 13 01:35:46.779594 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 13 01:35:46.780087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 13 01:35:46.788993 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 13 01:35:46.789198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 13 01:35:46.798761 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 13 01:35:46.807641 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 13 01:35:46.810964 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 13 01:35:46.836512 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 01:35:46.851657 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 13 01:35:46.854063 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 13 01:35:46.867830 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 13 01:35:46.869007 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 13 01:35:46.881642 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 13 01:35:46.883237 augenrules[1427]: No rules Sep 13 01:35:46.884736 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 01:35:47.011974 systemd-networkd[1415]: lo: Link UP Sep 13 01:35:47.011988 systemd-networkd[1415]: lo: Gained carrier Sep 13 01:35:47.013028 systemd-networkd[1415]: Enumeration completed Sep 13 01:35:47.013199 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 13 01:35:47.022085 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 13 01:35:47.064012 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 13 01:35:47.065041 systemd[1]: Reached target time-set.target - System Time Set. Sep 13 01:35:47.078173 systemd-resolved[1382]: Positive Trust Anchors: Sep 13 01:35:47.078217 systemd-resolved[1382]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 01:35:47.078261 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 01:35:47.089449 systemd-resolved[1382]: Using system hostname 'srv-bbx8z.gb1.brightbox.com'. Sep 13 01:35:47.093600 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 01:35:47.094620 systemd[1]: Reached target network.target - Network. Sep 13 01:35:47.095943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 01:35:47.108127 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 13 01:35:47.121908 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1434) Sep 13 01:35:47.147363 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:35:47.147379 systemd-networkd[1415]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 13 01:35:47.148758 systemd-networkd[1415]: eth0: Link UP Sep 13 01:35:47.148778 systemd-networkd[1415]: eth0: Gained carrier Sep 13 01:35:47.148794 systemd-networkd[1415]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 13 01:35:47.164954 systemd-networkd[1415]: eth0: DHCPv4 address 10.230.67.162/30, gateway 10.230.67.161 acquired from 10.230.67.161 Sep 13 01:35:47.165983 systemd-timesyncd[1407]: Network configuration changed, trying to establish connection. Sep 13 01:35:47.234691 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 13 01:35:47.238302 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 13 01:35:47.245094 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 13 01:35:47.250878 kernel: ACPI: button: Power Button [PWRF] Sep 13 01:35:47.262910 kernel: mousedev: PS/2 mouse device common for all mice Sep 13 01:35:47.278964 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 13 01:35:47.330905 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 13 01:35:47.366480 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 13 01:35:47.367457 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 13 01:35:47.367732 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 13 01:35:47.400652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 01:35:47.597969 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 01:35:47.629679 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 13 01:35:47.636154 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Sep 13 01:35:47.658509 lvm[1466]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:35:47.697733 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 13 01:35:47.699723 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 01:35:47.700566 systemd[1]: Reached target sysinit.target - System Initialization. Sep 13 01:35:47.701467 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 13 01:35:47.702306 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 13 01:35:47.703612 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 13 01:35:47.704474 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 13 01:35:47.705303 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 13 01:35:47.706068 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 13 01:35:47.706126 systemd[1]: Reached target paths.target - Path Units. Sep 13 01:35:47.706779 systemd[1]: Reached target timers.target - Timer Units. Sep 13 01:35:47.709155 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 13 01:35:47.711831 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 13 01:35:47.718109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 13 01:35:47.720739 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 13 01:35:47.722250 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 13 01:35:47.723074 systemd[1]: Reached target sockets.target - Socket Units. Sep 13 01:35:47.723736 systemd[1]: Reached target basic.target - Basic System. Sep 13 01:35:47.724498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 13 01:35:47.724546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 13 01:35:47.728005 systemd[1]: Starting containerd.service - containerd container runtime... Sep 13 01:35:47.733182 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 13 01:35:47.736985 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 13 01:35:47.743749 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 13 01:35:47.748470 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 13 01:35:47.758814 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 13 01:35:47.761213 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 13 01:35:47.765374 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 13 01:35:47.770495 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 01:35:47.773120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 01:35:47.778094 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 01:35:48.452522 systemd-resolved[1382]: Clock change detected. Flushing caches. 
Sep 13 01:35:48.452851 systemd-timesyncd[1407]: Contacted time server 109.74.197.50:123 (0.flatcar.pool.ntp.org). Sep 13 01:35:48.453453 systemd-timesyncd[1407]: Initial clock synchronization to Sat 2025-09-13 01:35:48.452374 UTC. Sep 13 01:35:48.462303 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 01:35:48.466222 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 01:35:48.467089 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 01:35:48.470582 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 01:35:48.476509 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 01:35:48.480731 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 13 01:35:48.489306 jq[1474]: false Sep 13 01:35:48.499098 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 01:35:48.499460 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 01:35:48.512014 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 01:35:48.513080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 01:35:48.523189 extend-filesystems[1475]: Found loop4 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found loop5 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found loop6 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found loop7 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda1 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda2 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda3 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found usr Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda4 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda6 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda7 Sep 13 01:35:48.535436 extend-filesystems[1475]: Found vda9 Sep 13 01:35:48.535436 extend-filesystems[1475]: Checking size of /dev/vda9 Sep 13 01:35:48.569546 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 01:35:48.568750 dbus-daemon[1473]: [system] SELinux support is enabled Sep 13 01:35:48.577253 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 01:35:48.580604 dbus-daemon[1473]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1415 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 13 01:35:48.577312 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 01:35:48.578651 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 01:35:48.578698 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 13 01:35:48.593356 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 13 01:35:48.597776 (ntainerd)[1501]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 01:35:48.602756 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 13 01:35:48.604300 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 01:35:48.611412 jq[1485]: true Sep 13 01:35:48.612412 update_engine[1483]: I20250913 01:35:48.607703 1483 main.cc:92] Flatcar Update Engine starting Sep 13 01:35:48.605618 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 01:35:48.635256 update_engine[1483]: I20250913 01:35:48.634887 1483 update_check_scheduler.cc:74] Next update check in 7m19s Sep 13 01:35:48.635345 extend-filesystems[1475]: Resized partition /dev/vda9 Sep 13 01:35:48.628598 systemd[1]: Started update-engine.service - Update Engine. Sep 13 01:35:48.639978 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Sep 13 01:35:48.638599 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 01:35:48.652275 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 13 01:35:48.652333 jq[1508]: true Sep 13 01:35:48.667650 tar[1494]: linux-amd64/LICENSE Sep 13 01:35:48.669845 tar[1494]: linux-amd64/helm Sep 13 01:35:48.772014 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1437) Sep 13 01:35:48.788296 systemd-logind[1482]: Watching system buttons on /dev/input/event2 (Power Button) Sep 13 01:35:48.788345 systemd-logind[1482]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 01:35:48.793683 systemd-logind[1482]: New seat seat0. Sep 13 01:35:48.812073 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 01:35:48.897513 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 13 01:35:48.899914 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:35:48.897754 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 13 01:35:48.901682 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 01:35:48.908149 dbus-daemon[1473]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1506 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 13 01:35:48.919455 systemd[1]: Starting sshkeys.service... Sep 13 01:35:48.936420 systemd[1]: Starting polkit.service - Authorization Manager... Sep 13 01:35:49.014604 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 01:35:49.028880 polkitd[1538]: Started polkitd version 121 Sep 13 01:35:49.029818 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 13 01:35:49.049452 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 01:35:49.070225 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 13 01:35:49.063805 polkitd[1538]: Loading rules from directory /etc/polkit-1/rules.d Sep 13 01:35:49.063922 polkitd[1538]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 13 01:35:49.086779 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 01:35:49.086779 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 01:35:49.086779 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 01:35:49.084773 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 01:35:49.068638 polkitd[1538]: Finished loading, compiling and executing 2 rules Sep 13 01:35:49.102281 containerd[1501]: time="2025-09-13T01:35:49.089776765Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 01:35:49.102591 extend-filesystems[1475]: Resized filesystem in /dev/vda9 Sep 13 01:35:49.085111 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 01:35:49.069238 dbus-daemon[1473]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 13 01:35:49.069789 polkitd[1538]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 13 01:35:49.117754 systemd-hostnamed[1506]: Hostname set to (static) Sep 13 01:35:49.173521 locksmithd[1513]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 01:35:49.191214 containerd[1501]: time="2025-09-13T01:35:49.190099029Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198379383Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198433158Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198464168Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198755770Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198790675Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198906435Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:35:49.199400 containerd[1501]: time="2025-09-13T01:35:49.198942599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.200631 containerd[1501]: time="2025-09-13T01:35:49.200601017Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:35:49.201391 containerd[1501]: time="2025-09-13T01:35:49.201052454Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.201391 containerd[1501]: time="2025-09-13T01:35:49.201090879Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:35:49.201391 containerd[1501]: time="2025-09-13T01:35:49.201111026Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.201391 containerd[1501]: time="2025-09-13T01:35:49.201239136Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.203394 containerd[1501]: time="2025-09-13T01:35:49.202668094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 01:35:49.203394 containerd[1501]: time="2025-09-13T01:35:49.202887769Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 01:35:49.203394 containerd[1501]: time="2025-09-13T01:35:49.202913851Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 01:35:49.203394 containerd[1501]: time="2025-09-13T01:35:49.203065839Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 01:35:49.203394 containerd[1501]: time="2025-09-13T01:35:49.203163306Z" level=info msg="metadata content store policy set" policy=shared Sep 13 01:35:49.208933 containerd[1501]: time="2025-09-13T01:35:49.208843654Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 01:35:49.209387 containerd[1501]: time="2025-09-13T01:35:49.209044056Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 01:35:49.209387 containerd[1501]: time="2025-09-13T01:35:49.209124011Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 01:35:49.209387 containerd[1501]: time="2025-09-13T01:35:49.209166305Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 01:35:49.209387 containerd[1501]: time="2025-09-13T01:35:49.209204527Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 01:35:49.210565 containerd[1501]: time="2025-09-13T01:35:49.210197806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 01:35:49.210952 containerd[1501]: time="2025-09-13T01:35:49.210910945Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 01:35:49.211484 containerd[1501]: time="2025-09-13T01:35:49.211456806Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 01:35:49.211683 containerd[1501]: time="2025-09-13T01:35:49.211656779Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 01:35:49.211798 containerd[1501]: time="2025-09-13T01:35:49.211775013Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 01:35:49.212219 containerd[1501]: time="2025-09-13T01:35:49.211939852Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212414624Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212471846Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212498298Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212521127Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212541962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212561601Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212585101Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212615538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212652110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212675024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212704213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212726649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212748464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213410 containerd[1501]: time="2025-09-13T01:35:49.212767862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212787045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212807082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212829925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212848998Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212868978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212889106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212912186Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212970422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.212993786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.213862 containerd[1501]: time="2025-09-13T01:35:49.213012936Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.214454950Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215459747Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215495166Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215515907Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215535061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215564322Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215585992Z" level=info msg="NRI interface is disabled by configuration." Sep 13 01:35:49.216382 containerd[1501]: time="2025-09-13T01:35:49.215604162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 01:35:49.216665 containerd[1501]: time="2025-09-13T01:35:49.216048081Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 01:35:49.216665 containerd[1501]: time="2025-09-13T01:35:49.216179811Z" level=info msg="Connect containerd service" Sep 13 01:35:49.216665 containerd[1501]: time="2025-09-13T01:35:49.216244070Z" level=info msg="using legacy CRI server" Sep 13 01:35:49.216665 containerd[1501]: time="2025-09-13T01:35:49.216260347Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 01:35:49.218385 containerd[1501]: time="2025-09-13T01:35:49.217531244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 01:35:49.219985 containerd[1501]: time="2025-09-13T01:35:49.219771457Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:35:49.222822 
containerd[1501]: time="2025-09-13T01:35:49.220435164Z" level=info msg="Start subscribing containerd event" Sep 13 01:35:49.222822 containerd[1501]: time="2025-09-13T01:35:49.222183126Z" level=info msg="Start recovering state" Sep 13 01:35:49.222822 containerd[1501]: time="2025-09-13T01:35:49.222427152Z" level=info msg="Start event monitor" Sep 13 01:35:49.222822 containerd[1501]: time="2025-09-13T01:35:49.222477817Z" level=info msg="Start snapshots syncer" Sep 13 01:35:49.222822 containerd[1501]: time="2025-09-13T01:35:49.222500764Z" level=info msg="Start cni network conf syncer for default" Sep 13 01:35:49.222822 containerd[1501]: time="2025-09-13T01:35:49.222520333Z" level=info msg="Start streaming server" Sep 13 01:35:49.227851 containerd[1501]: time="2025-09-13T01:35:49.227809745Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 01:35:49.228686 containerd[1501]: time="2025-09-13T01:35:49.228658565Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 01:35:49.229444 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 01:35:49.231984 containerd[1501]: time="2025-09-13T01:35:49.231956660Z" level=info msg="containerd successfully booted in 0.159809s" Sep 13 01:35:49.335897 systemd-networkd[1415]: eth0: Gained IPv6LL Sep 13 01:35:49.346011 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 13 01:35:49.347781 systemd[1]: Reached target network-online.target - Network is Online. Sep 13 01:35:49.358776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:35:49.368134 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 01:35:49.439649 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 01:35:49.443551 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 01:35:49.490632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 01:35:49.505605 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 01:35:49.515738 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 01:35:49.517091 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 01:35:49.528844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 01:35:49.546742 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 01:35:49.554909 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 01:35:49.565905 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 01:35:49.567429 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 01:35:49.845791 tar[1494]: linux-amd64/README.md Sep 13 01:35:49.861879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 01:35:50.454532 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:35:50.466119 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:35:50.706987 systemd-networkd[1415]: eth0: Ignoring DHCPv6 address 2a02:1348:179:90e8:24:19ff:fee6:43a2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:90e8:24:19ff:fee6:43a2/64 assigned by NDisc. Sep 13 01:35:50.707001 systemd-networkd[1415]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Sep 13 01:35:51.065062 kubelet[1598]: E0913 01:35:51.064844 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:35:51.068574 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:35:51.068840 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:35:51.069711 systemd[1]: kubelet.service: Consumed 1.066s CPU time. Sep 13 01:35:54.419503 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 01:35:54.438996 systemd[1]: Started sshd@0-10.230.67.162:22-139.178.68.195:55824.service - OpenSSH per-connection server daemon (139.178.68.195:55824). Sep 13 01:35:54.649408 login[1588]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Sep 13 01:35:54.649880 login[1587]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:35:54.669035 systemd-logind[1482]: New session 1 of user core. Sep 13 01:35:54.674471 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 01:35:54.680968 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 01:35:54.712702 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 01:35:54.726852 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 13 01:35:54.804022 (systemd)[1617]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 01:35:54.957090 systemd[1617]: Queued start job for default target default.target. Sep 13 01:35:54.969087 systemd[1617]: Created slice app.slice - User Application Slice. Sep 13 01:35:54.969139 systemd[1617]: Reached target paths.target - Paths. Sep 13 01:35:54.969165 systemd[1617]: Reached target timers.target - Timers. Sep 13 01:35:54.972121 systemd[1617]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 01:35:54.996650 systemd[1617]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 01:35:54.996891 systemd[1617]: Reached target sockets.target - Sockets. Sep 13 01:35:54.996917 systemd[1617]: Reached target basic.target - Basic System. Sep 13 01:35:54.996997 systemd[1617]: Reached target default.target - Main User Target. Sep 13 01:35:54.997061 systemd[1617]: Startup finished in 181ms. Sep 13 01:35:54.997220 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 01:35:55.013739 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 01:35:55.337978 sshd[1609]: Accepted publickey for core from 139.178.68.195 port 55824 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:35:55.340435 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:35:55.348011 systemd-logind[1482]: New session 3 of user core. Sep 13 01:35:55.353658 systemd[1]: Started session-3.scope - Session 3 of User core. 
Sep 13 01:35:55.541740 coreos-metadata[1472]: Sep 13 01:35:55.541 WARN failed to locate config-drive, using the metadata service API instead Sep 13 01:35:55.568522 coreos-metadata[1472]: Sep 13 01:35:55.568 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 13 01:35:55.574255 coreos-metadata[1472]: Sep 13 01:35:55.574 INFO Fetch failed with 404: resource not found Sep 13 01:35:55.574417 coreos-metadata[1472]: Sep 13 01:35:55.574 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 13 01:35:55.575089 coreos-metadata[1472]: Sep 13 01:35:55.575 INFO Fetch successful Sep 13 01:35:55.575207 coreos-metadata[1472]: Sep 13 01:35:55.575 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 13 01:35:55.588468 coreos-metadata[1472]: Sep 13 01:35:55.588 INFO Fetch successful Sep 13 01:35:55.588468 coreos-metadata[1472]: Sep 13 01:35:55.588 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 13 01:35:55.603108 coreos-metadata[1472]: Sep 13 01:35:55.603 INFO Fetch successful Sep 13 01:35:55.603272 coreos-metadata[1472]: Sep 13 01:35:55.603 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 13 01:35:55.623760 coreos-metadata[1472]: Sep 13 01:35:55.623 INFO Fetch successful Sep 13 01:35:55.623960 coreos-metadata[1472]: Sep 13 01:35:55.623 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 13 01:35:55.641662 coreos-metadata[1472]: Sep 13 01:35:55.641 INFO Fetch successful Sep 13 01:35:55.653497 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 13 01:35:55.661465 systemd-logind[1482]: New session 2 of user core. Sep 13 01:35:55.679776 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 01:35:55.682567 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 01:35:55.686699 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 01:35:56.116030 systemd[1]: Started sshd@1-10.230.67.162:22-139.178.68.195:55834.service - OpenSSH per-connection server daemon (139.178.68.195:55834). Sep 13 01:35:56.188838 coreos-metadata[1541]: Sep 13 01:35:56.188 WARN failed to locate config-drive, using the metadata service API instead Sep 13 01:35:56.212679 coreos-metadata[1541]: Sep 13 01:35:56.212 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 13 01:35:56.249164 coreos-metadata[1541]: Sep 13 01:35:56.249 INFO Fetch successful Sep 13 01:35:56.249404 coreos-metadata[1541]: Sep 13 01:35:56.249 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 13 01:35:56.295678 coreos-metadata[1541]: Sep 13 01:35:56.295 INFO Fetch successful Sep 13 01:35:56.298303 unknown[1541]: wrote ssh authorized keys file for user: core Sep 13 01:35:56.319312 update-ssh-keys[1660]: Updated "/home/core/.ssh/authorized_keys" Sep 13 01:35:56.320234 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 01:35:56.323113 systemd[1]: Finished sshkeys.service. Sep 13 01:35:56.327590 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 01:35:56.327934 systemd[1]: Startup finished in 1.311s (kernel) + 15.180s (initrd) + 11.777s (userspace) = 28.270s. 
Sep 13 01:35:56.995078 sshd[1655]: Accepted publickey for core from 139.178.68.195 port 55834 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:35:56.997222 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:35:57.006290 systemd-logind[1482]: New session 4 of user core. Sep 13 01:35:57.012618 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 01:35:57.612664 sshd[1655]: pam_unix(sshd:session): session closed for user core Sep 13 01:35:57.616768 systemd-logind[1482]: Session 4 logged out. Waiting for processes to exit. Sep 13 01:35:57.617289 systemd[1]: sshd@1-10.230.67.162:22-139.178.68.195:55834.service: Deactivated successfully. Sep 13 01:35:57.619499 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 01:35:57.621610 systemd-logind[1482]: Removed session 4. Sep 13 01:35:57.775867 systemd[1]: Started sshd@2-10.230.67.162:22-139.178.68.195:55840.service - OpenSSH per-connection server daemon (139.178.68.195:55840). Sep 13 01:35:58.659457 sshd[1668]: Accepted publickey for core from 139.178.68.195 port 55840 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:35:58.661622 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:35:58.669612 systemd-logind[1482]: New session 5 of user core. Sep 13 01:35:58.676736 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 01:35:59.275125 sshd[1668]: pam_unix(sshd:session): session closed for user core Sep 13 01:35:59.281646 systemd[1]: sshd@2-10.230.67.162:22-139.178.68.195:55840.service: Deactivated successfully. Sep 13 01:35:59.283883 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 01:35:59.285724 systemd-logind[1482]: Session 5 logged out. Waiting for processes to exit. Sep 13 01:35:59.287348 systemd-logind[1482]: Removed session 5. Sep 13 01:35:59.457700 systemd[1]: Started sshd@3-10.230.67.162:22-139.178.68.195:55854.service - OpenSSH per-connection server daemon (139.178.68.195:55854). Sep 13 01:36:00.341139 sshd[1675]: Accepted publickey for core from 139.178.68.195 port 55854 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:36:00.343313 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:36:00.351358 systemd-logind[1482]: New session 6 of user core. Sep 13 01:36:00.357673 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 01:36:00.961032 sshd[1675]: pam_unix(sshd:session): session closed for user core Sep 13 01:36:00.965093 systemd[1]: sshd@3-10.230.67.162:22-139.178.68.195:55854.service: Deactivated successfully. Sep 13 01:36:00.967331 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 01:36:00.969326 systemd-logind[1482]: Session 6 logged out. Waiting for processes to exit. Sep 13 01:36:00.971188 systemd-logind[1482]: Removed session 6. Sep 13 01:36:01.118519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 01:36:01.128662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:01.131498 systemd[1]: Started sshd@4-10.230.67.162:22-139.178.68.195:36532.service - OpenSSH per-connection server daemon (139.178.68.195:36532). Sep 13 01:36:01.306587 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 13 01:36:01.326985 (kubelet)[1692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:36:01.410648 kubelet[1692]: E0913 01:36:01.410546 1692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:36:01.414225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:36:01.414515 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:36:02.024778 sshd[1683]: Accepted publickey for core from 139.178.68.195 port 36532 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:36:02.027035 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:36:02.034647 systemd-logind[1482]: New session 7 of user core. Sep 13 01:36:02.047614 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 01:36:02.518001 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 01:36:02.519145 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:36:02.536040 sudo[1700]: pam_unix(sudo:session): session closed for user root Sep 13 01:36:02.680801 sshd[1683]: pam_unix(sshd:session): session closed for user core Sep 13 01:36:02.686111 systemd[1]: sshd@4-10.230.67.162:22-139.178.68.195:36532.service: Deactivated successfully. Sep 13 01:36:02.688985 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 01:36:02.690536 systemd-logind[1482]: Session 7 logged out. Waiting for processes to exit. Sep 13 01:36:02.691928 systemd-logind[1482]: Removed session 7. Sep 13 01:36:02.837607 systemd[1]: Started sshd@5-10.230.67.162:22-139.178.68.195:36544.service - OpenSSH per-connection server daemon (139.178.68.195:36544). Sep 13 01:36:03.755826 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 36544 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:36:03.758349 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:36:03.765647 systemd-logind[1482]: New session 8 of user core. Sep 13 01:36:03.772617 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 01:36:04.249033 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 01:36:04.250116 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:36:04.255615 sudo[1709]: pam_unix(sudo:session): session closed for user root Sep 13 01:36:04.263948 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 01:36:04.264403 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:36:04.284767 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 13 01:36:04.288554 auditctl[1712]: No rules Sep 13 01:36:04.289085 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 01:36:04.289403 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 01:36:04.297843 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Sep 13 01:36:04.347023 augenrules[1730]: No rules Sep 13 01:36:04.349408 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 01:36:04.351240 sudo[1708]: pam_unix(sudo:session): session closed for user root Sep 13 01:36:04.499226 sshd[1705]: pam_unix(sshd:session): session closed for user core Sep 13 01:36:04.503882 systemd[1]: sshd@5-10.230.67.162:22-139.178.68.195:36544.service: Deactivated successfully. Sep 13 01:36:04.506826 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 01:36:04.509006 systemd-logind[1482]: Session 8 logged out. Waiting for processes to exit. Sep 13 01:36:04.510497 systemd-logind[1482]: Removed session 8. Sep 13 01:36:04.672218 systemd[1]: Started sshd@6-10.230.67.162:22-139.178.68.195:36560.service - OpenSSH per-connection server daemon (139.178.68.195:36560). Sep 13 01:36:05.561758 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 36560 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:36:05.563884 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:36:05.570016 systemd-logind[1482]: New session 9 of user core. Sep 13 01:36:05.576685 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 01:36:06.041751 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 01:36:06.042210 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 01:36:06.467052 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 01:36:06.467868 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 01:36:06.904522 dockerd[1758]: time="2025-09-13T01:36:06.903825996Z" level=info msg="Starting up" Sep 13 01:36:07.068087 dockerd[1758]: time="2025-09-13T01:36:07.067687086Z" level=info msg="Loading containers: start." Sep 13 01:36:07.205427 kernel: Initializing XFRM netlink socket Sep 13 01:36:07.312135 systemd-networkd[1415]: docker0: Link UP Sep 13 01:36:07.328491 dockerd[1758]: time="2025-09-13T01:36:07.328419742Z" level=info msg="Loading containers: done." Sep 13 01:36:07.349398 dockerd[1758]: time="2025-09-13T01:36:07.349288525Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 01:36:07.349655 dockerd[1758]: time="2025-09-13T01:36:07.349464404Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 01:36:07.349655 dockerd[1758]: time="2025-09-13T01:36:07.349639821Z" level=info msg="Daemon has completed initialization" Sep 13 01:36:07.402447 dockerd[1758]: time="2025-09-13T01:36:07.401238971Z" level=info msg="API listen on /run/docker.sock" Sep 13 01:36:07.404959 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 13 01:36:08.831293 containerd[1501]: time="2025-09-13T01:36:08.830632490Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 13 01:36:09.958906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223234831.mount: Deactivated successfully. Sep 13 01:36:11.449114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 13 01:36:11.462756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:11.694011 (kubelet)[1966]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:36:11.694241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:11.769748 kubelet[1966]: E0913 01:36:11.769173 1966 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:36:11.772778 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:36:11.773045 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:36:11.936401 containerd[1501]: time="2025-09-13T01:36:11.935865433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:11.937566 containerd[1501]: time="2025-09-13T01:36:11.937108134Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924" Sep 13 01:36:11.938298 containerd[1501]: time="2025-09-13T01:36:11.938245080Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:11.942645 containerd[1501]: time="2025-09-13T01:36:11.942606622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:11.946900 containerd[1501]: time="2025-09-13T01:36:11.946861015Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.116112526s" Sep 13 01:36:11.946982 containerd[1501]: time="2025-09-13T01:36:11.946932181Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Sep 13 01:36:11.949638 containerd[1501]: time="2025-09-13T01:36:11.949597262Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 13 01:36:13.950105 containerd[1501]: time="2025-09-13T01:36:13.949909179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:13.951477 containerd[1501]: time="2025-09-13T01:36:13.951397421Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035" Sep 13 01:36:13.953025 containerd[1501]: time="2025-09-13T01:36:13.952864520Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:13.959275 containerd[1501]: time="2025-09-13T01:36:13.959206080Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:13.961113 containerd[1501]: time="2025-09-13T01:36:13.961041944Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.011387309s" Sep 13 01:36:13.961113 containerd[1501]: time="2025-09-13T01:36:13.961101217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Sep 13 01:36:13.963686 containerd[1501]: time="2025-09-13T01:36:13.963201762Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 13 01:36:15.754255 containerd[1501]: time="2025-09-13T01:36:15.754072959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:15.756410 containerd[1501]: time="2025-09-13T01:36:15.755985516Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297" Sep 13 01:36:15.757355 containerd[1501]: time="2025-09-13T01:36:15.757299310Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:15.761177 containerd[1501]: time="2025-09-13T01:36:15.761142192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:15.763354 containerd[1501]: time="2025-09-13T01:36:15.762677331Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.799432714s" Sep 13 01:36:15.763354 containerd[1501]: time="2025-09-13T01:36:15.762826806Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Sep 13 01:36:15.764764 containerd[1501]: time="2025-09-13T01:36:15.764735148Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 13 01:36:18.665207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514666853.mount: Deactivated successfully. 
Sep 13 01:36:19.447452 containerd[1501]: time="2025-09-13T01:36:19.446423343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:19.448789 containerd[1501]: time="2025-09-13T01:36:19.448733406Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214" Sep 13 01:36:19.449990 containerd[1501]: time="2025-09-13T01:36:19.449918627Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:19.454196 containerd[1501]: time="2025-09-13T01:36:19.453138503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:19.454196 containerd[1501]: time="2025-09-13T01:36:19.453856445Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 3.688811607s" Sep 13 01:36:19.454196 containerd[1501]: time="2025-09-13T01:36:19.453905075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Sep 13 01:36:19.456011 containerd[1501]: time="2025-09-13T01:36:19.455966370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 01:36:20.083526 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount121162030.mount: Deactivated successfully. Sep 13 01:36:20.745755 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Sep 13 01:36:21.277313 containerd[1501]: time="2025-09-13T01:36:21.277188146Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.280463 containerd[1501]: time="2025-09-13T01:36:21.280391688Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 13 01:36:21.280911 containerd[1501]: time="2025-09-13T01:36:21.280854927Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.287405 containerd[1501]: time="2025-09-13T01:36:21.285979290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.287676 containerd[1501]: time="2025-09-13T01:36:21.287641169Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.831618508s" Sep 13 01:36:21.287843 containerd[1501]: time="2025-09-13T01:36:21.287816542Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 01:36:21.288979 containerd[1501]: time="2025-09-13T01:36:21.288952588Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 01:36:21.876047 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 13 01:36:21.888676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:21.895422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546192039.mount: Deactivated successfully. 
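The images fetched in this stretch (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and coredns, with pause and etcd following below) are the standard kubeadm control-plane set for v1.32.9. On a host with kubeadm available they could also be pre-pulled in one step, roughly:

    kubeadm config images pull --kubernetes-version v1.32.9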
Sep 13 01:36:21.900215 containerd[1501]: time="2025-09-13T01:36:21.900122995Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.902560 containerd[1501]: time="2025-09-13T01:36:21.902457662Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 13 01:36:21.904432 containerd[1501]: time="2025-09-13T01:36:21.903558881Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.907033 containerd[1501]: time="2025-09-13T01:36:21.907003439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:21.908883 containerd[1501]: time="2025-09-13T01:36:21.908837896Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 619.752593ms" Sep 13 01:36:21.909578 containerd[1501]: time="2025-09-13T01:36:21.909550675Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 01:36:21.910514 containerd[1501]: time="2025-09-13T01:36:21.910279297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 13 01:36:22.080383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:22.094846 (kubelet)[2057]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 01:36:22.189260 kubelet[2057]: E0913 01:36:22.188489 2057 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 01:36:22.191607 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 01:36:22.191872 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 01:36:22.605948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1217699342.mount: Deactivated successfully. 
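With the kubelet cycling like this, the quickest way to confirm that the missing config file, and not something else, is behind the status=1 exit is to ask systemd and the journal directly, e.g.:

    systemctl status kubelet.service
    journalctl -u kubelet.service -n 50 --no-pager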
Sep 13 01:36:26.362464 containerd[1501]: time="2025-09-13T01:36:26.361141179Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:26.364112 containerd[1501]: time="2025-09-13T01:36:26.362714305Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Sep 13 01:36:26.364345 containerd[1501]: time="2025-09-13T01:36:26.364284696Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:26.371200 containerd[1501]: time="2025-09-13T01:36:26.371118312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:26.373861 containerd[1501]: time="2025-09-13T01:36:26.372772910Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.462114514s" Sep 13 01:36:26.373861 containerd[1501]: time="2025-09-13T01:36:26.372839394Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Sep 13 01:36:30.826971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:30.841770 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:30.879589 systemd[1]: Reloading requested from client PID 2146 ('systemctl') (unit session-9.scope)... Sep 13 01:36:30.879646 systemd[1]: Reloading... Sep 13 01:36:31.048253 zram_generator::config[2188]: No configuration found. Sep 13 01:36:31.238424 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:36:31.349290 systemd[1]: Reloading finished in 468 ms. Sep 13 01:36:31.417493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:31.431896 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:31.432501 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:36:31.432937 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:31.438762 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:31.687777 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:31.699928 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 01:36:31.764347 kubelet[2254]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:36:31.765419 kubelet[2254]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
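The deprecation warnings here, and the --volume-plugin-dir one just below, all point at the same remedy: these values have KubeletConfiguration equivalents and could migrate out of the command line into the config file, roughly as in the sketch below. The socket path is the conventional containerd one and is an assumption; the volume plugin directory matches the path the kubelet recreates a few lines further on; the sandbox image itself no longer has a config-file field and, per the warning, is expected to come from the CRI (containerd) configuration instead.

    # KubeletConfiguration fields replacing the deprecated flags (sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/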
Sep 13 01:36:31.765419 kubelet[2254]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:36:31.765419 kubelet[2254]: I0913 01:36:31.765160 2254 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:36:32.526443 kubelet[2254]: I0913 01:36:32.526288 2254 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 01:36:32.526443 kubelet[2254]: I0913 01:36:32.526377 2254 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:36:32.526905 kubelet[2254]: I0913 01:36:32.526781 2254 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 01:36:32.567392 kubelet[2254]: E0913 01:36:32.566437 2254 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.67.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:32.567936 kubelet[2254]: I0913 01:36:32.567904 2254 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:36:32.581259 kubelet[2254]: E0913 01:36:32.581160 2254 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:36:32.581601 kubelet[2254]: I0913 01:36:32.581578 2254 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:36:32.590602 kubelet[2254]: I0913 01:36:32.590574 2254 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:36:32.592839 kubelet[2254]: I0913 01:36:32.592773 2254 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:36:32.593124 kubelet[2254]: I0913 01:36:32.592837 2254 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-bbx8z.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:36:32.594825 kubelet[2254]: I0913 01:36:32.594792 2254 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:36:32.594825 kubelet[2254]: I0913 01:36:32.594822 2254 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 01:36:32.596176 kubelet[2254]: I0913 01:36:32.596122 2254 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:36:32.606331 kubelet[2254]: I0913 01:36:32.606014 2254 kubelet.go:446] "Attempting to sync node with API server" Sep 13 01:36:32.606331 kubelet[2254]: I0913 01:36:32.606076 2254 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:36:32.606331 kubelet[2254]: I0913 01:36:32.606123 2254 kubelet.go:352] "Adding apiserver pod source" Sep 13 01:36:32.606331 kubelet[2254]: I0913 01:36:32.606148 2254 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:36:32.608389 kubelet[2254]: W0913 01:36:32.607977 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.67.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-bbx8z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:32.608389 kubelet[2254]: E0913 01:36:32.608100 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.67.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-bbx8z.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 
01:36:32.612731 kubelet[2254]: W0913 01:36:32.612529 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.67.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:32.612731 kubelet[2254]: E0913 01:36:32.612582 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.67.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:32.614661 kubelet[2254]: I0913 01:36:32.614604 2254 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 01:36:32.618440 kubelet[2254]: I0913 01:36:32.618415 2254 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:36:32.620409 kubelet[2254]: W0913 01:36:32.619339 2254 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 01:36:32.622444 kubelet[2254]: I0913 01:36:32.622419 2254 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:36:32.622857 kubelet[2254]: I0913 01:36:32.622599 2254 server.go:1287] "Started kubelet" Sep 13 01:36:32.626085 kubelet[2254]: I0913 01:36:32.625313 2254 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:36:32.628313 kubelet[2254]: I0913 01:36:32.626867 2254 server.go:479] "Adding debug handlers to kubelet server" Sep 13 01:36:32.628313 kubelet[2254]: I0913 01:36:32.627447 2254 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:36:32.628313 kubelet[2254]: I0913 01:36:32.628217 2254 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:36:32.633368 kubelet[2254]: E0913 01:36:32.630575 2254 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.67.162:6443/api/v1/namespaces/default/events\": dial tcp 10.230.67.162:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-bbx8z.gb1.brightbox.com.1864b3c0313dbb28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-bbx8z.gb1.brightbox.com,UID:srv-bbx8z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-bbx8z.gb1.brightbox.com,},FirstTimestamp:2025-09-13 01:36:32.622557992 +0000 UTC m=+0.914339464,LastTimestamp:2025-09-13 01:36:32.622557992 +0000 UTC m=+0.914339464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-bbx8z.gb1.brightbox.com,}" Sep 13 01:36:32.635227 kubelet[2254]: I0913 01:36:32.634792 2254 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:36:32.638146 kubelet[2254]: I0913 01:36:32.637277 2254 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:36:32.638146 kubelet[2254]: E0913 01:36:32.637756 2254 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" Sep 13 01:36:32.640715 kubelet[2254]: I0913 
01:36:32.638598 2254 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:36:32.640715 kubelet[2254]: I0913 01:36:32.638704 2254 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:36:32.640715 kubelet[2254]: I0913 01:36:32.639791 2254 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:36:32.648385 kubelet[2254]: W0913 01:36:32.648317 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.67.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:32.648585 kubelet[2254]: E0913 01:36:32.648555 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.67.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:32.648839 kubelet[2254]: E0913 01:36:32.648788 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.67.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-bbx8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.67.162:6443: connect: connection refused" interval="200ms" Sep 13 01:36:32.651088 kubelet[2254]: I0913 01:36:32.651060 2254 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:36:32.653394 kubelet[2254]: I0913 01:36:32.653347 2254 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:36:32.654264 kubelet[2254]: E0913 01:36:32.654236 2254 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:36:32.656791 kubelet[2254]: I0913 01:36:32.656768 2254 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:36:32.672386 kubelet[2254]: I0913 01:36:32.670465 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:36:32.672386 kubelet[2254]: I0913 01:36:32.672110 2254 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:36:32.672386 kubelet[2254]: I0913 01:36:32.672168 2254 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 01:36:32.672386 kubelet[2254]: I0913 01:36:32.672214 2254 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
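Every "connection refused" against https://10.230.67.162:6443 in this stretch is the usual bootstrap chicken-and-egg rather than a fault: the kubelet is up, but the kube-apiserver it keeps trying to reach only comes into existence once the static pod manifests under /etc/kubernetes/manifests (added as a pod source above) are running. Until the control-plane sandboxes created further below are started, a manual probe of the endpoint would fail the same way:

    curl -k https://10.230.67.162:6443/healthz   # connection refused until the kube-apiserver static pod is up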
Sep 13 01:36:32.672386 kubelet[2254]: I0913 01:36:32.672228 2254 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 01:36:32.672386 kubelet[2254]: E0913 01:36:32.672331 2254 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:36:32.685228 kubelet[2254]: W0913 01:36:32.685164 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.67.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:32.685538 kubelet[2254]: E0913 01:36:32.685478 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.67.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:32.702251 kubelet[2254]: I0913 01:36:32.702198 2254 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:36:32.702251 kubelet[2254]: I0913 01:36:32.702227 2254 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:36:32.702251 kubelet[2254]: I0913 01:36:32.702258 2254 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:36:32.704701 kubelet[2254]: I0913 01:36:32.704662 2254 policy_none.go:49] "None policy: Start" Sep 13 01:36:32.704789 kubelet[2254]: I0913 01:36:32.704705 2254 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:36:32.704789 kubelet[2254]: I0913 01:36:32.704736 2254 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:36:32.718717 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 13 01:36:32.738865 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 13 01:36:32.739539 kubelet[2254]: E0913 01:36:32.739495 2254 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" Sep 13 01:36:32.744879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 13 01:36:32.754128 kubelet[2254]: I0913 01:36:32.754057 2254 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:36:32.754420 kubelet[2254]: I0913 01:36:32.754391 2254 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:36:32.754487 kubelet[2254]: I0913 01:36:32.754436 2254 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:36:32.756696 kubelet[2254]: I0913 01:36:32.756657 2254 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:36:32.758413 kubelet[2254]: E0913 01:36:32.758348 2254 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 01:36:32.759152 kubelet[2254]: E0913 01:36:32.758444 2254 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-bbx8z.gb1.brightbox.com\" not found" Sep 13 01:36:32.790888 systemd[1]: Created slice kubepods-burstable-pod666e930dc6f634320d83a1f8e8f0cdd2.slice - libcontainer container kubepods-burstable-pod666e930dc6f634320d83a1f8e8f0cdd2.slice. Sep 13 01:36:32.805895 kubelet[2254]: E0913 01:36:32.805852 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.812041 systemd[1]: Created slice kubepods-burstable-pod0fa81a883f454eff649105381fbe6f90.slice - libcontainer container kubepods-burstable-pod0fa81a883f454eff649105381fbe6f90.slice. Sep 13 01:36:32.824477 kubelet[2254]: E0913 01:36:32.823698 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.827885 systemd[1]: Created slice kubepods-burstable-poddf618bcac0f01f038a9daafb8d7e8e8b.slice - libcontainer container kubepods-burstable-poddf618bcac0f01f038a9daafb8d7e8e8b.slice. Sep 13 01:36:32.831357 kubelet[2254]: E0913 01:36:32.831071 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840599 kubelet[2254]: I0913 01:36:32.840296 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-k8s-certs\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840599 kubelet[2254]: I0913 01:36:32.840345 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-ca-certs\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840599 kubelet[2254]: I0913 01:36:32.840415 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-flexvolume-dir\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840599 kubelet[2254]: I0913 01:36:32.840444 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-k8s-certs\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840599 kubelet[2254]: I0913 01:36:32.840474 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-ca-certs\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840917 kubelet[2254]: I0913 01:36:32.840508 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-usr-share-ca-certificates\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840917 kubelet[2254]: I0913 01:36:32.840538 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-kubeconfig\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840917 kubelet[2254]: I0913 01:36:32.840564 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.840917 kubelet[2254]: I0913 01:36:32.840609 2254 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df618bcac0f01f038a9daafb8d7e8e8b-kubeconfig\") pod \"kube-scheduler-srv-bbx8z.gb1.brightbox.com\" (UID: \"df618bcac0f01f038a9daafb8d7e8e8b\") " pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.849925 kubelet[2254]: E0913 01:36:32.849872 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.67.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-bbx8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.67.162:6443: connect: connection refused" interval="400ms" Sep 13 01:36:32.857926 kubelet[2254]: I0913 01:36:32.857462 2254 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:32.857926 kubelet[2254]: E0913 01:36:32.857875 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.67.162:6443/api/v1/nodes\": dial tcp 10.230.67.162:6443: connect: connection refused" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:33.061169 kubelet[2254]: I0913 01:36:33.060966 2254 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:33.061447 kubelet[2254]: E0913 01:36:33.061416 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.67.162:6443/api/v1/nodes\": dial tcp 10.230.67.162:6443: connect: connection refused" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:33.108697 containerd[1501]: time="2025-09-13T01:36:33.108487302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-bbx8z.gb1.brightbox.com,Uid:666e930dc6f634320d83a1f8e8f0cdd2,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:33.125107 containerd[1501]: 
time="2025-09-13T01:36:33.125012011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-bbx8z.gb1.brightbox.com,Uid:0fa81a883f454eff649105381fbe6f90,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:33.133282 containerd[1501]: time="2025-09-13T01:36:33.132935372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-bbx8z.gb1.brightbox.com,Uid:df618bcac0f01f038a9daafb8d7e8e8b,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:33.251126 kubelet[2254]: E0913 01:36:33.251063 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.67.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-bbx8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.67.162:6443: connect: connection refused" interval="800ms" Sep 13 01:36:33.464952 kubelet[2254]: I0913 01:36:33.464814 2254 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:33.465594 kubelet[2254]: E0913 01:36:33.465559 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.67.162:6443/api/v1/nodes\": dial tcp 10.230.67.162:6443: connect: connection refused" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:33.549806 kubelet[2254]: W0913 01:36:33.549723 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.67.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-bbx8z.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:33.549975 kubelet[2254]: E0913 01:36:33.549819 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.67.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-bbx8z.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:33.680460 kubelet[2254]: W0913 01:36:33.680310 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.67.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:33.680460 kubelet[2254]: E0913 01:36:33.680394 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.67.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:33.721016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185884938.mount: Deactivated successfully. 
Sep 13 01:36:33.728335 containerd[1501]: time="2025-09-13T01:36:33.727053832Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 01:36:33.728694 containerd[1501]: time="2025-09-13T01:36:33.728612160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 01:36:33.730395 containerd[1501]: time="2025-09-13T01:36:33.729467180Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 01:36:33.730659 containerd[1501]: time="2025-09-13T01:36:33.730578377Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 01:36:33.734978 containerd[1501]: time="2025-09-13T01:36:33.734239164Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 01:36:33.734978 containerd[1501]: time="2025-09-13T01:36:33.734488005Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 01:36:33.734978 containerd[1501]: time="2025-09-13T01:36:33.734923361Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 13 01:36:33.741709 containerd[1501]: time="2025-09-13T01:36:33.741612853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 01:36:33.744234 containerd[1501]: time="2025-09-13T01:36:33.744200268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.094646ms" Sep 13 01:36:33.749313 containerd[1501]: time="2025-09-13T01:36:33.748699242Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.660507ms" Sep 13 01:36:33.750661 containerd[1501]: time="2025-09-13T01:36:33.750533907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.841863ms" Sep 13 01:36:33.789037 kubelet[2254]: W0913 01:36:33.782677 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.67.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 
01:36:33.789037 kubelet[2254]: E0913 01:36:33.782762 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.67.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:33.947448 containerd[1501]: time="2025-09-13T01:36:33.947017352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:33.947448 containerd[1501]: time="2025-09-13T01:36:33.947116065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:33.947448 containerd[1501]: time="2025-09-13T01:36:33.947146838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.947448 containerd[1501]: time="2025-09-13T01:36:33.947308883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.953746639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.953890947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.953920531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.954181792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.954809242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.954898824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.954924162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.973441 containerd[1501]: time="2025-09-13T01:36:33.955873029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:33.996616 systemd[1]: Started cri-containerd-291ffef29682f16ae272a054a1844220952cc818a2dabae7b7efc521ae2ba59a.scope - libcontainer container 291ffef29682f16ae272a054a1844220952cc818a2dabae7b7efc521ae2ba59a. Sep 13 01:36:34.007158 systemd[1]: Started cri-containerd-e2dbd782a88ae3abcf4f66206daeb59d7261fd10f181e4db819ff782ba05ecde.scope - libcontainer container e2dbd782a88ae3abcf4f66206daeb59d7261fd10f181e4db819ff782ba05ecde. 
Sep 13 01:36:34.014322 systemd[1]: Started cri-containerd-a642a1dfd1420ad5b09efbb089869d6208d6407e08e97a55e3a31437eb0f3561.scope - libcontainer container a642a1dfd1420ad5b09efbb089869d6208d6407e08e97a55e3a31437eb0f3561. Sep 13 01:36:34.053729 kubelet[2254]: E0913 01:36:34.053599 2254 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.67.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-bbx8z.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.67.162:6443: connect: connection refused" interval="1.6s" Sep 13 01:36:34.059490 update_engine[1483]: I20250913 01:36:34.058095 1483 update_attempter.cc:509] Updating boot flags... Sep 13 01:36:34.180406 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2402) Sep 13 01:36:34.186353 containerd[1501]: time="2025-09-13T01:36:34.186299369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-bbx8z.gb1.brightbox.com,Uid:df618bcac0f01f038a9daafb8d7e8e8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"291ffef29682f16ae272a054a1844220952cc818a2dabae7b7efc521ae2ba59a\"" Sep 13 01:36:34.200590 kubelet[2254]: W0913 01:36:34.200025 2254 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.67.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.67.162:6443: connect: connection refused Sep 13 01:36:34.200590 kubelet[2254]: E0913 01:36:34.200153 2254 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.67.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:34.219474 containerd[1501]: time="2025-09-13T01:36:34.217670117Z" level=info msg="CreateContainer within sandbox \"291ffef29682f16ae272a054a1844220952cc818a2dabae7b7efc521ae2ba59a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 01:36:34.219474 containerd[1501]: time="2025-09-13T01:36:34.218003352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-bbx8z.gb1.brightbox.com,Uid:666e930dc6f634320d83a1f8e8f0cdd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"a642a1dfd1420ad5b09efbb089869d6208d6407e08e97a55e3a31437eb0f3561\"" Sep 13 01:36:34.219474 containerd[1501]: time="2025-09-13T01:36:34.218770883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-bbx8z.gb1.brightbox.com,Uid:0fa81a883f454eff649105381fbe6f90,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2dbd782a88ae3abcf4f66206daeb59d7261fd10f181e4db819ff782ba05ecde\"" Sep 13 01:36:34.229426 containerd[1501]: time="2025-09-13T01:36:34.229217390Z" level=info msg="CreateContainer within sandbox \"e2dbd782a88ae3abcf4f66206daeb59d7261fd10f181e4db819ff782ba05ecde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 01:36:34.230482 containerd[1501]: time="2025-09-13T01:36:34.230438808Z" level=info msg="CreateContainer within sandbox \"a642a1dfd1420ad5b09efbb089869d6208d6407e08e97a55e3a31437eb0f3561\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 01:36:34.263360 containerd[1501]: time="2025-09-13T01:36:34.263267766Z" level=info msg="CreateContainer within sandbox \"e2dbd782a88ae3abcf4f66206daeb59d7261fd10f181e4db819ff782ba05ecde\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4ef7c11b63b6c35087e3c8af4abbc22d18f8838ed29b335350f52a093f357a2\"" Sep 13 01:36:34.264463 containerd[1501]: time="2025-09-13T01:36:34.264430575Z" level=info msg="CreateContainer within sandbox \"291ffef29682f16ae272a054a1844220952cc818a2dabae7b7efc521ae2ba59a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e2c27a11428f834c9353514f0a31ff39223c5b8edf237ee4b32f1deb7001d0da\"" Sep 13 01:36:34.264769 containerd[1501]: time="2025-09-13T01:36:34.264740439Z" level=info msg="StartContainer for \"a4ef7c11b63b6c35087e3c8af4abbc22d18f8838ed29b335350f52a093f357a2\"" Sep 13 01:36:34.265471 containerd[1501]: time="2025-09-13T01:36:34.265087137Z" level=info msg="StartContainer for \"e2c27a11428f834c9353514f0a31ff39223c5b8edf237ee4b32f1deb7001d0da\"" Sep 13 01:36:34.268831 containerd[1501]: time="2025-09-13T01:36:34.268791112Z" level=info msg="CreateContainer within sandbox \"a642a1dfd1420ad5b09efbb089869d6208d6407e08e97a55e3a31437eb0f3561\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3be930aa0c84204279724e1fecd664c9fd91581235fd1d25ed46703914330ce3\"" Sep 13 01:36:34.271966 kubelet[2254]: I0913 01:36:34.271343 2254 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:34.271966 kubelet[2254]: E0913 01:36:34.271892 2254 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.67.162:6443/api/v1/nodes\": dial tcp 10.230.67.162:6443: connect: connection refused" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:34.272219 containerd[1501]: time="2025-09-13T01:36:34.272190408Z" level=info msg="StartContainer for \"3be930aa0c84204279724e1fecd664c9fd91581235fd1d25ed46703914330ce3\"" Sep 13 01:36:34.315828 systemd[1]: Started cri-containerd-e2c27a11428f834c9353514f0a31ff39223c5b8edf237ee4b32f1deb7001d0da.scope - libcontainer container e2c27a11428f834c9353514f0a31ff39223c5b8edf237ee4b32f1deb7001d0da. Sep 13 01:36:34.333567 systemd[1]: Started cri-containerd-3be930aa0c84204279724e1fecd664c9fd91581235fd1d25ed46703914330ce3.scope - libcontainer container 3be930aa0c84204279724e1fecd664c9fd91581235fd1d25ed46703914330ce3. Sep 13 01:36:34.343649 systemd[1]: Started cri-containerd-a4ef7c11b63b6c35087e3c8af4abbc22d18f8838ed29b335350f52a093f357a2.scope - libcontainer container a4ef7c11b63b6c35087e3c8af4abbc22d18f8838ed29b335350f52a093f357a2. 
Sep 13 01:36:34.440487 containerd[1501]: time="2025-09-13T01:36:34.440360722Z" level=info msg="StartContainer for \"a4ef7c11b63b6c35087e3c8af4abbc22d18f8838ed29b335350f52a093f357a2\" returns successfully" Sep 13 01:36:34.464970 containerd[1501]: time="2025-09-13T01:36:34.464173113Z" level=info msg="StartContainer for \"3be930aa0c84204279724e1fecd664c9fd91581235fd1d25ed46703914330ce3\" returns successfully" Sep 13 01:36:34.464970 containerd[1501]: time="2025-09-13T01:36:34.464173120Z" level=info msg="StartContainer for \"e2c27a11428f834c9353514f0a31ff39223c5b8edf237ee4b32f1deb7001d0da\" returns successfully" Sep 13 01:36:34.587601 kubelet[2254]: E0913 01:36:34.587424 2254 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.67.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.67.162:6443: connect: connection refused" logger="UnhandledError" Sep 13 01:36:34.705339 kubelet[2254]: E0913 01:36:34.705283 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:34.712408 kubelet[2254]: E0913 01:36:34.709689 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:34.717383 kubelet[2254]: E0913 01:36:34.715314 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:35.720399 kubelet[2254]: E0913 01:36:35.717810 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:35.723932 kubelet[2254]: E0913 01:36:35.721730 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:35.879228 kubelet[2254]: I0913 01:36:35.878774 2254 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:36.724818 kubelet[2254]: E0913 01:36:36.724374 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.368892 kubelet[2254]: E0913 01:36:37.368814 2254 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.435163 kubelet[2254]: E0913 01:36:37.434669 2254 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-bbx8z.gb1.brightbox.com.1864b3c0313dbb28 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-bbx8z.gb1.brightbox.com,UID:srv-bbx8z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:srv-bbx8z.gb1.brightbox.com,},FirstTimestamp:2025-09-13 01:36:32.622557992 +0000 UTC m=+0.914339464,LastTimestamp:2025-09-13 01:36:32.622557992 +0000 UTC m=+0.914339464,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-bbx8z.gb1.brightbox.com,}" Sep 13 01:36:37.469438 kubelet[2254]: E0913 01:36:37.469293 2254 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-bbx8z.gb1.brightbox.com\" not found" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.483335 kubelet[2254]: I0913 01:36:37.482954 2254 kubelet_node_status.go:78] "Successfully registered node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.492477 kubelet[2254]: E0913 01:36:37.491279 2254 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{srv-bbx8z.gb1.brightbox.com.1864b3c03320e27f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-bbx8z.gb1.brightbox.com,UID:srv-bbx8z.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:srv-bbx8z.gb1.brightbox.com,},FirstTimestamp:2025-09-13 01:36:32.654221951 +0000 UTC m=+0.946003419,LastTimestamp:2025-09-13 01:36:32.654221951 +0000 UTC m=+0.946003419,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-bbx8z.gb1.brightbox.com,}" Sep 13 01:36:37.540617 kubelet[2254]: I0913 01:36:37.540549 2254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.559353 kubelet[2254]: E0913 01:36:37.559011 2254 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.559353 kubelet[2254]: I0913 01:36:37.559061 2254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.562643 kubelet[2254]: E0913 01:36:37.561983 2254 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.563283 kubelet[2254]: I0913 01:36:37.563126 2254 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.567804 kubelet[2254]: E0913 01:36:37.567759 2254 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-bbx8z.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:37.611899 kubelet[2254]: I0913 01:36:37.611625 2254 apiserver.go:52] "Watching apiserver" Sep 13 01:36:37.639713 kubelet[2254]: I0913 01:36:37.639509 2254 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:36:39.682102 systemd[1]: Reloading requested from client PID 2543 ('systemctl') (unit 
session-9.scope)... Sep 13 01:36:39.683036 systemd[1]: Reloading... Sep 13 01:36:39.832428 zram_generator::config[2585]: No configuration found. Sep 13 01:36:40.019469 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 01:36:40.149952 systemd[1]: Reloading finished in 466 ms. Sep 13 01:36:40.223503 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:40.237060 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 01:36:40.237514 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:40.237655 systemd[1]: kubelet.service: Consumed 1.530s CPU time, 128.1M memory peak, 0B memory swap peak. Sep 13 01:36:40.249901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 01:36:40.571741 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 01:36:40.576275 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 01:36:40.678870 kubelet[2646]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:36:40.683393 kubelet[2646]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 13 01:36:40.683393 kubelet[2646]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 01:36:40.683393 kubelet[2646]: I0913 01:36:40.682564 2646 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 01:36:40.707323 kubelet[2646]: I0913 01:36:40.707264 2646 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 13 01:36:40.707323 kubelet[2646]: I0913 01:36:40.707307 2646 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 01:36:40.710306 kubelet[2646]: I0913 01:36:40.709775 2646 server.go:954] "Client rotation is on, will bootstrap in background" Sep 13 01:36:40.717307 kubelet[2646]: I0913 01:36:40.716953 2646 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 01:36:40.728046 kubelet[2646]: I0913 01:36:40.728007 2646 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 01:36:40.741281 kubelet[2646]: E0913 01:36:40.741219 2646 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 01:36:40.741281 kubelet[2646]: I0913 01:36:40.741274 2646 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 01:36:40.749171 kubelet[2646]: I0913 01:36:40.749147 2646 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 01:36:40.751108 kubelet[2646]: I0913 01:36:40.750544 2646 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 01:36:40.751108 kubelet[2646]: I0913 01:36:40.750638 2646 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-bbx8z.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 13 01:36:40.751108 kubelet[2646]: I0913 01:36:40.750929 2646 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 01:36:40.751108 kubelet[2646]: I0913 01:36:40.750946 2646 container_manager_linux.go:304] "Creating device plugin manager" Sep 13 01:36:40.753779 kubelet[2646]: I0913 01:36:40.753757 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:36:40.754204 kubelet[2646]: I0913 01:36:40.754184 2646 kubelet.go:446] "Attempting to sync node with API server" Sep 13 01:36:40.754423 kubelet[2646]: I0913 01:36:40.754404 2646 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 01:36:40.754586 kubelet[2646]: I0913 01:36:40.754568 2646 kubelet.go:352] "Adding apiserver pod source" Sep 13 01:36:40.754716 kubelet[2646]: I0913 01:36:40.754697 2646 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 01:36:40.760559 kubelet[2646]: I0913 01:36:40.760527 2646 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 01:36:40.776735 kubelet[2646]: I0913 01:36:40.776031 2646 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 01:36:40.778886 sudo[2661]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 13 01:36:40.783802 kubelet[2646]: I0913 01:36:40.779982 2646 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 13 01:36:40.783802 kubelet[2646]: I0913 01:36:40.780043 2646 server.go:1287] "Started kubelet" Sep 13 01:36:40.779511 sudo[2661]: pam_unix(sudo:session): session opened 
for user root(uid=0) by core(uid=0) Sep 13 01:36:40.785934 kubelet[2646]: I0913 01:36:40.785844 2646 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 01:36:40.796744 kubelet[2646]: I0913 01:36:40.795765 2646 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 01:36:40.820851 kubelet[2646]: I0913 01:36:40.820757 2646 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 01:36:40.826417 kubelet[2646]: I0913 01:36:40.824592 2646 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 13 01:36:40.827137 kubelet[2646]: I0913 01:36:40.827115 2646 server.go:479] "Adding debug handlers to kubelet server" Sep 13 01:36:40.829849 kubelet[2646]: I0913 01:36:40.829462 2646 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 13 01:36:40.830246 kubelet[2646]: I0913 01:36:40.830228 2646 reconciler.go:26] "Reconciler: start to sync state" Sep 13 01:36:40.832454 kubelet[2646]: I0913 01:36:40.830413 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 01:36:40.833153 kubelet[2646]: I0913 01:36:40.833083 2646 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 01:36:40.833587 kubelet[2646]: I0913 01:36:40.833564 2646 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 01:36:40.836575 kubelet[2646]: I0913 01:36:40.835938 2646 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 01:36:40.858621 kubelet[2646]: I0913 01:36:40.858543 2646 factory.go:221] Registration of the containerd container factory successfully Sep 13 01:36:40.858958 kubelet[2646]: I0913 01:36:40.858936 2646 factory.go:221] Registration of the systemd container factory successfully Sep 13 01:36:40.860560 kubelet[2646]: I0913 01:36:40.860512 2646 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 13 01:36:40.860718 kubelet[2646]: I0913 01:36:40.860619 2646 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 13 01:36:40.860718 kubelet[2646]: I0913 01:36:40.860669 2646 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 13 01:36:40.860718 kubelet[2646]: I0913 01:36:40.860690 2646 kubelet.go:2382] "Starting kubelet main sync loop" Sep 13 01:36:40.860860 kubelet[2646]: E0913 01:36:40.860821 2646 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 01:36:40.863997 kubelet[2646]: E0913 01:36:40.862652 2646 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 01:36:40.961170 kubelet[2646]: E0913 01:36:40.960956 2646 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009507 2646 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009537 2646 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009587 2646 state_mem.go:36] "Initialized new in-memory state store" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009868 2646 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009893 2646 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009938 2646 policy_none.go:49] "None policy: Start" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009969 2646 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.009999 2646 state_mem.go:35] "Initializing new in-memory state store" Sep 13 01:36:41.010336 kubelet[2646]: I0913 01:36:41.010160 2646 state_mem.go:75] "Updated machine memory state" Sep 13 01:36:41.023440 kubelet[2646]: I0913 01:36:41.022783 2646 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 01:36:41.023440 kubelet[2646]: I0913 01:36:41.023107 2646 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 01:36:41.023440 kubelet[2646]: I0913 01:36:41.023130 2646 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 01:36:41.028150 kubelet[2646]: I0913 01:36:41.028129 2646 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 01:36:41.032614 kubelet[2646]: E0913 01:36:41.032581 2646 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 13 01:36:41.159164 kubelet[2646]: I0913 01:36:41.158960 2646 kubelet_node_status.go:75] "Attempting to register node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.162593 kubelet[2646]: I0913 01:36:41.162509 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.165794 kubelet[2646]: I0913 01:36:41.165142 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.165794 kubelet[2646]: I0913 01:36:41.165561 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.180949 kubelet[2646]: W0913 01:36:41.180908 2646 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:36:41.182677 kubelet[2646]: W0913 01:36:41.182641 2646 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:36:41.188154 kubelet[2646]: W0913 01:36:41.188118 2646 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:36:41.191178 kubelet[2646]: I0913 01:36:41.191146 2646 kubelet_node_status.go:124] "Node was previously registered" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.191287 kubelet[2646]: I0913 01:36:41.191261 2646 kubelet_node_status.go:78] "Successfully registered node" node="srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.234777 kubelet[2646]: I0913 01:36:41.234704 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-k8s-certs\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235030 kubelet[2646]: I0913 01:36:41.234783 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-k8s-certs\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235030 kubelet[2646]: I0913 01:36:41.234843 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-kubeconfig\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235030 kubelet[2646]: I0913 01:36:41.234871 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235030 kubelet[2646]: I0913 
01:36:41.234938 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-ca-certs\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235214 kubelet[2646]: I0913 01:36:41.234964 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-ca-certs\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235214 kubelet[2646]: I0913 01:36:41.235153 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fa81a883f454eff649105381fbe6f90-flexvolume-dir\") pod \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" (UID: \"0fa81a883f454eff649105381fbe6f90\") " pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235322 kubelet[2646]: I0913 01:36:41.235233 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df618bcac0f01f038a9daafb8d7e8e8b-kubeconfig\") pod \"kube-scheduler-srv-bbx8z.gb1.brightbox.com\" (UID: \"df618bcac0f01f038a9daafb8d7e8e8b\") " pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.235322 kubelet[2646]: I0913 01:36:41.235304 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/666e930dc6f634320d83a1f8e8f0cdd2-usr-share-ca-certificates\") pod \"kube-apiserver-srv-bbx8z.gb1.brightbox.com\" (UID: \"666e930dc6f634320d83a1f8e8f0cdd2\") " pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.658549 sudo[2661]: pam_unix(sudo:session): session closed for user root Sep 13 01:36:41.756318 kubelet[2646]: I0913 01:36:41.755861 2646 apiserver.go:52] "Watching apiserver" Sep 13 01:36:41.830953 kubelet[2646]: I0913 01:36:41.830860 2646 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 13 01:36:41.937172 kubelet[2646]: I0913 01:36:41.935397 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.937172 kubelet[2646]: I0913 01:36:41.936171 2646 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.962425 kubelet[2646]: W0913 01:36:41.962101 2646 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 01:36:41.963645 kubelet[2646]: E0913 01:36:41.963080 2646 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-bbx8z.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:41.963645 kubelet[2646]: W0913 01:36:41.963404 2646 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] 
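The reconciler entries above show the kubelet attaching hostPath volumes (k8s-certs, ca-certs, kubeconfig, flexvolume-dir) declared by the static control-plane pod manifests it loaded from /etc/kubernetes/manifests, the static pod path logged earlier. The following is a minimal illustrative sketch, not part of this log, of how those declarations could be read back from the manifests themselves; it assumes PyYAML is installed and the manifest directory is readable.

# Illustrative sketch: list the hostPath volumes declared by the static pod
# manifests whose volumes the kubelet above is verifying.
# Assumption: PyYAML is installed and /etc/kubernetes/manifests is readable.
import glob
import yaml

for manifest in sorted(glob.glob("/etc/kubernetes/manifests/*.yaml")):
    with open(manifest) as f:
        pod = yaml.safe_load(f)
    name = pod["metadata"]["name"]
    for vol in pod["spec"].get("volumes", []):
        host_path = vol.get("hostPath", {}).get("path")
        if host_path:
            print(f"{name}: volume {vol['name']!r} -> hostPath {host_path}")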
Sep 13 01:36:41.963645 kubelet[2646]: E0913 01:36:41.963462 2646 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-bbx8z.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" Sep 13 01:36:42.004395 kubelet[2646]: I0913 01:36:42.003212 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-bbx8z.gb1.brightbox.com" podStartSLOduration=1.003148498 podStartE2EDuration="1.003148498s" podCreationTimestamp="2025-09-13 01:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:41.992802636 +0000 UTC m=+1.406737127" watchObservedRunningTime="2025-09-13 01:36:42.003148498 +0000 UTC m=+1.417082993" Sep 13 01:36:42.018690 kubelet[2646]: I0913 01:36:42.017020 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-bbx8z.gb1.brightbox.com" podStartSLOduration=1.01699469 podStartE2EDuration="1.01699469s" podCreationTimestamp="2025-09-13 01:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:42.00432906 +0000 UTC m=+1.418263552" watchObservedRunningTime="2025-09-13 01:36:42.01699469 +0000 UTC m=+1.430929176" Sep 13 01:36:42.045174 kubelet[2646]: I0913 01:36:42.045098 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-bbx8z.gb1.brightbox.com" podStartSLOduration=1.045031538 podStartE2EDuration="1.045031538s" podCreationTimestamp="2025-09-13 01:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:42.020580304 +0000 UTC m=+1.434514814" watchObservedRunningTime="2025-09-13 01:36:42.045031538 +0000 UTC m=+1.458966017" Sep 13 01:36:43.392254 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 13 01:36:43.541601 sshd[1738]: pam_unix(sshd:session): session closed for user core Sep 13 01:36:43.552886 systemd[1]: sshd@6-10.230.67.162:22-139.178.68.195:36560.service: Deactivated successfully. Sep 13 01:36:43.558498 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 01:36:43.559035 systemd[1]: session-9.scope: Consumed 6.620s CPU time, 142.8M memory peak, 0B memory swap peak. Sep 13 01:36:43.560136 systemd-logind[1482]: Session 9 logged out. Waiting for processes to exit. Sep 13 01:36:43.562759 systemd-logind[1482]: Removed session 9. Sep 13 01:36:44.205807 kubelet[2646]: I0913 01:36:44.205675 2646 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 01:36:44.208202 containerd[1501]: time="2025-09-13T01:36:44.208115492Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 01:36:44.209896 kubelet[2646]: I0913 01:36:44.208572 2646 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 01:36:45.006092 systemd[1]: Created slice kubepods-besteffort-podc204b06c_aa6e_489f_821b_977bd3883d28.slice - libcontainer container kubepods-besteffort-podc204b06c_aa6e_489f_821b_977bd3883d28.slice. Sep 13 01:36:45.029112 systemd[1]: Created slice kubepods-burstable-pod75cc7f28_73ff_48a9_abf4_badde281b764.slice - libcontainer container kubepods-burstable-pod75cc7f28_73ff_48a9_abf4_badde281b764.slice. 
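The pod_startup_latency_tracker entries above derive podStartSLOduration from the gap between podCreationTimestamp and watchObservedRunningTime. Below is a small stdlib-only sketch that reproduces the 1.003148498s figure for the kube-scheduler static pod from the two timestamps printed in the log; the parser is an assumption about the printed format, not kubelet code.

# Stdlib-only sketch: reproduce podStartSLOduration for the kube-scheduler
# static pod from the timestamps in the log entry above.
from datetime import datetime

def parse_k8s_time(stamp: str) -> float:
    """Parse '2025-09-13 01:36:42.003148498 +0000 UTC [m=+...]' into epoch seconds."""
    stamp = stamp.split(" m=")[0].replace(" UTC", "").strip()
    date_part, time_part, tz_part = stamp.split()
    frac = 0.0
    if "." in time_part:
        time_part, frac_digits = time_part.split(".")
        frac = float("0." + frac_digits)  # handles the 9-digit fractions the kubelet prints
    base = datetime.strptime(f"{date_part} {time_part} {tz_part}", "%Y-%m-%d %H:%M:%S %z")
    return base.timestamp() + frac

created = parse_k8s_time("2025-09-13 01:36:41 +0000 UTC")
observed = parse_k8s_time("2025-09-13 01:36:42.003148498 +0000 UTC m=+1.417082993")
print(f"podStartSLOduration ≈ {observed - created:.9f}s")  # ≈ 1.003148498s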
Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071141 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-hostproc\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071212 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c204b06c-aa6e-489f-821b-977bd3883d28-kube-proxy\") pod \"kube-proxy-n5r6n\" (UID: \"c204b06c-aa6e-489f-821b-977bd3883d28\") " pod="kube-system/kube-proxy-n5r6n" Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071240 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-cgroup\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071271 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c204b06c-aa6e-489f-821b-977bd3883d28-xtables-lock\") pod \"kube-proxy-n5r6n\" (UID: \"c204b06c-aa6e-489f-821b-977bd3883d28\") " pod="kube-system/kube-proxy-n5r6n" Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071306 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzlxq\" (UniqueName: \"kubernetes.io/projected/c204b06c-aa6e-489f-821b-977bd3883d28-kube-api-access-bzlxq\") pod \"kube-proxy-n5r6n\" (UID: \"c204b06c-aa6e-489f-821b-977bd3883d28\") " pod="kube-system/kube-proxy-n5r6n" Sep 13 01:36:45.071548 kubelet[2646]: I0913 01:36:45.071335 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-run\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.072031 kubelet[2646]: I0913 01:36:45.071359 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-bpf-maps\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.072031 kubelet[2646]: I0913 01:36:45.071419 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c204b06c-aa6e-489f-821b-977bd3883d28-lib-modules\") pod \"kube-proxy-n5r6n\" (UID: \"c204b06c-aa6e-489f-821b-977bd3883d28\") " pod="kube-system/kube-proxy-n5r6n" Sep 13 01:36:45.072031 kubelet[2646]: I0913 01:36:45.071484 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cni-path\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172445 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-hubble-tls\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172507 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-net\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172547 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-kernel\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172573 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-lib-modules\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172637 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-xtables-lock\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.173781 kubelet[2646]: I0913 01:36:45.172689 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75cc7f28-73ff-48a9-abf4-badde281b764-clustermesh-secrets\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.175953 kubelet[2646]: I0913 01:36:45.172714 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-config-path\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.175953 kubelet[2646]: I0913 01:36:45.172754 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-etc-cni-netd\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.175953 kubelet[2646]: I0913 01:36:45.172787 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdmzm\" (UniqueName: \"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-kube-api-access-rdmzm\") pod \"cilium-kp2w6\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " pod="kube-system/cilium-kp2w6" Sep 13 01:36:45.273746 kubelet[2646]: I0913 01:36:45.273552 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pr2fb\" (UID: 
\"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\") " pod="kube-system/cilium-operator-6c4d7847fc-pr2fb" Sep 13 01:36:45.276041 kubelet[2646]: I0913 01:36:45.275117 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdlk\" (UniqueName: \"kubernetes.io/projected/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-kube-api-access-czdlk\") pod \"cilium-operator-6c4d7847fc-pr2fb\" (UID: \"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\") " pod="kube-system/cilium-operator-6c4d7847fc-pr2fb" Sep 13 01:36:45.302065 systemd[1]: Created slice kubepods-besteffort-podf3e7d94a_3af0_4606_9e8e_6bcbc68d077e.slice - libcontainer container kubepods-besteffort-podf3e7d94a_3af0_4606_9e8e_6bcbc68d077e.slice. Sep 13 01:36:45.323221 containerd[1501]: time="2025-09-13T01:36:45.323059756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5r6n,Uid:c204b06c-aa6e-489f-821b-977bd3883d28,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:45.413204 containerd[1501]: time="2025-09-13T01:36:45.411484167Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:45.413204 containerd[1501]: time="2025-09-13T01:36:45.412699584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:45.413204 containerd[1501]: time="2025-09-13T01:36:45.412740709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.413204 containerd[1501]: time="2025-09-13T01:36:45.413029809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.447671 systemd[1]: Started cri-containerd-76086be7f46d98cee1f85cca8189e72bc4aafa6a7e4b55514b05ab539251eb75.scope - libcontainer container 76086be7f46d98cee1f85cca8189e72bc4aafa6a7e4b55514b05ab539251eb75. Sep 13 01:36:45.483328 containerd[1501]: time="2025-09-13T01:36:45.483265015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n5r6n,Uid:c204b06c-aa6e-489f-821b-977bd3883d28,Namespace:kube-system,Attempt:0,} returns sandbox id \"76086be7f46d98cee1f85cca8189e72bc4aafa6a7e4b55514b05ab539251eb75\"" Sep 13 01:36:45.489283 containerd[1501]: time="2025-09-13T01:36:45.489249016Z" level=info msg="CreateContainer within sandbox \"76086be7f46d98cee1f85cca8189e72bc4aafa6a7e4b55514b05ab539251eb75\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 01:36:45.507703 containerd[1501]: time="2025-09-13T01:36:45.507631355Z" level=info msg="CreateContainer within sandbox \"76086be7f46d98cee1f85cca8189e72bc4aafa6a7e4b55514b05ab539251eb75\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebe4e7a1de9706772552a58f3cf1ca30b446571153bb66204df9b02e18087e37\"" Sep 13 01:36:45.509432 containerd[1501]: time="2025-09-13T01:36:45.508577930Z" level=info msg="StartContainer for \"ebe4e7a1de9706772552a58f3cf1ca30b446571153bb66204df9b02e18087e37\"" Sep 13 01:36:45.542570 systemd[1]: Started cri-containerd-ebe4e7a1de9706772552a58f3cf1ca30b446571153bb66204df9b02e18087e37.scope - libcontainer container ebe4e7a1de9706772552a58f3cf1ca30b446571153bb66204df9b02e18087e37. 
Sep 13 01:36:45.583710 containerd[1501]: time="2025-09-13T01:36:45.583663048Z" level=info msg="StartContainer for \"ebe4e7a1de9706772552a58f3cf1ca30b446571153bb66204df9b02e18087e37\" returns successfully" Sep 13 01:36:45.615483 containerd[1501]: time="2025-09-13T01:36:45.615423311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pr2fb,Uid:f3e7d94a-3af0-4606-9e8e-6bcbc68d077e,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:45.635971 containerd[1501]: time="2025-09-13T01:36:45.635531807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp2w6,Uid:75cc7f28-73ff-48a9-abf4-badde281b764,Namespace:kube-system,Attempt:0,}" Sep 13 01:36:45.656741 containerd[1501]: time="2025-09-13T01:36:45.656454524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:45.656741 containerd[1501]: time="2025-09-13T01:36:45.656568431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:45.657114 containerd[1501]: time="2025-09-13T01:36:45.656704885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.657114 containerd[1501]: time="2025-09-13T01:36:45.656882251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.702688 systemd[1]: Started cri-containerd-bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4.scope - libcontainer container bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4. Sep 13 01:36:45.717211 containerd[1501]: time="2025-09-13T01:36:45.716110261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:36:45.717211 containerd[1501]: time="2025-09-13T01:36:45.716216504Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:36:45.717211 containerd[1501]: time="2025-09-13T01:36:45.716235506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.717804 containerd[1501]: time="2025-09-13T01:36:45.717533038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:36:45.759768 systemd[1]: Started cri-containerd-629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0.scope - libcontainer container 629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0. 
Sep 13 01:36:45.845072 containerd[1501]: time="2025-09-13T01:36:45.844225946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pr2fb,Uid:f3e7d94a-3af0-4606-9e8e-6bcbc68d077e,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\"" Sep 13 01:36:45.851327 containerd[1501]: time="2025-09-13T01:36:45.850835684Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 13 01:36:45.852765 containerd[1501]: time="2025-09-13T01:36:45.852718428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kp2w6,Uid:75cc7f28-73ff-48a9-abf4-badde281b764,Namespace:kube-system,Attempt:0,} returns sandbox id \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\"" Sep 13 01:36:47.273417 kubelet[2646]: I0913 01:36:47.273289 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n5r6n" podStartSLOduration=3.273227569 podStartE2EDuration="3.273227569s" podCreationTimestamp="2025-09-13 01:36:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:36:45.971270608 +0000 UTC m=+5.385205098" watchObservedRunningTime="2025-09-13 01:36:47.273227569 +0000 UTC m=+6.687162067" Sep 13 01:36:47.833747 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1380852668.mount: Deactivated successfully. Sep 13 01:36:48.591411 containerd[1501]: time="2025-09-13T01:36:48.590459521Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:48.593115 containerd[1501]: time="2025-09-13T01:36:48.593066925Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 13 01:36:48.593902 containerd[1501]: time="2025-09-13T01:36:48.593872128Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:48.597300 containerd[1501]: time="2025-09-13T01:36:48.597267302Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.74637059s" Sep 13 01:36:48.597493 containerd[1501]: time="2025-09-13T01:36:48.597462840Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 13 01:36:48.600399 containerd[1501]: time="2025-09-13T01:36:48.600195349Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 13 01:36:48.601734 containerd[1501]: time="2025-09-13T01:36:48.601618874Z" level=info msg="CreateContainer within sandbox 
\"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 13 01:36:48.636155 containerd[1501]: time="2025-09-13T01:36:48.636074309Z" level=info msg="CreateContainer within sandbox \"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\"" Sep 13 01:36:48.640443 containerd[1501]: time="2025-09-13T01:36:48.639186814Z" level=info msg="StartContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\"" Sep 13 01:36:48.699594 systemd[1]: Started cri-containerd-f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f.scope - libcontainer container f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f. Sep 13 01:36:48.741137 containerd[1501]: time="2025-09-13T01:36:48.740918322Z" level=info msg="StartContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" returns successfully" Sep 13 01:36:51.557470 kubelet[2646]: I0913 01:36:51.557174 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pr2fb" podStartSLOduration=3.806337173 podStartE2EDuration="6.557111835s" podCreationTimestamp="2025-09-13 01:36:45 +0000 UTC" firstStartedPulling="2025-09-13 01:36:45.848061353 +0000 UTC m=+5.261995827" lastFinishedPulling="2025-09-13 01:36:48.598836005 +0000 UTC m=+8.012770489" observedRunningTime="2025-09-13 01:36:48.986436643 +0000 UTC m=+8.400371139" watchObservedRunningTime="2025-09-13 01:36:51.557111835 +0000 UTC m=+10.971046321" Sep 13 01:36:55.845889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3930666453.mount: Deactivated successfully. 
Sep 13 01:36:58.930118 containerd[1501]: time="2025-09-13T01:36:58.929814015Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:58.932249 containerd[1501]: time="2025-09-13T01:36:58.931767151Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 13 01:36:58.933424 containerd[1501]: time="2025-09-13T01:36:58.932815522Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 01:36:58.935619 containerd[1501]: time="2025-09-13T01:36:58.935346157Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.335090467s" Sep 13 01:36:58.935619 containerd[1501]: time="2025-09-13T01:36:58.935449758Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 13 01:36:58.941413 containerd[1501]: time="2025-09-13T01:36:58.940592336Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:36:59.034256 containerd[1501]: time="2025-09-13T01:36:59.034179858Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\"" Sep 13 01:36:59.036107 containerd[1501]: time="2025-09-13T01:36:59.036061336Z" level=info msg="StartContainer for \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\"" Sep 13 01:36:59.266642 systemd[1]: Started cri-containerd-4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb.scope - libcontainer container 4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb. Sep 13 01:36:59.319153 containerd[1501]: time="2025-09-13T01:36:59.319103109Z" level=info msg="StartContainer for \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\" returns successfully" Sep 13 01:36:59.341777 systemd[1]: cri-containerd-4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb.scope: Deactivated successfully. 
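The entries above record the quay.io/cilium/cilium pull: 166730503 bytes read in 10.335090467s (the quoted "size" fields report the image size and differ slightly from the transferred bytes). A trivial back-of-the-envelope sketch with those two figures copied from the log:

# Figures copied from the containerd entries above for the cilium image pull.
bytes_read = 166_730_503        # "active requests=0, bytes read=166730503"
pull_seconds = 10.335090467     # "... in 10.335090467s"

rate = bytes_read / pull_seconds
print(f"effective pull rate ≈ {rate / 1e6:.1f} MB/s ({rate / 2**20:.1f} MiB/s)")
# ≈ 16.1 MB/s (≈ 15.4 MiB/s)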
Sep 13 01:36:59.543867 containerd[1501]: time="2025-09-13T01:36:59.536339040Z" level=info msg="shim disconnected" id=4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb namespace=k8s.io Sep 13 01:36:59.543867 containerd[1501]: time="2025-09-13T01:36:59.543737817Z" level=warning msg="cleaning up after shim disconnected" id=4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb namespace=k8s.io Sep 13 01:36:59.543867 containerd[1501]: time="2025-09-13T01:36:59.543762366Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:37:00.021748 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb-rootfs.mount: Deactivated successfully. Sep 13 01:37:00.317695 containerd[1501]: time="2025-09-13T01:37:00.317547066Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:37:00.342990 containerd[1501]: time="2025-09-13T01:37:00.341402153Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\"" Sep 13 01:37:00.344245 containerd[1501]: time="2025-09-13T01:37:00.344008139Z" level=info msg="StartContainer for \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\"" Sep 13 01:37:00.409603 systemd[1]: Started cri-containerd-afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f.scope - libcontainer container afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f. Sep 13 01:37:00.448324 containerd[1501]: time="2025-09-13T01:37:00.448170208Z" level=info msg="StartContainer for \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\" returns successfully" Sep 13 01:37:00.469589 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 13 01:37:00.470007 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:37:00.470152 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 13 01:37:00.477798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 01:37:00.478141 systemd[1]: cri-containerd-afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f.scope: Deactivated successfully. Sep 13 01:37:00.521690 containerd[1501]: time="2025-09-13T01:37:00.521414436Z" level=info msg="shim disconnected" id=afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f namespace=k8s.io Sep 13 01:37:00.521690 containerd[1501]: time="2025-09-13T01:37:00.521510902Z" level=warning msg="cleaning up after shim disconnected" id=afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f namespace=k8s.io Sep 13 01:37:00.521690 containerd[1501]: time="2025-09-13T01:37:00.521526465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:37:00.556816 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 01:37:01.022008 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f-rootfs.mount: Deactivated successfully. 
Sep 13 01:37:01.326641 containerd[1501]: time="2025-09-13T01:37:01.325982278Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:37:01.376837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2581196107.mount: Deactivated successfully. Sep 13 01:37:01.387615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691896548.mount: Deactivated successfully. Sep 13 01:37:01.397940 containerd[1501]: time="2025-09-13T01:37:01.397262203Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\"" Sep 13 01:37:01.400522 containerd[1501]: time="2025-09-13T01:37:01.400478580Z" level=info msg="StartContainer for \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\"" Sep 13 01:37:01.446626 systemd[1]: Started cri-containerd-fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682.scope - libcontainer container fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682. Sep 13 01:37:01.486341 containerd[1501]: time="2025-09-13T01:37:01.486291189Z" level=info msg="StartContainer for \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\" returns successfully" Sep 13 01:37:01.493202 systemd[1]: cri-containerd-fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682.scope: Deactivated successfully. Sep 13 01:37:01.526135 containerd[1501]: time="2025-09-13T01:37:01.525851307Z" level=info msg="shim disconnected" id=fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682 namespace=k8s.io Sep 13 01:37:01.526135 containerd[1501]: time="2025-09-13T01:37:01.525913716Z" level=warning msg="cleaning up after shim disconnected" id=fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682 namespace=k8s.io Sep 13 01:37:01.526135 containerd[1501]: time="2025-09-13T01:37:01.525927946Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:37:02.332696 containerd[1501]: time="2025-09-13T01:37:02.332621648Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:37:02.359862 containerd[1501]: time="2025-09-13T01:37:02.359812468Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\"" Sep 13 01:37:02.360929 containerd[1501]: time="2025-09-13T01:37:02.360894890Z" level=info msg="StartContainer for \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\"" Sep 13 01:37:02.414592 systemd[1]: Started cri-containerd-74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb.scope - libcontainer container 74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb. Sep 13 01:37:02.452231 systemd[1]: cri-containerd-74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb.scope: Deactivated successfully. 
Sep 13 01:37:02.455166 containerd[1501]: time="2025-09-13T01:37:02.454945404Z" level=info msg="StartContainer for \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\" returns successfully" Sep 13 01:37:02.484028 containerd[1501]: time="2025-09-13T01:37:02.483742480Z" level=info msg="shim disconnected" id=74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb namespace=k8s.io Sep 13 01:37:02.484028 containerd[1501]: time="2025-09-13T01:37:02.483843333Z" level=warning msg="cleaning up after shim disconnected" id=74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb namespace=k8s.io Sep 13 01:37:02.484028 containerd[1501]: time="2025-09-13T01:37:02.483863133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:37:03.022135 systemd[1]: run-containerd-runc-k8s.io-74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb-runc.LfMAGy.mount: Deactivated successfully. Sep 13 01:37:03.022324 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb-rootfs.mount: Deactivated successfully. Sep 13 01:37:03.336730 containerd[1501]: time="2025-09-13T01:37:03.336594302Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:37:03.366309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192051967.mount: Deactivated successfully. Sep 13 01:37:03.367270 containerd[1501]: time="2025-09-13T01:37:03.367213384Z" level=info msg="CreateContainer within sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\"" Sep 13 01:37:03.373753 containerd[1501]: time="2025-09-13T01:37:03.371546252Z" level=info msg="StartContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\"" Sep 13 01:37:03.426618 systemd[1]: Started cri-containerd-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d.scope - libcontainer container 17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d. 
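The lifecycle traced above shows cilium's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) each being created, started, and exiting before the long-running cilium-agent container starts. The sketch below is one way those results could be read back for the cilium-kp2w6 pod named in the log; it assumes the official kubernetes Python client is installed and a kubeconfig with access to kube-system is available, and it is illustrative rather than derived from the log itself.

# Sketch (assumes the `kubernetes` Python client and a working kubeconfig):
# inspect the init containers of the cilium-kp2w6 pod whose startup is traced above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name="cilium-kp2w6", namespace="kube-system")
for status in pod.status.init_container_statuses or []:
    terminated = status.state.terminated
    exit_code = terminated.exit_code if terminated else "still running"
    print(f"init container {status.name}: restarts={status.restart_count}, exit={exit_code}")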
Sep 13 01:37:03.468027 containerd[1501]: time="2025-09-13T01:37:03.467950357Z" level=info msg="StartContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" returns successfully" Sep 13 01:37:03.774480 kubelet[2646]: I0913 01:37:03.774220 2646 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 13 01:37:03.830530 kubelet[2646]: I0913 01:37:03.829824 2646 status_manager.go:890] "Failed to get status for pod" podUID="25472ae4-10be-4c49-9ef0-2060121ab478" pod="kube-system/coredns-668d6bf9bc-m2wfj" err="pods \"coredns-668d6bf9bc-m2wfj\" is forbidden: User \"system:node:srv-bbx8z.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-bbx8z.gb1.brightbox.com' and this object" Sep 13 01:37:03.830530 kubelet[2646]: W0913 01:37:03.830300 2646 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-bbx8z.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-bbx8z.gb1.brightbox.com' and this object Sep 13 01:37:03.832344 kubelet[2646]: E0913 01:37:03.832261 2646 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-bbx8z.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-bbx8z.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 13 01:37:03.843216 systemd[1]: Created slice kubepods-burstable-pod25472ae4_10be_4c49_9ef0_2060121ab478.slice - libcontainer container kubepods-burstable-pod25472ae4_10be_4c49_9ef0_2060121ab478.slice. Sep 13 01:37:03.857757 systemd[1]: Created slice kubepods-burstable-pod18986b0f_1882_4896_8927_2673252cfe29.slice - libcontainer container kubepods-burstable-pod18986b0f_1882_4896_8927_2673252cfe29.slice. 
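Immediately above, the kubelet reports "Fast updating node status as it just became ready", at which point the node's Ready condition flips and the coredns pods can be placed on it. A minimal sketch, under the same kubernetes-client and kubeconfig assumptions as the previous example, that prints the conditions for the node name appearing in the log:

# Sketch (assumes the `kubernetes` Python client and a working kubeconfig):
# print the conditions of the node whose readiness transition is logged above.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

node = v1.read_node(name="srv-bbx8z.gb1.brightbox.com")
for cond in node.status.conditions:
    print(f"{cond.type}: {cond.status} ({cond.reason})")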
Sep 13 01:37:03.947180 kubelet[2646]: I0913 01:37:03.946705 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqw5x\" (UniqueName: \"kubernetes.io/projected/18986b0f-1882-4896-8927-2673252cfe29-kube-api-access-gqw5x\") pod \"coredns-668d6bf9bc-nb9xn\" (UID: \"18986b0f-1882-4896-8927-2673252cfe29\") " pod="kube-system/coredns-668d6bf9bc-nb9xn" Sep 13 01:37:03.947180 kubelet[2646]: I0913 01:37:03.946770 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/25472ae4-10be-4c49-9ef0-2060121ab478-config-volume\") pod \"coredns-668d6bf9bc-m2wfj\" (UID: \"25472ae4-10be-4c49-9ef0-2060121ab478\") " pod="kube-system/coredns-668d6bf9bc-m2wfj" Sep 13 01:37:03.947180 kubelet[2646]: I0913 01:37:03.946848 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dsg\" (UniqueName: \"kubernetes.io/projected/25472ae4-10be-4c49-9ef0-2060121ab478-kube-api-access-q5dsg\") pod \"coredns-668d6bf9bc-m2wfj\" (UID: \"25472ae4-10be-4c49-9ef0-2060121ab478\") " pod="kube-system/coredns-668d6bf9bc-m2wfj" Sep 13 01:37:03.947180 kubelet[2646]: I0913 01:37:03.946889 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18986b0f-1882-4896-8927-2673252cfe29-config-volume\") pod \"coredns-668d6bf9bc-nb9xn\" (UID: \"18986b0f-1882-4896-8927-2673252cfe29\") " pod="kube-system/coredns-668d6bf9bc-nb9xn" Sep 13 01:37:04.022266 systemd[1]: run-containerd-runc-k8s.io-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d-runc.G9tJFo.mount: Deactivated successfully. Sep 13 01:37:04.374299 kubelet[2646]: I0913 01:37:04.373231 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kp2w6" podStartSLOduration=7.291185036 podStartE2EDuration="20.37318874s" podCreationTimestamp="2025-09-13 01:36:44 +0000 UTC" firstStartedPulling="2025-09-13 01:36:45.855098014 +0000 UTC m=+5.269032486" lastFinishedPulling="2025-09-13 01:36:58.937101719 +0000 UTC m=+18.351036190" observedRunningTime="2025-09-13 01:37:04.372180849 +0000 UTC m=+23.786115337" watchObservedRunningTime="2025-09-13 01:37:04.37318874 +0000 UTC m=+23.787123226" Sep 13 01:37:05.048795 kubelet[2646]: E0913 01:37:05.048616 2646 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 13 01:37:05.049864 kubelet[2646]: E0913 01:37:05.048823 2646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/18986b0f-1882-4896-8927-2673252cfe29-config-volume podName:18986b0f-1882-4896-8927-2673252cfe29 nodeName:}" failed. No retries permitted until 2025-09-13 01:37:05.54878351 +0000 UTC m=+24.962717989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/18986b0f-1882-4896-8927-2673252cfe29-config-volume") pod "coredns-668d6bf9bc-nb9xn" (UID: "18986b0f-1882-4896-8927-2673252cfe29") : failed to sync configmap cache: timed out waiting for the condition Sep 13 01:37:05.049864 kubelet[2646]: E0913 01:37:05.048616 2646 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 13 01:37:05.049864 kubelet[2646]: E0913 01:37:05.049213 2646 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/25472ae4-10be-4c49-9ef0-2060121ab478-config-volume podName:25472ae4-10be-4c49-9ef0-2060121ab478 nodeName:}" failed. No retries permitted until 2025-09-13 01:37:05.549199938 +0000 UTC m=+24.963134409 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/25472ae4-10be-4c49-9ef0-2060121ab478-config-volume") pod "coredns-668d6bf9bc-m2wfj" (UID: "25472ae4-10be-4c49-9ef0-2060121ab478") : failed to sync configmap cache: timed out waiting for the condition Sep 13 01:37:05.653192 containerd[1501]: time="2025-09-13T01:37:05.653090534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2wfj,Uid:25472ae4-10be-4c49-9ef0-2060121ab478,Namespace:kube-system,Attempt:0,}" Sep 13 01:37:05.662173 containerd[1501]: time="2025-09-13T01:37:05.661811341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nb9xn,Uid:18986b0f-1882-4896-8927-2673252cfe29,Namespace:kube-system,Attempt:0,}" Sep 13 01:37:06.018117 systemd-networkd[1415]: cilium_host: Link UP Sep 13 01:37:06.018932 systemd-networkd[1415]: cilium_net: Link UP Sep 13 01:37:06.018939 systemd-networkd[1415]: cilium_net: Gained carrier Sep 13 01:37:06.019682 systemd-networkd[1415]: cilium_host: Gained carrier Sep 13 01:37:06.021127 systemd-networkd[1415]: cilium_net: Gained IPv6LL Sep 13 01:37:06.186612 systemd-networkd[1415]: cilium_vxlan: Link UP Sep 13 01:37:06.188752 systemd-networkd[1415]: cilium_vxlan: Gained carrier Sep 13 01:37:06.710874 systemd-networkd[1415]: cilium_host: Gained IPv6LL Sep 13 01:37:06.750745 kernel: NET: Registered PF_ALG protocol family Sep 13 01:37:07.734562 systemd-networkd[1415]: cilium_vxlan: Gained IPv6LL Sep 13 01:37:07.774971 systemd-networkd[1415]: lxc_health: Link UP Sep 13 01:37:07.787576 systemd-networkd[1415]: lxc_health: Gained carrier Sep 13 01:37:08.271773 systemd-networkd[1415]: lxc017b25a5cd44: Link UP Sep 13 01:37:08.278562 kernel: eth0: renamed from tmp9c1fd Sep 13 01:37:08.287556 systemd-networkd[1415]: lxc39772aa251b5: Link UP Sep 13 01:37:08.293416 systemd-networkd[1415]: lxc017b25a5cd44: Gained carrier Sep 13 01:37:08.302911 kernel: eth0: renamed from tmp8bcb7 Sep 13 01:37:08.310760 systemd-networkd[1415]: lxc39772aa251b5: Gained carrier Sep 13 01:37:09.142629 systemd-networkd[1415]: lxc_health: Gained IPv6LL Sep 13 01:37:09.399607 systemd-networkd[1415]: lxc017b25a5cd44: Gained IPv6LL Sep 13 01:37:10.230615 systemd-networkd[1415]: lxc39772aa251b5: Gained IPv6LL Sep 13 01:37:13.848031 containerd[1501]: time="2025-09-13T01:37:13.844864158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:37:13.848031 containerd[1501]: time="2025-09-13T01:37:13.845147944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:37:13.848031 containerd[1501]: time="2025-09-13T01:37:13.845873365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:37:13.848031 containerd[1501]: time="2025-09-13T01:37:13.846426024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:37:13.920454 containerd[1501]: time="2025-09-13T01:37:13.919893471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:37:13.920454 containerd[1501]: time="2025-09-13T01:37:13.919980324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:37:13.920454 containerd[1501]: time="2025-09-13T01:37:13.920014351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:37:13.920454 containerd[1501]: time="2025-09-13T01:37:13.920130483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:37:13.955785 systemd[1]: Started cri-containerd-9c1fd6f01fd466dc460606d3fc4b607b976594c61c37b26c51f8b73e855fef09.scope - libcontainer container 9c1fd6f01fd466dc460606d3fc4b607b976594c61c37b26c51f8b73e855fef09. Sep 13 01:37:13.990563 systemd[1]: Started cri-containerd-8bcb7fb524c92e88ed27b993d96f098c2fde844810abe86917c57e768a35643b.scope - libcontainer container 8bcb7fb524c92e88ed27b993d96f098c2fde844810abe86917c57e768a35643b. Sep 13 01:37:14.099509 containerd[1501]: time="2025-09-13T01:37:14.098826946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-m2wfj,Uid:25472ae4-10be-4c49-9ef0-2060121ab478,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c1fd6f01fd466dc460606d3fc4b607b976594c61c37b26c51f8b73e855fef09\"" Sep 13 01:37:14.109074 containerd[1501]: time="2025-09-13T01:37:14.108816425Z" level=info msg="CreateContainer within sandbox \"9c1fd6f01fd466dc460606d3fc4b607b976594c61c37b26c51f8b73e855fef09\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:37:14.121191 containerd[1501]: time="2025-09-13T01:37:14.121145324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nb9xn,Uid:18986b0f-1882-4896-8927-2673252cfe29,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bcb7fb524c92e88ed27b993d96f098c2fde844810abe86917c57e768a35643b\"" Sep 13 01:37:14.135448 containerd[1501]: time="2025-09-13T01:37:14.135403444Z" level=info msg="CreateContainer within sandbox \"8bcb7fb524c92e88ed27b993d96f098c2fde844810abe86917c57e768a35643b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 01:37:14.148701 containerd[1501]: time="2025-09-13T01:37:14.148533845Z" level=info msg="CreateContainer within sandbox \"9c1fd6f01fd466dc460606d3fc4b607b976594c61c37b26c51f8b73e855fef09\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ce240966b94346d28c6134b1de3bd7f64f2fd025ce3fb8956f16815e94d59432\"" Sep 13 01:37:14.149755 containerd[1501]: time="2025-09-13T01:37:14.149547692Z" level=info msg="StartContainer for \"ce240966b94346d28c6134b1de3bd7f64f2fd025ce3fb8956f16815e94d59432\"" Sep 13 01:37:14.166710 containerd[1501]: time="2025-09-13T01:37:14.166592000Z" level=info msg="CreateContainer within sandbox 
\"8bcb7fb524c92e88ed27b993d96f098c2fde844810abe86917c57e768a35643b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f0ac8894d1b2d9d2f6043835750ffb61c7d5b80f75fd9877d69cacf512fb4688\"" Sep 13 01:37:14.169125 containerd[1501]: time="2025-09-13T01:37:14.167887740Z" level=info msg="StartContainer for \"f0ac8894d1b2d9d2f6043835750ffb61c7d5b80f75fd9877d69cacf512fb4688\"" Sep 13 01:37:14.226624 systemd[1]: Started cri-containerd-ce240966b94346d28c6134b1de3bd7f64f2fd025ce3fb8956f16815e94d59432.scope - libcontainer container ce240966b94346d28c6134b1de3bd7f64f2fd025ce3fb8956f16815e94d59432. Sep 13 01:37:14.246754 systemd[1]: Started cri-containerd-f0ac8894d1b2d9d2f6043835750ffb61c7d5b80f75fd9877d69cacf512fb4688.scope - libcontainer container f0ac8894d1b2d9d2f6043835750ffb61c7d5b80f75fd9877d69cacf512fb4688. Sep 13 01:37:14.295567 containerd[1501]: time="2025-09-13T01:37:14.295354015Z" level=info msg="StartContainer for \"ce240966b94346d28c6134b1de3bd7f64f2fd025ce3fb8956f16815e94d59432\" returns successfully" Sep 13 01:37:14.313072 containerd[1501]: time="2025-09-13T01:37:14.313027656Z" level=info msg="StartContainer for \"f0ac8894d1b2d9d2f6043835750ffb61c7d5b80f75fd9877d69cacf512fb4688\" returns successfully" Sep 13 01:37:14.428129 kubelet[2646]: I0913 01:37:14.427914 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nb9xn" podStartSLOduration=29.42783957 podStartE2EDuration="29.42783957s" podCreationTimestamp="2025-09-13 01:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:37:14.401180452 +0000 UTC m=+33.815114946" watchObservedRunningTime="2025-09-13 01:37:14.42783957 +0000 UTC m=+33.841774049" Sep 13 01:37:14.868423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3663100552.mount: Deactivated successfully. Sep 13 01:37:15.403182 kubelet[2646]: I0913 01:37:15.402536 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-m2wfj" podStartSLOduration=30.402498795 podStartE2EDuration="30.402498795s" podCreationTimestamp="2025-09-13 01:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:37:14.436496452 +0000 UTC m=+33.850430959" watchObservedRunningTime="2025-09-13 01:37:15.402498795 +0000 UTC m=+34.816433273" Sep 13 01:37:50.762960 systemd[1]: Started sshd@7-10.230.67.162:22-139.178.68.195:45124.service - OpenSSH per-connection server daemon (139.178.68.195:45124). Sep 13 01:37:51.691082 sshd[4027]: Accepted publickey for core from 139.178.68.195 port 45124 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:37:51.696201 sshd[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:37:51.708450 systemd-logind[1482]: New session 10 of user core. Sep 13 01:37:51.714577 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 01:37:52.895452 sshd[4027]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:52.899447 systemd[1]: sshd@7-10.230.67.162:22-139.178.68.195:45124.service: Deactivated successfully. Sep 13 01:37:52.902104 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 01:37:52.904318 systemd-logind[1482]: Session 10 logged out. Waiting for processes to exit. Sep 13 01:37:52.905868 systemd-logind[1482]: Removed session 10. 
Sep 13 01:37:58.055721 systemd[1]: Started sshd@8-10.230.67.162:22-139.178.68.195:45134.service - OpenSSH per-connection server daemon (139.178.68.195:45134). Sep 13 01:37:58.962887 sshd[4041]: Accepted publickey for core from 139.178.68.195 port 45134 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:37:58.964967 sshd[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:37:58.972151 systemd-logind[1482]: New session 11 of user core. Sep 13 01:37:58.975601 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 13 01:37:59.683447 sshd[4041]: pam_unix(sshd:session): session closed for user core Sep 13 01:37:59.689194 systemd[1]: sshd@8-10.230.67.162:22-139.178.68.195:45134.service: Deactivated successfully. Sep 13 01:37:59.692458 systemd[1]: session-11.scope: Deactivated successfully. Sep 13 01:37:59.693796 systemd-logind[1482]: Session 11 logged out. Waiting for processes to exit. Sep 13 01:37:59.695339 systemd-logind[1482]: Removed session 11. Sep 13 01:38:04.853717 systemd[1]: Started sshd@9-10.230.67.162:22-139.178.68.195:34496.service - OpenSSH per-connection server daemon (139.178.68.195:34496). Sep 13 01:38:05.810319 sshd[4055]: Accepted publickey for core from 139.178.68.195 port 34496 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:05.813328 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:05.820603 systemd-logind[1482]: New session 12 of user core. Sep 13 01:38:05.829576 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 13 01:38:06.589727 sshd[4055]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:06.595400 systemd[1]: sshd@9-10.230.67.162:22-139.178.68.195:34496.service: Deactivated successfully. Sep 13 01:38:06.598177 systemd[1]: session-12.scope: Deactivated successfully. Sep 13 01:38:06.599720 systemd-logind[1482]: Session 12 logged out. Waiting for processes to exit. Sep 13 01:38:06.600994 systemd-logind[1482]: Removed session 12. Sep 13 01:38:06.750720 systemd[1]: Started sshd@10-10.230.67.162:22-139.178.68.195:34508.service - OpenSSH per-connection server daemon (139.178.68.195:34508). Sep 13 01:38:07.662609 sshd[4068]: Accepted publickey for core from 139.178.68.195 port 34508 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:07.665072 sshd[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:07.672605 systemd-logind[1482]: New session 13 of user core. Sep 13 01:38:07.687623 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 13 01:38:08.467784 sshd[4068]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:08.476218 systemd-logind[1482]: Session 13 logged out. Waiting for processes to exit. Sep 13 01:38:08.477075 systemd[1]: sshd@10-10.230.67.162:22-139.178.68.195:34508.service: Deactivated successfully. Sep 13 01:38:08.480203 systemd[1]: session-13.scope: Deactivated successfully. Sep 13 01:38:08.482857 systemd-logind[1482]: Removed session 13. Sep 13 01:38:08.634738 systemd[1]: Started sshd@11-10.230.67.162:22-139.178.68.195:34520.service - OpenSSH per-connection server daemon (139.178.68.195:34520). 
Sep 13 01:38:09.548049 sshd[4078]: Accepted publickey for core from 139.178.68.195 port 34520 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:09.551686 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:09.558891 systemd-logind[1482]: New session 14 of user core. Sep 13 01:38:09.567578 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 13 01:38:10.253040 sshd[4078]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:10.258084 systemd[1]: sshd@11-10.230.67.162:22-139.178.68.195:34520.service: Deactivated successfully. Sep 13 01:38:10.260569 systemd[1]: session-14.scope: Deactivated successfully. Sep 13 01:38:10.261579 systemd-logind[1482]: Session 14 logged out. Waiting for processes to exit. Sep 13 01:38:10.265755 systemd-logind[1482]: Removed session 14. Sep 13 01:38:15.417701 systemd[1]: Started sshd@12-10.230.67.162:22-139.178.68.195:56666.service - OpenSSH per-connection server daemon (139.178.68.195:56666). Sep 13 01:38:16.318535 sshd[4092]: Accepted publickey for core from 139.178.68.195 port 56666 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:16.320577 sshd[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:16.327034 systemd-logind[1482]: New session 15 of user core. Sep 13 01:38:16.336660 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 13 01:38:17.020960 sshd[4092]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:17.024818 systemd-logind[1482]: Session 15 logged out. Waiting for processes to exit. Sep 13 01:38:17.025495 systemd[1]: sshd@12-10.230.67.162:22-139.178.68.195:56666.service: Deactivated successfully. Sep 13 01:38:17.028686 systemd[1]: session-15.scope: Deactivated successfully. Sep 13 01:38:17.031534 systemd-logind[1482]: Removed session 15. Sep 13 01:38:22.184103 systemd[1]: Started sshd@13-10.230.67.162:22-139.178.68.195:43122.service - OpenSSH per-connection server daemon (139.178.68.195:43122). Sep 13 01:38:23.062988 sshd[4106]: Accepted publickey for core from 139.178.68.195 port 43122 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:23.065295 sshd[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:23.072286 systemd-logind[1482]: New session 16 of user core. Sep 13 01:38:23.085603 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 13 01:38:23.768317 sshd[4106]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:23.774556 systemd-logind[1482]: Session 16 logged out. Waiting for processes to exit. Sep 13 01:38:23.776103 systemd[1]: sshd@13-10.230.67.162:22-139.178.68.195:43122.service: Deactivated successfully. Sep 13 01:38:23.779042 systemd[1]: session-16.scope: Deactivated successfully. Sep 13 01:38:23.780800 systemd-logind[1482]: Removed session 16. Sep 13 01:38:23.936779 systemd[1]: Started sshd@14-10.230.67.162:22-139.178.68.195:43128.service - OpenSSH per-connection server daemon (139.178.68.195:43128). Sep 13 01:38:24.827324 sshd[4119]: Accepted publickey for core from 139.178.68.195 port 43128 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:24.829471 sshd[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:24.835700 systemd-logind[1482]: New session 17 of user core. Sep 13 01:38:24.843584 systemd[1]: Started session-17.scope - Session 17 of User core. 
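Editor's note: the sshd/systemd-logind entries repeat a fixed lifecycle per connection: "Accepted publickey", "session opened", "New session N", then "session closed" / "Removed session N". The following is a hypothetical parsing helper, not tooling from this system; it assumes one journal entry per input line (as `journalctl` emits them) and that the entries fall within a single known year, since the syslog-style timestamp carries none.

```python
# Hypothetical helper: pair "New session N" / "Removed session N" journal lines
# like the ones above and report how long each numbered session lasted.
import re
import sys
from datetime import datetime

LINE = re.compile(
    r"(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}\.\d+) .*?"
    r"(?P<event>New session|Removed session) (?P<id>\d+)"
)

def report(stream) -> None:
    opened = {}
    for line in stream:
        m = LINE.search(line)
        if not m:
            continue
        # The timestamp has no year; 2025 is assumed from context.
        when = datetime.strptime("2025 " + m["ts"], "%Y %b %d %H:%M:%S.%f")
        if m["event"] == "New session":
            opened[m["id"]] = when
        elif m["id"] in opened:
            dur = (when - opened.pop(m["id"])).total_seconds()
            print(f"session {m['id']}: {dur:.1f}s")

if __name__ == "__main__":
    report(sys.stdin)
```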
Sep 13 01:38:25.857545 sshd[4119]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:25.865010 systemd-logind[1482]: Session 17 logged out. Waiting for processes to exit. Sep 13 01:38:25.865340 systemd[1]: sshd@14-10.230.67.162:22-139.178.68.195:43128.service: Deactivated successfully. Sep 13 01:38:25.868255 systemd[1]: session-17.scope: Deactivated successfully. Sep 13 01:38:25.870766 systemd-logind[1482]: Removed session 17. Sep 13 01:38:26.014732 systemd[1]: Started sshd@15-10.230.67.162:22-139.178.68.195:43134.service - OpenSSH per-connection server daemon (139.178.68.195:43134). Sep 13 01:38:26.923982 sshd[4130]: Accepted publickey for core from 139.178.68.195 port 43134 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:26.926239 sshd[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:26.935306 systemd-logind[1482]: New session 18 of user core. Sep 13 01:38:26.939690 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 13 01:38:28.350071 sshd[4130]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:28.356314 systemd[1]: sshd@15-10.230.67.162:22-139.178.68.195:43134.service: Deactivated successfully. Sep 13 01:38:28.360041 systemd[1]: session-18.scope: Deactivated successfully. Sep 13 01:38:28.361252 systemd-logind[1482]: Session 18 logged out. Waiting for processes to exit. Sep 13 01:38:28.363756 systemd-logind[1482]: Removed session 18. Sep 13 01:38:28.511787 systemd[1]: Started sshd@16-10.230.67.162:22-139.178.68.195:43140.service - OpenSSH per-connection server daemon (139.178.68.195:43140). Sep 13 01:38:29.420149 sshd[4148]: Accepted publickey for core from 139.178.68.195 port 43140 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:29.422380 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:29.429732 systemd-logind[1482]: New session 19 of user core. Sep 13 01:38:29.441125 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 13 01:38:30.345653 sshd[4148]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:30.350528 systemd[1]: sshd@16-10.230.67.162:22-139.178.68.195:43140.service: Deactivated successfully. Sep 13 01:38:30.356659 systemd[1]: session-19.scope: Deactivated successfully. Sep 13 01:38:30.357782 systemd-logind[1482]: Session 19 logged out. Waiting for processes to exit. Sep 13 01:38:30.359292 systemd-logind[1482]: Removed session 19. Sep 13 01:38:30.505166 systemd[1]: Started sshd@17-10.230.67.162:22-139.178.68.195:44650.service - OpenSSH per-connection server daemon (139.178.68.195:44650). Sep 13 01:38:31.397705 sshd[4159]: Accepted publickey for core from 139.178.68.195 port 44650 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:31.399155 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:31.405624 systemd-logind[1482]: New session 20 of user core. Sep 13 01:38:31.418067 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 13 01:38:32.120355 sshd[4159]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:32.125960 systemd[1]: sshd@17-10.230.67.162:22-139.178.68.195:44650.service: Deactivated successfully. Sep 13 01:38:32.131577 systemd[1]: session-20.scope: Deactivated successfully. Sep 13 01:38:32.134472 systemd-logind[1482]: Session 20 logged out. Waiting for processes to exit. Sep 13 01:38:32.136830 systemd-logind[1482]: Removed session 20. 
Sep 13 01:38:37.277834 systemd[1]: Started sshd@18-10.230.67.162:22-139.178.68.195:44662.service - OpenSSH per-connection server daemon (139.178.68.195:44662). Sep 13 01:38:38.157671 sshd[4174]: Accepted publickey for core from 139.178.68.195 port 44662 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:38.159581 sshd[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:38.167713 systemd-logind[1482]: New session 21 of user core. Sep 13 01:38:38.175731 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 13 01:38:38.870075 sshd[4174]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:38.874512 systemd[1]: sshd@18-10.230.67.162:22-139.178.68.195:44662.service: Deactivated successfully. Sep 13 01:38:38.877482 systemd[1]: session-21.scope: Deactivated successfully. Sep 13 01:38:38.879294 systemd-logind[1482]: Session 21 logged out. Waiting for processes to exit. Sep 13 01:38:38.880858 systemd-logind[1482]: Removed session 21. Sep 13 01:38:44.033711 systemd[1]: Started sshd@19-10.230.67.162:22-139.178.68.195:34454.service - OpenSSH per-connection server daemon (139.178.68.195:34454). Sep 13 01:38:44.928216 sshd[4189]: Accepted publickey for core from 139.178.68.195 port 34454 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:44.930934 sshd[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:44.939671 systemd-logind[1482]: New session 22 of user core. Sep 13 01:38:44.946658 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 13 01:38:45.639341 sshd[4189]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:45.644264 systemd[1]: sshd@19-10.230.67.162:22-139.178.68.195:34454.service: Deactivated successfully. Sep 13 01:38:45.646594 systemd[1]: session-22.scope: Deactivated successfully. Sep 13 01:38:45.647929 systemd-logind[1482]: Session 22 logged out. Waiting for processes to exit. Sep 13 01:38:45.649293 systemd-logind[1482]: Removed session 22. Sep 13 01:38:50.802706 systemd[1]: Started sshd@20-10.230.67.162:22-139.178.68.195:44522.service - OpenSSH per-connection server daemon (139.178.68.195:44522). Sep 13 01:38:51.692401 sshd[4204]: Accepted publickey for core from 139.178.68.195 port 44522 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:51.695360 sshd[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:51.704114 systemd-logind[1482]: New session 23 of user core. Sep 13 01:38:51.710650 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 13 01:38:52.400349 sshd[4204]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:52.406050 systemd[1]: sshd@20-10.230.67.162:22-139.178.68.195:44522.service: Deactivated successfully. Sep 13 01:38:52.409077 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 01:38:52.410475 systemd-logind[1482]: Session 23 logged out. Waiting for processes to exit. Sep 13 01:38:52.412083 systemd-logind[1482]: Removed session 23. Sep 13 01:38:52.556679 systemd[1]: Started sshd@21-10.230.67.162:22-139.178.68.195:44534.service - OpenSSH per-connection server daemon (139.178.68.195:44534). 
Sep 13 01:38:53.450421 sshd[4217]: Accepted publickey for core from 139.178.68.195 port 44534 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:53.452449 sshd[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:53.459407 systemd-logind[1482]: New session 24 of user core. Sep 13 01:38:53.470824 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 01:38:55.743473 systemd[1]: run-containerd-runc-k8s.io-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d-runc.S0C9bF.mount: Deactivated successfully. Sep 13 01:38:55.770152 containerd[1501]: time="2025-09-13T01:38:55.768256701Z" level=info msg="StopContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" with timeout 30 (s)" Sep 13 01:38:55.772014 containerd[1501]: time="2025-09-13T01:38:55.771968025Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 01:38:55.773181 containerd[1501]: time="2025-09-13T01:38:55.773149942Z" level=info msg="Stop container \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" with signal terminated" Sep 13 01:38:55.783919 containerd[1501]: time="2025-09-13T01:38:55.783861260Z" level=info msg="StopContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" with timeout 2 (s)" Sep 13 01:38:55.784516 containerd[1501]: time="2025-09-13T01:38:55.784487671Z" level=info msg="Stop container \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" with signal terminated" Sep 13 01:38:55.798266 systemd[1]: cri-containerd-f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f.scope: Deactivated successfully. Sep 13 01:38:55.804319 systemd-networkd[1415]: lxc_health: Link DOWN Sep 13 01:38:55.804337 systemd-networkd[1415]: lxc_health: Lost carrier Sep 13 01:38:55.850965 systemd[1]: cri-containerd-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d.scope: Deactivated successfully. Sep 13 01:38:55.851548 systemd[1]: cri-containerd-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d.scope: Consumed 10.044s CPU time. Sep 13 01:38:55.867461 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f-rootfs.mount: Deactivated successfully. Sep 13 01:38:55.883601 containerd[1501]: time="2025-09-13T01:38:55.883225465Z" level=info msg="shim disconnected" id=f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f namespace=k8s.io Sep 13 01:38:55.883976 containerd[1501]: time="2025-09-13T01:38:55.883940807Z" level=warning msg="cleaning up after shim disconnected" id=f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f namespace=k8s.io Sep 13 01:38:55.884145 containerd[1501]: time="2025-09-13T01:38:55.884086920Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:38:55.894694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d-rootfs.mount: Deactivated successfully. 
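Editor's note: the StopContainer entries above carry explicit timeouts (30 s for container f03be77b…, 2 s for 17c707d6…) and report "Stop container ... with signal terminated": a graceful SIGTERM first, escalating to SIGKILL if the process is still alive when the timeout expires. The sketch below illustrates that semantics for an ordinary child process only; it is not containerd's implementation.

```python
# Minimal illustration of "stop with signal terminated, then kill after a
# timeout", applied to a plain child process rather than a container.
import signal
import subprocess

def stop_with_timeout(proc: subprocess.Popen, timeout: float) -> int:
    proc.send_signal(signal.SIGTERM)       # graceful stop request
    try:
        return proc.wait(timeout=timeout)  # allow `timeout` seconds to exit
    except subprocess.TimeoutExpired:
        proc.kill()                        # escalate to SIGKILL
        return proc.wait()

if __name__ == "__main__":
    child = subprocess.Popen(["sleep", "300"])
    code = stop_with_timeout(child, timeout=2.0)  # mirrors the 2 s case above
    print("exit status:", code)
```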
Sep 13 01:38:55.903614 containerd[1501]: time="2025-09-13T01:38:55.903549113Z" level=info msg="shim disconnected" id=17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d namespace=k8s.io Sep 13 01:38:55.903979 containerd[1501]: time="2025-09-13T01:38:55.903828157Z" level=warning msg="cleaning up after shim disconnected" id=17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d namespace=k8s.io Sep 13 01:38:55.904154 containerd[1501]: time="2025-09-13T01:38:55.903874100Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:38:55.929922 containerd[1501]: time="2025-09-13T01:38:55.929722532Z" level=info msg="StopContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" returns successfully" Sep 13 01:38:55.935028 containerd[1501]: time="2025-09-13T01:38:55.934832428Z" level=info msg="StopContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" returns successfully" Sep 13 01:38:55.935974 containerd[1501]: time="2025-09-13T01:38:55.935932672Z" level=info msg="StopPodSandbox for \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\"" Sep 13 01:38:55.936057 containerd[1501]: time="2025-09-13T01:38:55.936005191Z" level=info msg="Container to stop \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.936057 containerd[1501]: time="2025-09-13T01:38:55.936026884Z" level=info msg="Container to stop \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.936057 containerd[1501]: time="2025-09-13T01:38:55.936044864Z" level=info msg="Container to stop \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.936253 containerd[1501]: time="2025-09-13T01:38:55.936059279Z" level=info msg="Container to stop \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.936253 containerd[1501]: time="2025-09-13T01:38:55.936073384Z" level=info msg="Container to stop \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.938331 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0-shm.mount: Deactivated successfully. Sep 13 01:38:55.939743 containerd[1501]: time="2025-09-13T01:38:55.939691139Z" level=info msg="StopPodSandbox for \"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\"" Sep 13 01:38:55.939823 containerd[1501]: time="2025-09-13T01:38:55.939743804Z" level=info msg="Container to stop \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 13 01:38:55.950011 systemd[1]: cri-containerd-629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0.scope: Deactivated successfully. Sep 13 01:38:55.960047 systemd[1]: cri-containerd-bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4.scope: Deactivated successfully. 
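Editor's note: before each StopPodSandbox, containerd names every container that belonged to the sandbox with a "Container to stop ... current state CONTAINER_EXITED" message; in this excerpt one sandbox (629b6d46…) lists five container IDs and the other (bfe54515…) lists one. The sketch below is hypothetical post-processing of journal text like the above, recovering that sandbox-to-container mapping; it is not a CRI call.

```python
# Hypothetical post-processing: group "Container to stop" IDs under the
# sandbox named by the most recent "StopPodSandbox for" message.
import re
import sys
from collections import defaultdict

EVENT = re.compile(
    r'(?P<kind>StopPodSandbox for|Container to stop) \\?"(?P<id>[0-9a-f]{64})'
)

def group(stream):
    mapping, current = defaultdict(list), None
    for line in stream:
        for m in EVENT.finditer(line):
            if m["kind"] == "StopPodSandbox for":
                current = m["id"]
            elif current:
                mapping[current].append(m["id"])
    return mapping

if __name__ == "__main__":
    for sandbox, containers in group(sys.stdin).items():
        print(sandbox[:12], "->", [c[:12] for c in containers])
```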
Sep 13 01:38:55.987306 containerd[1501]: time="2025-09-13T01:38:55.987120747Z" level=info msg="shim disconnected" id=629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0 namespace=k8s.io Sep 13 01:38:55.987306 containerd[1501]: time="2025-09-13T01:38:55.987190140Z" level=warning msg="cleaning up after shim disconnected" id=629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0 namespace=k8s.io Sep 13 01:38:55.987306 containerd[1501]: time="2025-09-13T01:38:55.987205696Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:38:55.999940 containerd[1501]: time="2025-09-13T01:38:55.999608797Z" level=info msg="shim disconnected" id=bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4 namespace=k8s.io Sep 13 01:38:55.999940 containerd[1501]: time="2025-09-13T01:38:55.999674018Z" level=warning msg="cleaning up after shim disconnected" id=bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4 namespace=k8s.io Sep 13 01:38:55.999940 containerd[1501]: time="2025-09-13T01:38:55.999688717Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:38:56.031403 containerd[1501]: time="2025-09-13T01:38:56.030226411Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:38:56Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 01:38:56.035583 containerd[1501]: time="2025-09-13T01:38:56.035133663Z" level=info msg="TearDown network for sandbox \"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\" successfully" Sep 13 01:38:56.035583 containerd[1501]: time="2025-09-13T01:38:56.035173268Z" level=info msg="StopPodSandbox for \"bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4\" returns successfully" Sep 13 01:38:56.037894 containerd[1501]: time="2025-09-13T01:38:56.037275973Z" level=info msg="TearDown network for sandbox \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" successfully" Sep 13 01:38:56.037894 containerd[1501]: time="2025-09-13T01:38:56.037325091Z" level=info msg="StopPodSandbox for \"629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0\" returns successfully" Sep 13 01:38:56.083562 kubelet[2646]: E0913 01:38:56.083464 2646 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:38:56.161598 kubelet[2646]: I0913 01:38:56.161525 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czdlk\" (UniqueName: \"kubernetes.io/projected/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-kube-api-access-czdlk\") pod \"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\" (UID: \"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\") " Sep 13 01:38:56.161598 kubelet[2646]: I0913 01:38:56.161617 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-run\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.161886 kubelet[2646]: I0913 01:38:56.161658 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-hubble-tls\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 
01:38:56.161886 kubelet[2646]: I0913 01:38:56.161682 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-lib-modules\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.161886 kubelet[2646]: I0913 01:38:56.161731 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75cc7f28-73ff-48a9-abf4-badde281b764-clustermesh-secrets\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.161886 kubelet[2646]: I0913 01:38:56.161773 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-config-path\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.161886 kubelet[2646]: I0913 01:38:56.161801 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-net\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.161886 kubelet[2646]: I0913 01:38:56.161833 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-xtables-lock\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161869 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-cgroup\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161895 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-bpf-maps\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161921 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cni-path\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161951 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-hostproc\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161975 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rdmzm\" (UniqueName: \"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-kube-api-access-rdmzm\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162182 kubelet[2646]: I0913 01:38:56.161997 2646 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-kernel\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162903 kubelet[2646]: I0913 01:38:56.162017 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-etc-cni-netd\") pod \"75cc7f28-73ff-48a9-abf4-badde281b764\" (UID: \"75cc7f28-73ff-48a9-abf4-badde281b764\") " Sep 13 01:38:56.162903 kubelet[2646]: I0913 01:38:56.162112 2646 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-cilium-config-path\") pod \"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\" (UID: \"f3e7d94a-3af0-4606-9e8e-6bcbc68d077e\") " Sep 13 01:38:56.169832 kubelet[2646]: I0913 01:38:56.168199 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f3e7d94a-3af0-4606-9e8e-6bcbc68d077e" (UID: "f3e7d94a-3af0-4606-9e8e-6bcbc68d077e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:38:56.169832 kubelet[2646]: I0913 01:38:56.169649 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.169832 kubelet[2646]: I0913 01:38:56.169701 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.169832 kubelet[2646]: I0913 01:38:56.169734 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cni-path" (OuterVolumeSpecName: "cni-path") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.169832 kubelet[2646]: I0913 01:38:56.169761 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-hostproc" (OuterVolumeSpecName: "hostproc") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.172214 kubelet[2646]: I0913 01:38:56.171655 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.177419 kubelet[2646]: I0913 01:38:56.177355 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-kube-api-access-rdmzm" (OuterVolumeSpecName: "kube-api-access-rdmzm") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "kube-api-access-rdmzm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:38:56.177533 kubelet[2646]: I0913 01:38:56.177374 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.177533 kubelet[2646]: I0913 01:38:56.177460 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.177533 kubelet[2646]: I0913 01:38:56.177491 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.177533 kubelet[2646]: I0913 01:38:56.177511 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-kube-api-access-czdlk" (OuterVolumeSpecName: "kube-api-access-czdlk") pod "f3e7d94a-3af0-4606-9e8e-6bcbc68d077e" (UID: "f3e7d94a-3af0-4606-9e8e-6bcbc68d077e"). InnerVolumeSpecName "kube-api-access-czdlk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:38:56.180454 kubelet[2646]: I0913 01:38:56.180422 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75cc7f28-73ff-48a9-abf4-badde281b764-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 13 01:38:56.180633 kubelet[2646]: I0913 01:38:56.180479 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.180995 kubelet[2646]: I0913 01:38:56.180914 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 13 01:38:56.180995 kubelet[2646]: I0913 01:38:56.180963 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 13 01:38:56.184266 kubelet[2646]: I0913 01:38:56.184138 2646 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "75cc7f28-73ff-48a9-abf4-badde281b764" (UID: "75cc7f28-73ff-48a9-abf4-badde281b764"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263110 2646 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cni-path\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263160 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-cgroup\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263189 2646 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-bpf-maps\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263205 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-kernel\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263223 2646 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-etc-cni-netd\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263240 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-cilium-config-path\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263254 2646 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-hostproc\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.263598 kubelet[2646]: I0913 01:38:56.263280 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rdmzm\" (UniqueName: \"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-kube-api-access-rdmzm\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263295 2646 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czdlk\" (UniqueName: \"kubernetes.io/projected/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e-kube-api-access-czdlk\") on node \"srv-bbx8z.gb1.brightbox.com\" 
DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263309 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-run\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263323 2646 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/75cc7f28-73ff-48a9-abf4-badde281b764-hubble-tls\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263337 2646 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-lib-modules\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263362 2646 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/75cc7f28-73ff-48a9-abf4-badde281b764-clustermesh-secrets\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263402 2646 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-host-proc-sys-net\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263434 2646 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/75cc7f28-73ff-48a9-abf4-badde281b764-xtables-lock\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.264078 kubelet[2646]: I0913 01:38:56.263459 2646 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/75cc7f28-73ff-48a9-abf4-badde281b764-cilium-config-path\") on node \"srv-bbx8z.gb1.brightbox.com\" DevicePath \"\"" Sep 13 01:38:56.651457 systemd[1]: Removed slice kubepods-besteffort-podf3e7d94a_3af0_4606_9e8e_6bcbc68d077e.slice - libcontainer container kubepods-besteffort-podf3e7d94a_3af0_4606_9e8e_6bcbc68d077e.slice. Sep 13 01:38:56.655333 kubelet[2646]: I0913 01:38:56.655230 2646 scope.go:117] "RemoveContainer" containerID="f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f" Sep 13 01:38:56.660430 containerd[1501]: time="2025-09-13T01:38:56.660305807Z" level=info msg="RemoveContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\"" Sep 13 01:38:56.667944 containerd[1501]: time="2025-09-13T01:38:56.667806309Z" level=info msg="RemoveContainer for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" returns successfully" Sep 13 01:38:56.671123 kubelet[2646]: I0913 01:38:56.671021 2646 scope.go:117] "RemoveContainer" containerID="f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f" Sep 13 01:38:56.677358 systemd[1]: Removed slice kubepods-burstable-pod75cc7f28_73ff_48a9_abf4_badde281b764.slice - libcontainer container kubepods-burstable-pod75cc7f28_73ff_48a9_abf4_badde281b764.slice. Sep 13 01:38:56.677552 systemd[1]: kubepods-burstable-pod75cc7f28_73ff_48a9_abf4_badde281b764.slice: Consumed 10.167s CPU time. 
Sep 13 01:38:56.684347 containerd[1501]: time="2025-09-13T01:38:56.674329561Z" level=error msg="ContainerStatus for \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\": not found" Sep 13 01:38:56.694492 kubelet[2646]: E0913 01:38:56.693158 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\": not found" containerID="f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f" Sep 13 01:38:56.697906 kubelet[2646]: I0913 01:38:56.697757 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f"} err="failed to get container status \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f\": not found" Sep 13 01:38:56.698035 kubelet[2646]: I0913 01:38:56.698014 2646 scope.go:117] "RemoveContainer" containerID="17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d" Sep 13 01:38:56.700216 containerd[1501]: time="2025-09-13T01:38:56.700175042Z" level=info msg="RemoveContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\"" Sep 13 01:38:56.704278 containerd[1501]: time="2025-09-13T01:38:56.704240128Z" level=info msg="RemoveContainer for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" returns successfully" Sep 13 01:38:56.704690 kubelet[2646]: I0913 01:38:56.704588 2646 scope.go:117] "RemoveContainer" containerID="74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb" Sep 13 01:38:56.707917 containerd[1501]: time="2025-09-13T01:38:56.707883883Z" level=info msg="RemoveContainer for \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\"" Sep 13 01:38:56.712952 containerd[1501]: time="2025-09-13T01:38:56.712754784Z" level=info msg="RemoveContainer for \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\" returns successfully" Sep 13 01:38:56.713602 kubelet[2646]: I0913 01:38:56.713566 2646 scope.go:117] "RemoveContainer" containerID="fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682" Sep 13 01:38:56.716424 containerd[1501]: time="2025-09-13T01:38:56.716183935Z" level=info msg="RemoveContainer for \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\"" Sep 13 01:38:56.719165 containerd[1501]: time="2025-09-13T01:38:56.719131077Z" level=info msg="RemoveContainer for \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\" returns successfully" Sep 13 01:38:56.720581 kubelet[2646]: I0913 01:38:56.719597 2646 scope.go:117] "RemoveContainer" containerID="afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f" Sep 13 01:38:56.723061 containerd[1501]: time="2025-09-13T01:38:56.723011579Z" level=info msg="RemoveContainer for \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\"" Sep 13 01:38:56.727397 containerd[1501]: time="2025-09-13T01:38:56.726749332Z" level=info msg="RemoveContainer for \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\" returns successfully" Sep 13 01:38:56.727532 kubelet[2646]: I0913 01:38:56.727259 
2646 scope.go:117] "RemoveContainer" containerID="4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb" Sep 13 01:38:56.734537 containerd[1501]: time="2025-09-13T01:38:56.730694152Z" level=info msg="RemoveContainer for \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\"" Sep 13 01:38:56.734537 containerd[1501]: time="2025-09-13T01:38:56.734166929Z" level=info msg="RemoveContainer for \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\" returns successfully" Sep 13 01:38:56.738648 kubelet[2646]: I0913 01:38:56.736394 2646 scope.go:117] "RemoveContainer" containerID="17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d" Sep 13 01:38:56.738648 kubelet[2646]: E0913 01:38:56.736972 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\": not found" containerID="17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d" Sep 13 01:38:56.738648 kubelet[2646]: I0913 01:38:56.737005 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d"} err="failed to get container status \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\": rpc error: code = NotFound desc = an error occurred when try to find container \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\": not found" Sep 13 01:38:56.738648 kubelet[2646]: I0913 01:38:56.737045 2646 scope.go:117] "RemoveContainer" containerID="74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb" Sep 13 01:38:56.738648 kubelet[2646]: E0913 01:38:56.738001 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\": not found" containerID="74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb" Sep 13 01:38:56.738648 kubelet[2646]: I0913 01:38:56.738039 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb"} err="failed to get container status \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\": rpc error: code = NotFound desc = an error occurred when try to find container \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\": not found" Sep 13 01:38:56.738648 kubelet[2646]: I0913 01:38:56.738061 2646 scope.go:117] "RemoveContainer" containerID="fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682" Sep 13 01:38:56.739202 containerd[1501]: time="2025-09-13T01:38:56.736687743Z" level=error msg="ContainerStatus for \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"17c707d611fe8729b2769dc5bbee862618698d5ffdc465a879173cff1370c14d\": not found" Sep 13 01:38:56.739202 containerd[1501]: time="2025-09-13T01:38:56.737328431Z" level=error msg="ContainerStatus for \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"74aad50b7762af8e44aef2e7e8ba72eea4f625d999f174ccc741dd52f2c7fdeb\": not found" Sep 13 01:38:56.739202 containerd[1501]: 
time="2025-09-13T01:38:56.738227203Z" level=error msg="ContainerStatus for \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\": not found" Sep 13 01:38:56.739202 containerd[1501]: time="2025-09-13T01:38:56.738830617Z" level=error msg="ContainerStatus for \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\": not found" Sep 13 01:38:56.736689 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-629b6d46df3246cf54876977ef17e1c7110b5299c40694b6e7b2f62a2c040ab0-rootfs.mount: Deactivated successfully. Sep 13 01:38:56.739586 kubelet[2646]: E0913 01:38:56.738354 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\": not found" containerID="fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682" Sep 13 01:38:56.739586 kubelet[2646]: I0913 01:38:56.738396 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682"} err="failed to get container status \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\": rpc error: code = NotFound desc = an error occurred when try to find container \"fa67f8bd2572c24d0d1c8cdba121abde75e2d5eaf921e0bf8df4f68f28986682\": not found" Sep 13 01:38:56.739586 kubelet[2646]: I0913 01:38:56.738428 2646 scope.go:117] "RemoveContainer" containerID="afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f" Sep 13 01:38:56.739586 kubelet[2646]: E0913 01:38:56.739102 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\": not found" containerID="afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f" Sep 13 01:38:56.739586 kubelet[2646]: I0913 01:38:56.739138 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f"} err="failed to get container status \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"afb5cf8e2e6f9bd81ccdc11bdae56fd6a95bdc82ec05929810ab0562b3c97e3f\": not found" Sep 13 01:38:56.739586 kubelet[2646]: I0913 01:38:56.739190 2646 scope.go:117] "RemoveContainer" containerID="4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb" Sep 13 01:38:56.736874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4-rootfs.mount: Deactivated successfully. 
Sep 13 01:38:56.746845 containerd[1501]: time="2025-09-13T01:38:56.739681201Z" level=error msg="ContainerStatus for \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\": not found" Sep 13 01:38:56.746924 kubelet[2646]: E0913 01:38:56.739936 2646 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\": not found" containerID="4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb" Sep 13 01:38:56.746924 kubelet[2646]: I0913 01:38:56.740548 2646 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb"} err="failed to get container status \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"4eb14b354e6cdc37345603cc1fe7d2af277170701f095f4b12b9c637516354bb\": not found" Sep 13 01:38:56.736999 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfe545158432c93c294b9ff65b13e1d5e08f8b702c0690dceb145ce27b34a0e4-shm.mount: Deactivated successfully. Sep 13 01:38:56.737107 systemd[1]: var-lib-kubelet-pods-f3e7d94a\x2d3af0\x2d4606\x2d9e8e\x2d6bcbc68d077e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dczdlk.mount: Deactivated successfully. Sep 13 01:38:56.737221 systemd[1]: var-lib-kubelet-pods-75cc7f28\x2d73ff\x2d48a9\x2dabf4\x2dbadde281b764-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drdmzm.mount: Deactivated successfully. Sep 13 01:38:56.737353 systemd[1]: var-lib-kubelet-pods-75cc7f28\x2d73ff\x2d48a9\x2dabf4\x2dbadde281b764-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 13 01:38:56.737530 systemd[1]: var-lib-kubelet-pods-75cc7f28\x2d73ff\x2d48a9\x2dabf4\x2dbadde281b764-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 01:38:56.865276 kubelet[2646]: I0913 01:38:56.865218 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75cc7f28-73ff-48a9-abf4-badde281b764" path="/var/lib/kubelet/pods/75cc7f28-73ff-48a9-abf4-badde281b764/volumes" Sep 13 01:38:56.866682 kubelet[2646]: I0913 01:38:56.866654 2646 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e7d94a-3af0-4606-9e8e-6bcbc68d077e" path="/var/lib/kubelet/pods/f3e7d94a-3af0-4606-9e8e-6bcbc68d077e/volumes" Sep 13 01:38:57.761602 sshd[4217]: pam_unix(sshd:session): session closed for user core Sep 13 01:38:57.766217 systemd[1]: sshd@21-10.230.67.162:22-139.178.68.195:44534.service: Deactivated successfully. Sep 13 01:38:57.769174 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 01:38:57.769478 systemd[1]: session-24.scope: Consumed 1.107s CPU time. Sep 13 01:38:57.771177 systemd-logind[1482]: Session 24 logged out. Waiting for processes to exit. Sep 13 01:38:57.772971 systemd-logind[1482]: Removed session 24. Sep 13 01:38:57.920858 systemd[1]: Started sshd@22-10.230.67.162:22-139.178.68.195:44536.service - OpenSSH per-connection server daemon (139.178.68.195:44536). 
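The exchange above between the kubelet and containerd is a benign race: each RemoveContainer succeeds, and the follow-up ContainerStatus call for the same ID then comes back gRPC NotFound, which the kubelet records as "DeleteContainer returned error" and treats as already-deleted. A minimal sketch of that CRI round trip, assuming containerd's default socket path and using the published k8s.io/cri-api client rather than the kubelet's internal code:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/credentials/insecure"
	"google.golang.org/grpc/status"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: containerd's CRI socket at its default location.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// One of the container IDs from the log entries above.
	const id = "f03be77b386ccd1f6b3512a3becac5543ce36728ca9e7a79764e1127ac292c2f"

	_, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
	if status.Code(err) == codes.NotFound {
		// Same outcome the kubelet logs: the container is already gone,
		// so the deletion is treated as complete rather than as a failure.
		fmt.Println("container already removed:", id)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("container still present:", id)
}
```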
Sep 13 01:38:58.335711 systemd[1]: Started sshd@23-2a02:1348:179:90e8:24:19ff:fee6:43a2:22-2001:470:1:fb5::200:20736.service - OpenSSH per-connection server daemon ([2001:470:1:fb5::200]:20736). Sep 13 01:38:58.806046 sshd[4375]: Accepted publickey for core from 139.178.68.195 port 44536 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:38:58.808438 sshd[4375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:38:58.816323 systemd-logind[1482]: New session 25 of user core. Sep 13 01:38:58.824643 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 01:38:59.064450 sshd[4378]: Invalid user from 2001:470:1:fb5::200 port 20736 Sep 13 01:39:00.203509 kubelet[2646]: I0913 01:39:00.203403 2646 memory_manager.go:355] "RemoveStaleState removing state" podUID="f3e7d94a-3af0-4606-9e8e-6bcbc68d077e" containerName="cilium-operator" Sep 13 01:39:00.203509 kubelet[2646]: I0913 01:39:00.203589 2646 memory_manager.go:355] "RemoveStaleState removing state" podUID="75cc7f28-73ff-48a9-abf4-badde281b764" containerName="cilium-agent" Sep 13 01:39:00.240083 systemd[1]: Created slice kubepods-burstable-pod82bbe2dd_2848_42f0_ae8c_bb47b96e1df9.slice - libcontainer container kubepods-burstable-pod82bbe2dd_2848_42f0_ae8c_bb47b96e1df9.slice. Sep 13 01:39:00.281718 sshd[4375]: pam_unix(sshd:session): session closed for user core Sep 13 01:39:00.288109 systemd-logind[1482]: Session 25 logged out. Waiting for processes to exit. Sep 13 01:39:00.292717 systemd[1]: sshd@22-10.230.67.162:22-139.178.68.195:44536.service: Deactivated successfully. Sep 13 01:39:00.297949 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 01:39:00.300969 systemd-logind[1482]: Removed session 25. Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308493 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-cni-path\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308551 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-lib-modules\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308581 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-cilium-config-path\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308604 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-bpf-maps\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308628 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-hubble-tls\") pod \"cilium-8nl9m\" (UID: 
\"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309102 kubelet[2646]: I0913 01:39:00.308659 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-hostproc\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309619 kubelet[2646]: I0913 01:39:00.308694 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-etc-cni-netd\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309619 kubelet[2646]: I0913 01:39:00.308726 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-host-proc-sys-kernel\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309619 kubelet[2646]: I0913 01:39:00.308762 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-xtables-lock\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309619 kubelet[2646]: I0913 01:39:00.308787 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-cilium-cgroup\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309619 kubelet[2646]: I0913 01:39:00.308811 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-host-proc-sys-net\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309870 kubelet[2646]: I0913 01:39:00.308855 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v57pl\" (UniqueName: \"kubernetes.io/projected/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-kube-api-access-v57pl\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309870 kubelet[2646]: I0913 01:39:00.308892 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-clustermesh-secrets\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309870 kubelet[2646]: I0913 01:39:00.308919 2646 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-cilium-run\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.309870 kubelet[2646]: I0913 01:39:00.308943 2646 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/82bbe2dd-2848-42f0-ae8c-bb47b96e1df9-cilium-ipsec-secrets\") pod \"cilium-8nl9m\" (UID: \"82bbe2dd-2848-42f0-ae8c-bb47b96e1df9\") " pod="kube-system/cilium-8nl9m" Sep 13 01:39:00.440753 systemd[1]: Started sshd@24-10.230.67.162:22-139.178.68.195:48358.service - OpenSSH per-connection server daemon (139.178.68.195:48358). Sep 13 01:39:00.566183 containerd[1501]: time="2025-09-13T01:39:00.564314542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nl9m,Uid:82bbe2dd-2848-42f0-ae8c-bb47b96e1df9,Namespace:kube-system,Attempt:0,}" Sep 13 01:39:00.601288 containerd[1501]: time="2025-09-13T01:39:00.601056326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 01:39:00.601573 containerd[1501]: time="2025-09-13T01:39:00.601248305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 01:39:00.602528 containerd[1501]: time="2025-09-13T01:39:00.602301352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:39:00.602528 containerd[1501]: time="2025-09-13T01:39:00.602463686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 01:39:00.642640 systemd[1]: Started cri-containerd-32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f.scope - libcontainer container 32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f. Sep 13 01:39:00.681754 containerd[1501]: time="2025-09-13T01:39:00.681668780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8nl9m,Uid:82bbe2dd-2848-42f0-ae8c-bb47b96e1df9,Namespace:kube-system,Attempt:0,} returns sandbox id \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\"" Sep 13 01:39:00.687093 containerd[1501]: time="2025-09-13T01:39:00.687058900Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 13 01:39:00.701690 containerd[1501]: time="2025-09-13T01:39:00.701639174Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759\"" Sep 13 01:39:00.702652 containerd[1501]: time="2025-09-13T01:39:00.702611016Z" level=info msg="StartContainer for \"c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759\"" Sep 13 01:39:00.737623 systemd[1]: Started cri-containerd-c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759.scope - libcontainer container c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759. Sep 13 01:39:00.780431 containerd[1501]: time="2025-09-13T01:39:00.780079932Z" level=info msg="StartContainer for \"c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759\" returns successfully" Sep 13 01:39:00.799917 systemd[1]: cri-containerd-c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759.scope: Deactivated successfully. 
Sep 13 01:39:00.843128 containerd[1501]: time="2025-09-13T01:39:00.842623341Z" level=info msg="shim disconnected" id=c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759 namespace=k8s.io Sep 13 01:39:00.843128 containerd[1501]: time="2025-09-13T01:39:00.842736331Z" level=warning msg="cleaning up after shim disconnected" id=c13e9d0405fa390d67882370366f5e48985818b37a756497830eecc6deecc759 namespace=k8s.io Sep 13 01:39:00.843128 containerd[1501]: time="2025-09-13T01:39:00.842757901Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:39:01.084804 kubelet[2646]: E0913 01:39:01.084706 2646 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 01:39:01.359125 sshd[4393]: Accepted publickey for core from 139.178.68.195 port 48358 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:39:01.361336 sshd[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:39:01.370128 systemd-logind[1482]: New session 26 of user core. Sep 13 01:39:01.374585 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 01:39:01.450432 systemd[1]: run-containerd-runc-k8s.io-32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f-runc.ROMBeY.mount: Deactivated successfully. Sep 13 01:39:01.691910 containerd[1501]: time="2025-09-13T01:39:01.690908708Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 13 01:39:01.712247 containerd[1501]: time="2025-09-13T01:39:01.712069502Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9\"" Sep 13 01:39:01.713596 containerd[1501]: time="2025-09-13T01:39:01.713284766Z" level=info msg="StartContainer for \"e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9\"" Sep 13 01:39:01.773653 systemd[1]: Started cri-containerd-e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9.scope - libcontainer container e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9. Sep 13 01:39:01.820497 containerd[1501]: time="2025-09-13T01:39:01.820350399Z" level=info msg="StartContainer for \"e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9\" returns successfully" Sep 13 01:39:01.845756 systemd[1]: cri-containerd-e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9.scope: Deactivated successfully. 
Sep 13 01:39:01.879933 containerd[1501]: time="2025-09-13T01:39:01.879858794Z" level=info msg="shim disconnected" id=e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9 namespace=k8s.io Sep 13 01:39:01.879933 containerd[1501]: time="2025-09-13T01:39:01.879931069Z" level=warning msg="cleaning up after shim disconnected" id=e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9 namespace=k8s.io Sep 13 01:39:01.880210 containerd[1501]: time="2025-09-13T01:39:01.879945784Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:39:01.901254 containerd[1501]: time="2025-09-13T01:39:01.901152414Z" level=warning msg="cleanup warnings time=\"2025-09-13T01:39:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 13 01:39:01.988615 sshd[4393]: pam_unix(sshd:session): session closed for user core Sep 13 01:39:01.992271 systemd-logind[1482]: Session 26 logged out. Waiting for processes to exit. Sep 13 01:39:01.992988 systemd[1]: sshd@24-10.230.67.162:22-139.178.68.195:48358.service: Deactivated successfully. Sep 13 01:39:01.995499 systemd[1]: session-26.scope: Deactivated successfully. Sep 13 01:39:01.998134 systemd-logind[1482]: Removed session 26. Sep 13 01:39:02.146740 systemd[1]: Started sshd@25-10.230.67.162:22-139.178.68.195:48368.service - OpenSSH per-connection server daemon (139.178.68.195:48368). Sep 13 01:39:02.450527 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3f49ac803f0fd3ef4942c4f10e70fce050ce7886dea7c864ed2cbc253523cc9-rootfs.mount: Deactivated successfully. Sep 13 01:39:02.694732 containerd[1501]: time="2025-09-13T01:39:02.694024939Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 13 01:39:02.714775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1373862803.mount: Deactivated successfully. Sep 13 01:39:02.719245 containerd[1501]: time="2025-09-13T01:39:02.718883524Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165\"" Sep 13 01:39:02.720611 containerd[1501]: time="2025-09-13T01:39:02.720580991Z" level=info msg="StartContainer for \"6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165\"" Sep 13 01:39:02.774648 systemd[1]: Started cri-containerd-6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165.scope - libcontainer container 6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165. Sep 13 01:39:02.822419 containerd[1501]: time="2025-09-13T01:39:02.820641335Z" level=info msg="StartContainer for \"6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165\" returns successfully" Sep 13 01:39:02.832218 systemd[1]: cri-containerd-6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165.scope: Deactivated successfully. 
Sep 13 01:39:02.871221 containerd[1501]: time="2025-09-13T01:39:02.871141404Z" level=info msg="shim disconnected" id=6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165 namespace=k8s.io Sep 13 01:39:02.871574 containerd[1501]: time="2025-09-13T01:39:02.871507029Z" level=warning msg="cleaning up after shim disconnected" id=6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165 namespace=k8s.io Sep 13 01:39:02.871574 containerd[1501]: time="2025-09-13T01:39:02.871531539Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:39:03.045557 sshd[4567]: Accepted publickey for core from 139.178.68.195 port 48368 ssh2: RSA SHA256:nCFR9BVD/sBsaMzu6piX/nSqoN/UcYzTi/UCsy9A7bQ Sep 13 01:39:03.047176 sshd[4567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 01:39:03.054650 systemd-logind[1482]: New session 27 of user core. Sep 13 01:39:03.064582 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 13 01:39:03.339225 sshd[4378]: Connection closed by invalid user 2001:470:1:fb5::200 port 20736 [preauth] Sep 13 01:39:03.340670 systemd[1]: sshd@23-2a02:1348:179:90e8:24:19ff:fee6:43a2:22-2001:470:1:fb5::200:20736.service: Deactivated successfully. Sep 13 01:39:03.450911 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f1986bfdd75e6b50b5ef135a3cff7f746d9c2956e755371d14b11ded41ba165-rootfs.mount: Deactivated successfully. Sep 13 01:39:03.703118 containerd[1501]: time="2025-09-13T01:39:03.703074539Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 13 01:39:03.733313 containerd[1501]: time="2025-09-13T01:39:03.733156756Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1\"" Sep 13 01:39:03.735001 containerd[1501]: time="2025-09-13T01:39:03.734303011Z" level=info msg="StartContainer for \"8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1\"" Sep 13 01:39:03.777587 systemd[1]: Started cri-containerd-8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1.scope - libcontainer container 8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1. Sep 13 01:39:03.813585 systemd[1]: cri-containerd-8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1.scope: Deactivated successfully. 
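Each "CreateContainer within sandbox …" / "StartContainer … returns successfully" pair above is one CRI round trip against the sandbox created at 01:39:00, and the scopes that deactivate seconds later are Cilium's init steps (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) exiting normally before cilium-agent starts. A rough sketch of that sequence with the same cri-api client as in the earlier example; the image reference and command are placeholders, since the log does not record them:

```go
package crisketch

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// createAndStart mirrors one CreateContainer/StartContainer pair from the log:
// a single init step created and started inside an existing pod sandbox.
func createAndStart(ctx context.Context, rt runtimeapi.RuntimeServiceClient,
	sandboxID string, sandbox *runtimeapi.PodSandboxConfig, name string) (string, error) {

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: name, Attempt: 0},
			// Placeholder image and command; not taken from this log.
			Image:   &runtimeapi.ImageSpec{Image: "quay.io/cilium/cilium:example"},
			Command: []string{"/init-step", name},
		},
		SandboxConfig: sandbox,
	})
	if err != nil {
		return "", err
	}
	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	})
	return created.ContainerId, err
}
```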
Sep 13 01:39:03.817552 containerd[1501]: time="2025-09-13T01:39:03.816661725Z" level=info msg="StartContainer for \"8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1\" returns successfully" Sep 13 01:39:03.856573 containerd[1501]: time="2025-09-13T01:39:03.856332508Z" level=info msg="shim disconnected" id=8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1 namespace=k8s.io Sep 13 01:39:03.856573 containerd[1501]: time="2025-09-13T01:39:03.856562905Z" level=warning msg="cleaning up after shim disconnected" id=8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1 namespace=k8s.io Sep 13 01:39:03.856573 containerd[1501]: time="2025-09-13T01:39:03.856584547Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 13 01:39:04.411630 kubelet[2646]: I0913 01:39:04.410965 2646 setters.go:602] "Node became not ready" node="srv-bbx8z.gb1.brightbox.com" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T01:39:04Z","lastTransitionTime":"2025-09-13T01:39:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 13 01:39:04.451705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8848d4a8d6b16d9f9a8afd4d68b93f06f6a48c98c8bf2435394e7d1cb7f9e1f1-rootfs.mount: Deactivated successfully. Sep 13 01:39:04.707292 containerd[1501]: time="2025-09-13T01:39:04.707213886Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 13 01:39:04.730670 containerd[1501]: time="2025-09-13T01:39:04.730606810Z" level=info msg="CreateContainer within sandbox \"32ca443ed02a58fdba8dadc6c6d3ef7d4606f84b755f210c78f4e5ac11fd117f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327\"" Sep 13 01:39:04.732713 containerd[1501]: time="2025-09-13T01:39:04.732674240Z" level=info msg="StartContainer for \"fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327\"" Sep 13 01:39:04.776601 systemd[1]: Started cri-containerd-fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327.scope - libcontainer container fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327. 
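The "Node became not ready" entry above is the kubelet publishing a Ready=False condition with reason KubeletNotReady while the replacement cilium-agent is still initialising its CNI plugin; it clears once the agent is up. A small client-go sketch that reads the same condition for this node; the kubeconfig path is an assumption:

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: an admin kubeconfig at a conventional location.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := cs.CoreV1().Nodes().Get(context.Background(),
		"srv-bbx8z.gb1.brightbox.com", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range node.Status.Conditions {
		// While the CNI plugin is initialising this prints Ready=False
		// with reason KubeletNotReady, matching the condition in the log.
		fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
	}
}
```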
Sep 13 01:39:04.819793 containerd[1501]: time="2025-09-13T01:39:04.819616410Z" level=info msg="StartContainer for \"fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327\" returns successfully" Sep 13 01:39:05.557416 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 13 01:39:05.735015 kubelet[2646]: I0913 01:39:05.734904 2646 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8nl9m" podStartSLOduration=5.734847759 podStartE2EDuration="5.734847759s" podCreationTimestamp="2025-09-13 01:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 01:39:05.73393316 +0000 UTC m=+145.147867661" watchObservedRunningTime="2025-09-13 01:39:05.734847759 +0000 UTC m=+145.148782244" Sep 13 01:39:08.169770 kubelet[2646]: E0913 01:39:08.168433 2646 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:35526->127.0.0.1:43837: write tcp 127.0.0.1:35526->127.0.0.1:43837: write: broken pipe Sep 13 01:39:09.420592 systemd-networkd[1415]: lxc_health: Link UP Sep 13 01:39:09.421683 systemd-networkd[1415]: lxc_health: Gained carrier Sep 13 01:39:10.324555 systemd[1]: run-containerd-runc-k8s.io-fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327-runc.PQE0ey.mount: Deactivated successfully. Sep 13 01:39:11.382887 systemd-networkd[1415]: lxc_health: Gained IPv6LL Sep 13 01:39:14.822968 systemd[1]: run-containerd-runc-k8s.io-fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327-runc.kVJdIV.mount: Deactivated successfully. Sep 13 01:39:17.041299 systemd[1]: run-containerd-runc-k8s.io-fa3cc320ecb44263c9cc399a515337f7af7ffa2c757bb5d411d5af02876bb327-runc.3fe4ym.mount: Deactivated successfully. Sep 13 01:39:17.256393 sshd[4567]: pam_unix(sshd:session): session closed for user core Sep 13 01:39:17.262181 systemd[1]: sshd@25-10.230.67.162:22-139.178.68.195:48368.service: Deactivated successfully. Sep 13 01:39:17.266660 systemd[1]: session-27.scope: Deactivated successfully. Sep 13 01:39:17.269271 systemd-logind[1482]: Session 27 logged out. Waiting for processes to exit. Sep 13 01:39:17.271970 systemd-logind[1482]: Removed session 27.
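The pod_startup_latency_tracker entry above reports podStartSLOduration=5.734847759s for cilium-8nl9m. Since no image pull happened (both pulling timestamps are the zero time), that figure is essentially observedRunningTime minus podCreationTimestamp; the quick check below, using the two timestamps copied from that entry, reproduces it to within about a millisecond:

```go
package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps copied from the pod_startup_latency_tracker entry.
	created, err := time.Parse("2006-01-02 15:04:05 -0700 MST",
		"2025-09-13 01:39:00 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}
	running, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST",
		"2025-09-13 01:39:05.73393316 +0000 UTC")
	if err != nil {
		log.Fatal(err)
	}

	// With no image pull, startup latency is roughly the gap between
	// pod creation and the first observation of the pod running.
	fmt.Println(running.Sub(created)) // ≈ 5.734s, close to podStartSLOduration
}
```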