Sep 9 03:20:36.044223 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025 Sep 9 03:20:36.044270 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 03:20:36.044285 kernel: BIOS-provided physical RAM map: Sep 9 03:20:36.044303 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Sep 9 03:20:36.044313 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Sep 9 03:20:36.044324 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Sep 9 03:20:36.044336 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Sep 9 03:20:36.044346 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Sep 9 03:20:36.044357 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Sep 9 03:20:36.044368 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Sep 9 03:20:36.044379 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Sep 9 03:20:36.044389 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Sep 9 03:20:36.044405 kernel: NX (Execute Disable) protection: active Sep 9 03:20:36.044416 kernel: APIC: Static calls initialized Sep 9 03:20:36.044429 kernel: SMBIOS 2.8 present. Sep 9 03:20:36.044441 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Sep 9 03:20:36.044453 kernel: Hypervisor detected: KVM Sep 9 03:20:36.044469 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Sep 9 03:20:36.044480 kernel: kvm-clock: using sched offset of 4437865223 cycles Sep 9 03:20:36.044493 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Sep 9 03:20:36.044505 kernel: tsc: Detected 2499.998 MHz processor Sep 9 03:20:36.044517 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Sep 9 03:20:36.044529 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Sep 9 03:20:36.044541 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Sep 9 03:20:36.044553 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Sep 9 03:20:36.044576 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Sep 9 03:20:36.044595 kernel: Using GB pages for direct mapping Sep 9 03:20:36.044607 kernel: ACPI: Early table checksum verification disabled Sep 9 03:20:36.044619 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Sep 9 03:20:36.044631 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044642 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044654 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044666 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Sep 9 03:20:36.044677 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044689 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 
03:20:36.044706 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044718 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 03:20:36.044730 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Sep 9 03:20:36.044741 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Sep 9 03:20:36.044753 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Sep 9 03:20:36.044771 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Sep 9 03:20:36.044784 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Sep 9 03:20:36.044801 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Sep 9 03:20:36.044813 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Sep 9 03:20:36.044826 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Sep 9 03:20:36.044838 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Sep 9 03:20:36.044850 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Sep 9 03:20:36.044862 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Sep 9 03:20:36.044875 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Sep 9 03:20:36.044887 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Sep 9 03:20:36.044904 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Sep 9 03:20:36.044916 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Sep 9 03:20:36.044928 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Sep 9 03:20:36.044940 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Sep 9 03:20:36.044953 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Sep 9 03:20:36.044965 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Sep 9 03:20:36.044977 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Sep 9 03:20:36.044989 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Sep 9 03:20:36.045001 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Sep 9 03:20:36.045018 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Sep 9 03:20:36.045030 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Sep 9 03:20:36.045043 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Sep 9 03:20:36.045055 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Sep 9 03:20:36.045068 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Sep 9 03:20:36.045081 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Sep 9 03:20:36.045093 kernel: Zone ranges: Sep 9 03:20:36.045106 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Sep 9 03:20:36.045118 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Sep 9 03:20:36.045135 kernel: Normal empty Sep 9 03:20:36.045147 kernel: Movable zone start for each node Sep 9 03:20:36.045160 kernel: Early memory node ranges Sep 9 03:20:36.045863 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Sep 9 03:20:36.045879 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Sep 9 03:20:36.045892 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Sep 9 03:20:36.045904 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Sep 9 03:20:36.045917 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Sep 9 03:20:36.045929 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Sep 9 03:20:36.045941 kernel: ACPI: PM-Timer IO Port: 0x608 Sep 9 03:20:36.045962 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Sep 9 03:20:36.045975 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Sep 9 03:20:36.045987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Sep 9 03:20:36.045999 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Sep 9 03:20:36.046012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Sep 9 03:20:36.046025 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Sep 9 03:20:36.046037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Sep 9 03:20:36.046050 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Sep 9 03:20:36.046062 kernel: TSC deadline timer available Sep 9 03:20:36.046079 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Sep 9 03:20:36.046092 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Sep 9 03:20:36.046104 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Sep 9 03:20:36.046117 kernel: Booting paravirtualized kernel on KVM Sep 9 03:20:36.046129 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Sep 9 03:20:36.046142 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Sep 9 03:20:36.046154 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144 Sep 9 03:20:36.046178 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152 Sep 9 03:20:36.046193 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Sep 9 03:20:36.046211 kernel: kvm-guest: PV spinlocks enabled Sep 9 03:20:36.046224 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Sep 9 03:20:36.046238 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 03:20:36.046251 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 03:20:36.046264 kernel: random: crng init done Sep 9 03:20:36.046276 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 03:20:36.046288 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Sep 9 03:20:36.046301 kernel: Fallback order for Node 0: 0 Sep 9 03:20:36.046318 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Sep 9 03:20:36.046331 kernel: Policy zone: DMA32 Sep 9 03:20:36.046343 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 03:20:36.046356 kernel: software IO TLB: area num 16. Sep 9 03:20:36.046368 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 194828K reserved, 0K cma-reserved) Sep 9 03:20:36.046381 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Sep 9 03:20:36.046393 kernel: Kernel/User page tables isolation: enabled Sep 9 03:20:36.046406 kernel: ftrace: allocating 37969 entries in 149 pages Sep 9 03:20:36.046418 kernel: ftrace: allocated 149 pages with 4 groups Sep 9 03:20:36.046435 kernel: Dynamic Preempt: voluntary Sep 9 03:20:36.046448 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 03:20:36.046461 kernel: rcu: RCU event tracing is enabled. Sep 9 03:20:36.046474 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Sep 9 03:20:36.046487 kernel: Trampoline variant of Tasks RCU enabled. 
Sep 9 03:20:36.046511 kernel: Rude variant of Tasks RCU enabled. Sep 9 03:20:36.046529 kernel: Tracing variant of Tasks RCU enabled. Sep 9 03:20:36.046542 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 9 03:20:36.046555 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Sep 9 03:20:36.046578 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Sep 9 03:20:36.046592 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 03:20:36.046605 kernel: Console: colour VGA+ 80x25 Sep 9 03:20:36.046624 kernel: printk: console [tty0] enabled Sep 9 03:20:36.046637 kernel: printk: console [ttyS0] enabled Sep 9 03:20:36.046650 kernel: ACPI: Core revision 20230628 Sep 9 03:20:36.046664 kernel: APIC: Switch to symmetric I/O mode setup Sep 9 03:20:36.046677 kernel: x2apic enabled Sep 9 03:20:36.046695 kernel: APIC: Switched APIC routing to: physical x2apic Sep 9 03:20:36.046708 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 9 03:20:36.046722 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Sep 9 03:20:36.046735 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Sep 9 03:20:36.046748 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Sep 9 03:20:36.046761 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Sep 9 03:20:36.046773 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Sep 9 03:20:36.046786 kernel: Spectre V2 : Mitigation: Retpolines Sep 9 03:20:36.046799 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Sep 9 03:20:36.046812 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Sep 9 03:20:36.046830 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Sep 9 03:20:36.046843 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Sep 9 03:20:36.046856 kernel: MDS: Mitigation: Clear CPU buffers Sep 9 03:20:36.046868 kernel: MMIO Stale Data: Unknown: No mitigations Sep 9 03:20:36.046881 kernel: SRBDS: Unknown: Dependent on hypervisor status Sep 9 03:20:36.046894 kernel: active return thunk: its_return_thunk Sep 9 03:20:36.046907 kernel: ITS: Mitigation: Aligned branch/return thunks Sep 9 03:20:36.046920 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Sep 9 03:20:36.046933 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Sep 9 03:20:36.046945 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Sep 9 03:20:36.046963 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Sep 9 03:20:36.046976 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Sep 9 03:20:36.046989 kernel: Freeing SMP alternatives memory: 32K Sep 9 03:20:36.047002 kernel: pid_max: default: 32768 minimum: 301 Sep 9 03:20:36.047015 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 9 03:20:36.047027 kernel: landlock: Up and running. Sep 9 03:20:36.047040 kernel: SELinux: Initializing. 
Sep 9 03:20:36.047053 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 03:20:36.047066 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Sep 9 03:20:36.047079 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Sep 9 03:20:36.047092 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 9 03:20:36.047110 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 9 03:20:36.047123 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Sep 9 03:20:36.047137 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Sep 9 03:20:36.047150 kernel: signal: max sigframe size: 1776 Sep 9 03:20:36.047163 kernel: rcu: Hierarchical SRCU implementation. Sep 9 03:20:36.052245 kernel: rcu: Max phase no-delay instances is 400. Sep 9 03:20:36.052263 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Sep 9 03:20:36.052277 kernel: smp: Bringing up secondary CPUs ... Sep 9 03:20:36.052291 kernel: smpboot: x86: Booting SMP configuration: Sep 9 03:20:36.052313 kernel: .... node #0, CPUs: #1 Sep 9 03:20:36.052327 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Sep 9 03:20:36.052340 kernel: smp: Brought up 1 node, 2 CPUs Sep 9 03:20:36.052353 kernel: smpboot: Max logical packages: 16 Sep 9 03:20:36.052367 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Sep 9 03:20:36.052380 kernel: devtmpfs: initialized Sep 9 03:20:36.052393 kernel: x86/mm: Memory block size: 128MB Sep 9 03:20:36.052407 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 03:20:36.052420 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Sep 9 03:20:36.052434 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 03:20:36.052452 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 03:20:36.052465 kernel: audit: initializing netlink subsys (disabled) Sep 9 03:20:36.052479 kernel: audit: type=2000 audit(1757388034.099:1): state=initialized audit_enabled=0 res=1 Sep 9 03:20:36.052492 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 03:20:36.052505 kernel: thermal_sys: Registered thermal governor 'user_space' Sep 9 03:20:36.052518 kernel: cpuidle: using governor menu Sep 9 03:20:36.052532 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 03:20:36.052545 kernel: dca service started, version 1.12.1 Sep 9 03:20:36.052558 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Sep 9 03:20:36.052590 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Sep 9 03:20:36.052603 kernel: PCI: Using configuration type 1 for base access Sep 9 03:20:36.052616 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Sep 9 03:20:36.052630 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 03:20:36.052643 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 03:20:36.052656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 03:20:36.052669 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 03:20:36.052682 kernel: ACPI: Added _OSI(Module Device) Sep 9 03:20:36.052700 kernel: ACPI: Added _OSI(Processor Device) Sep 9 03:20:36.052714 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 03:20:36.052728 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 03:20:36.052741 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Sep 9 03:20:36.052754 kernel: ACPI: Interpreter enabled Sep 9 03:20:36.052767 kernel: ACPI: PM: (supports S0 S5) Sep 9 03:20:36.052780 kernel: ACPI: Using IOAPIC for interrupt routing Sep 9 03:20:36.052793 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Sep 9 03:20:36.052806 kernel: PCI: Using E820 reservations for host bridge windows Sep 9 03:20:36.052824 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Sep 9 03:20:36.052838 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 03:20:36.053147 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 03:20:36.053374 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 03:20:36.053578 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 03:20:36.053599 kernel: PCI host bridge to bus 0000:00 Sep 9 03:20:36.053796 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Sep 9 03:20:36.053967 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Sep 9 03:20:36.054124 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Sep 9 03:20:36.057352 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Sep 9 03:20:36.057522 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Sep 9 03:20:36.057698 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Sep 9 03:20:36.057856 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 03:20:36.058052 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Sep 9 03:20:36.060307 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Sep 9 03:20:36.060495 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Sep 9 03:20:36.060687 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Sep 9 03:20:36.060861 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Sep 9 03:20:36.061033 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Sep 9 03:20:36.061263 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.061454 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Sep 9 03:20:36.061658 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.061833 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Sep 9 03:20:36.062017 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.062205 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Sep 9 03:20:36.062393 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.062578 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Sep 9 03:20:36.062779 kernel: 
pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.062954 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Sep 9 03:20:36.063136 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.065355 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Sep 9 03:20:36.065554 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.065754 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Sep 9 03:20:36.065962 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Sep 9 03:20:36.066141 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Sep 9 03:20:36.068367 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Sep 9 03:20:36.068551 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Sep 9 03:20:36.068743 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Sep 9 03:20:36.068918 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Sep 9 03:20:36.069102 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Sep 9 03:20:36.069338 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Sep 9 03:20:36.069513 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Sep 9 03:20:36.069701 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Sep 9 03:20:36.069873 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Sep 9 03:20:36.070054 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Sep 9 03:20:36.070243 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Sep 9 03:20:36.070434 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Sep 9 03:20:36.070624 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Sep 9 03:20:36.070797 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Sep 9 03:20:36.070979 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Sep 9 03:20:36.071153 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Sep 9 03:20:36.073403 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Sep 9 03:20:36.073602 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Sep 9 03:20:36.073793 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 9 03:20:36.073965 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Sep 9 03:20:36.074137 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 03:20:36.074343 kernel: pci_bus 0000:02: extended config space not accessible Sep 9 03:20:36.074550 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Sep 9 03:20:36.074760 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Sep 9 03:20:36.074941 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 9 03:20:36.075119 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 9 03:20:36.080806 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Sep 9 03:20:36.081004 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Sep 9 03:20:36.081206 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 9 03:20:36.081384 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 9 03:20:36.081558 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 03:20:36.081778 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Sep 9 03:20:36.081961 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Sep 9 03:20:36.082139 kernel: pci 0000:00:02.2: PCI bridge to [bus 
04] Sep 9 03:20:36.082330 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 9 03:20:36.082504 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 03:20:36.082700 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 9 03:20:36.082874 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 9 03:20:36.083056 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 03:20:36.083289 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 9 03:20:36.083487 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 9 03:20:36.083685 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 03:20:36.083874 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 9 03:20:36.084064 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 9 03:20:36.085309 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 03:20:36.085491 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 9 03:20:36.085688 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 9 03:20:36.085859 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 03:20:36.086034 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 9 03:20:36.086225 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 9 03:20:36.086399 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 03:20:36.086419 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Sep 9 03:20:36.086433 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Sep 9 03:20:36.086447 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Sep 9 03:20:36.086460 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Sep 9 03:20:36.086481 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Sep 9 03:20:36.086494 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Sep 9 03:20:36.086507 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Sep 9 03:20:36.086520 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Sep 9 03:20:36.086533 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Sep 9 03:20:36.086547 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Sep 9 03:20:36.086560 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Sep 9 03:20:36.086586 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Sep 9 03:20:36.086600 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Sep 9 03:20:36.086619 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Sep 9 03:20:36.086632 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Sep 9 03:20:36.086646 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Sep 9 03:20:36.086659 kernel: iommu: Default domain type: Translated Sep 9 03:20:36.086672 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Sep 9 03:20:36.086685 kernel: PCI: Using ACPI for IRQ routing Sep 9 03:20:36.086698 kernel: PCI: pci_cache_line_size set to 64 bytes Sep 9 03:20:36.086711 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Sep 9 03:20:36.086724 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Sep 9 03:20:36.086900 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Sep 9 03:20:36.087072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Sep 9 03:20:36.090508 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Sep 9 03:20:36.090533 kernel: vgaarb: loaded Sep 9 03:20:36.090548 kernel: clocksource: Switched to clocksource kvm-clock Sep 9 03:20:36.090561 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 03:20:36.090586 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 03:20:36.090600 kernel: pnp: PnP ACPI init Sep 9 03:20:36.090796 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Sep 9 03:20:36.090819 kernel: pnp: PnP ACPI: found 5 devices Sep 9 03:20:36.090833 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Sep 9 03:20:36.090846 kernel: NET: Registered PF_INET protocol family Sep 9 03:20:36.090859 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 03:20:36.090873 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Sep 9 03:20:36.090886 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 03:20:36.090899 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Sep 9 03:20:36.090919 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Sep 9 03:20:36.090933 kernel: TCP: Hash tables configured (established 16384 bind 16384) Sep 9 03:20:36.090946 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 03:20:36.090960 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Sep 9 03:20:36.090973 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 03:20:36.090986 kernel: NET: Registered PF_XDP protocol family Sep 9 03:20:36.091160 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Sep 9 03:20:36.091359 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Sep 9 03:20:36.091552 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Sep 9 03:20:36.091740 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Sep 9 03:20:36.091915 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Sep 9 03:20:36.092086 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Sep 9 03:20:36.099149 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Sep 9 03:20:36.099369 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Sep 9 03:20:36.099590 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Sep 9 03:20:36.099773 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Sep 9 03:20:36.099948 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Sep 9 03:20:36.100124 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Sep 9 03:20:36.100330 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Sep 9 03:20:36.100505 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Sep 9 03:20:36.100694 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Sep 9 03:20:36.100866 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Sep 9 03:20:36.101074 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Sep 9 03:20:36.101289 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Sep 9 03:20:36.101467 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Sep 9 03:20:36.101659 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Sep 9 03:20:36.101836 kernel: pci 0000:00:02.0: bridge window 
[mem 0xfd800000-0xfdbfffff] Sep 9 03:20:36.102011 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 03:20:36.102203 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Sep 9 03:20:36.102379 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Sep 9 03:20:36.102573 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Sep 9 03:20:36.102753 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 03:20:36.102934 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Sep 9 03:20:36.103117 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Sep 9 03:20:36.103316 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Sep 9 03:20:36.103501 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 03:20:36.103699 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Sep 9 03:20:36.103875 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Sep 9 03:20:36.104050 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Sep 9 03:20:36.104272 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 03:20:36.104445 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Sep 9 03:20:36.104635 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Sep 9 03:20:36.104808 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Sep 9 03:20:36.104980 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 03:20:36.105150 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Sep 9 03:20:36.106438 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Sep 9 03:20:36.106625 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Sep 9 03:20:36.106798 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 03:20:36.106969 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Sep 9 03:20:36.107140 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Sep 9 03:20:36.111466 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Sep 9 03:20:36.111679 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 03:20:36.111858 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Sep 9 03:20:36.112034 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Sep 9 03:20:36.112232 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Sep 9 03:20:36.112407 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 03:20:36.112584 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Sep 9 03:20:36.112748 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Sep 9 03:20:36.112906 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Sep 9 03:20:36.113076 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Sep 9 03:20:36.113253 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Sep 9 03:20:36.113410 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Sep 9 03:20:36.113611 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Sep 9 03:20:36.113779 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Sep 9 03:20:36.113946 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Sep 9 03:20:36.114130 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Sep 9 03:20:36.114335 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Sep 9 03:20:36.114504 kernel: pci_bus 0000:03: resource 1 
[mem 0xfe800000-0xfe9fffff] Sep 9 03:20:36.114687 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Sep 9 03:20:36.114866 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Sep 9 03:20:36.115033 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Sep 9 03:20:36.115222 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Sep 9 03:20:36.115417 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Sep 9 03:20:36.115608 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Sep 9 03:20:36.115780 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Sep 9 03:20:36.115967 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Sep 9 03:20:36.116133 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Sep 9 03:20:36.118432 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Sep 9 03:20:36.118631 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Sep 9 03:20:36.118809 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Sep 9 03:20:36.118972 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Sep 9 03:20:36.119147 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Sep 9 03:20:36.119325 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Sep 9 03:20:36.119487 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Sep 9 03:20:36.119675 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Sep 9 03:20:36.119840 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Sep 9 03:20:36.120012 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Sep 9 03:20:36.120034 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Sep 9 03:20:36.120048 kernel: PCI: CLS 0 bytes, default 64 Sep 9 03:20:36.120063 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Sep 9 03:20:36.120077 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Sep 9 03:20:36.120092 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 9 03:20:36.120106 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 9 03:20:36.120120 kernel: Initialise system trusted keyrings Sep 9 03:20:36.120141 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 9 03:20:36.120155 kernel: Key type asymmetric registered Sep 9 03:20:36.121775 kernel: Asymmetric key parser 'x509' registered Sep 9 03:20:36.121797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 9 03:20:36.121811 kernel: io scheduler mq-deadline registered Sep 9 03:20:36.121825 kernel: io scheduler kyber registered Sep 9 03:20:36.121847 kernel: io scheduler bfq registered Sep 9 03:20:36.122042 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 9 03:20:36.122241 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 9 03:20:36.122428 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.122624 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 9 03:20:36.122802 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 9 03:20:36.122976 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.123154 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 9 
03:20:36.124041 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 9 03:20:36.124342 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.124524 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 9 03:20:36.124713 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Sep 9 03:20:36.124886 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.125061 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 9 03:20:36.125257 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 9 03:20:36.125441 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.125630 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 9 03:20:36.125803 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 9 03:20:36.125976 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.126149 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 9 03:20:36.126371 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 9 03:20:36.126552 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.126743 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 9 03:20:36.126916 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 9 03:20:36.127088 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 9 03:20:36.127110 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 9 03:20:36.127126 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 9 03:20:36.127140 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 9 03:20:36.127161 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 03:20:36.127188 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 9 03:20:36.127203 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Sep 9 03:20:36.127217 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 9 03:20:36.127231 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 9 03:20:36.127245 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 9 03:20:36.127424 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 9 03:20:36.127604 kernel: rtc_cmos 00:03: registered as rtc0 Sep 9 03:20:36.127777 kernel: rtc_cmos 00:03: setting system clock to 2025-09-09T03:20:35 UTC (1757388035) Sep 9 03:20:36.127942 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 9 03:20:36.127962 kernel: intel_pstate: CPU model not supported Sep 9 03:20:36.127976 kernel: NET: Registered PF_INET6 protocol family Sep 9 03:20:36.127990 kernel: Segment Routing with IPv6 Sep 9 03:20:36.128004 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 03:20:36.128017 kernel: NET: Registered PF_PACKET protocol family Sep 9 03:20:36.128031 kernel: Key type dns_resolver registered Sep 9 03:20:36.128051 kernel: IPI shorthand broadcast: enabled Sep 9 03:20:36.128066 kernel: sched_clock: Marking stable (1178014796, 233158135)->(1656946254, -245773323) Sep 9 03:20:36.128080 kernel: 
registered taskstats version 1 Sep 9 03:20:36.128094 kernel: Loading compiled-in X.509 certificates Sep 9 03:20:36.128108 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51' Sep 9 03:20:36.128122 kernel: Key type .fscrypt registered Sep 9 03:20:36.128135 kernel: Key type fscrypt-provisioning registered Sep 9 03:20:36.128149 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 03:20:36.128163 kernel: ima: Allocated hash algorithm: sha1 Sep 9 03:20:36.128198 kernel: ima: No architecture policies found Sep 9 03:20:36.128213 kernel: clk: Disabling unused clocks Sep 9 03:20:36.128227 kernel: Freeing unused kernel image (initmem) memory: 42880K Sep 9 03:20:36.128241 kernel: Write protecting the kernel read-only data: 36864k Sep 9 03:20:36.128254 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Sep 9 03:20:36.128268 kernel: Run /init as init process Sep 9 03:20:36.128287 kernel: with arguments: Sep 9 03:20:36.128301 kernel: /init Sep 9 03:20:36.128315 kernel: with environment: Sep 9 03:20:36.128332 kernel: HOME=/ Sep 9 03:20:36.128346 kernel: TERM=linux Sep 9 03:20:36.128360 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 03:20:36.128378 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 03:20:36.128395 systemd[1]: Detected virtualization kvm. Sep 9 03:20:36.128410 systemd[1]: Detected architecture x86-64. Sep 9 03:20:36.128425 systemd[1]: Running in initrd. Sep 9 03:20:36.128439 systemd[1]: No hostname configured, using default hostname. Sep 9 03:20:36.128459 systemd[1]: Hostname set to . Sep 9 03:20:36.128474 systemd[1]: Initializing machine ID from VM UUID. Sep 9 03:20:36.128488 systemd[1]: Queued start job for default target initrd.target. Sep 9 03:20:36.128503 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 03:20:36.128518 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 03:20:36.128533 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 03:20:36.128548 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 03:20:36.128573 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 03:20:36.128597 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 03:20:36.128613 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 03:20:36.128628 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 03:20:36.128643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 03:20:36.128658 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 03:20:36.128673 systemd[1]: Reached target paths.target - Path Units. Sep 9 03:20:36.128687 systemd[1]: Reached target slices.target - Slice Units. Sep 9 03:20:36.128708 systemd[1]: Reached target swap.target - Swaps. 
Sep 9 03:20:36.128722 systemd[1]: Reached target timers.target - Timer Units. Sep 9 03:20:36.128737 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 03:20:36.128752 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 03:20:36.128767 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 03:20:36.128782 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 9 03:20:36.128797 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 03:20:36.128811 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 03:20:36.128830 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 03:20:36.128845 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 03:20:36.128860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 03:20:36.128875 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 03:20:36.128890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 9 03:20:36.128905 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 03:20:36.128919 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 03:20:36.128934 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 03:20:36.128949 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 03:20:36.128968 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 03:20:36.128984 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 03:20:36.128998 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 03:20:36.129051 systemd-journald[200]: Collecting audit messages is disabled. Sep 9 03:20:36.129091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 03:20:36.129107 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 03:20:36.129122 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 03:20:36.129137 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 03:20:36.129157 kernel: Bridge firewalling registered Sep 9 03:20:36.129191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 03:20:36.129208 systemd-journald[200]: Journal started Sep 9 03:20:36.129234 systemd-journald[200]: Runtime Journal (/run/log/journal/0c6892465c66490e867432a382ff544a) is 4.7M, max 38.0M, 33.2M free. Sep 9 03:20:36.043219 systemd-modules-load[201]: Inserted module 'overlay' Sep 9 03:20:36.138548 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 03:20:36.123262 systemd-modules-load[201]: Inserted module 'br_netfilter' Sep 9 03:20:36.139645 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 03:20:36.140937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 03:20:36.150435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 03:20:36.153355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 03:20:36.161372 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Sep 9 03:20:36.173127 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 03:20:36.185460 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 03:20:36.188119 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 03:20:36.190389 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 03:20:36.202427 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 03:20:36.204406 dracut-cmdline[232]: dracut-dracut-053 Sep 9 03:20:36.208809 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29 Sep 9 03:20:36.241063 systemd-resolved[238]: Positive Trust Anchors: Sep 9 03:20:36.242093 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 03:20:36.242138 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 03:20:36.250237 systemd-resolved[238]: Defaulting to hostname 'linux'. Sep 9 03:20:36.252912 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 03:20:36.254029 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 03:20:36.305207 kernel: SCSI subsystem initialized Sep 9 03:20:36.317195 kernel: Loading iSCSI transport class v2.0-870. Sep 9 03:20:36.331231 kernel: iscsi: registered transport (tcp) Sep 9 03:20:36.357304 kernel: iscsi: registered transport (qla4xxx) Sep 9 03:20:36.357387 kernel: QLogic iSCSI HBA Driver Sep 9 03:20:36.413666 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 03:20:36.420381 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 03:20:36.453818 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 03:20:36.453899 kernel: device-mapper: uevent: version 1.0.3 Sep 9 03:20:36.456213 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 9 03:20:36.504284 kernel: raid6: sse2x4 gen() 13612 MB/s Sep 9 03:20:36.522251 kernel: raid6: sse2x2 gen() 9382 MB/s Sep 9 03:20:36.540814 kernel: raid6: sse2x1 gen() 10190 MB/s Sep 9 03:20:36.540887 kernel: raid6: using algorithm sse2x4 gen() 13612 MB/s Sep 9 03:20:36.559923 kernel: raid6: .... 
xor() 7660 MB/s, rmw enabled Sep 9 03:20:36.559983 kernel: raid6: using ssse3x2 recovery algorithm Sep 9 03:20:36.586233 kernel: xor: automatically using best checksumming function avx Sep 9 03:20:36.785208 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 03:20:36.800476 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 9 03:20:36.809458 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 03:20:36.827887 systemd-udevd[419]: Using default interface naming scheme 'v255'. Sep 9 03:20:36.834944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 03:20:36.842457 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 03:20:36.872115 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Sep 9 03:20:36.912735 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 03:20:36.921428 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 03:20:37.036707 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 03:20:37.044577 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 03:20:37.077623 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 03:20:37.079899 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 03:20:37.082041 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 03:20:37.084476 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 03:20:37.091360 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 03:20:37.121402 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 03:20:37.175100 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Sep 9 03:20:37.191194 kernel: cryptd: max_cpu_qlen set to 1000 Sep 9 03:20:37.194285 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 9 03:20:37.204225 kernel: ACPI: bus type USB registered Sep 9 03:20:37.210290 kernel: usbcore: registered new interface driver usbfs Sep 9 03:20:37.218202 kernel: usbcore: registered new interface driver hub Sep 9 03:20:37.223195 kernel: usbcore: registered new device driver usb Sep 9 03:20:37.228615 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 03:20:37.228650 kernel: GPT:17805311 != 125829119 Sep 9 03:20:37.228669 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 03:20:37.230186 kernel: GPT:17805311 != 125829119 Sep 9 03:20:37.231234 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 03:20:37.232689 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 03:20:37.235318 kernel: AVX version of gcm_enc/dec engaged. Sep 9 03:20:37.236648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 03:20:37.238559 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 03:20:37.240398 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 03:20:37.241130 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 03:20:37.246416 kernel: AES CTR mode by8 optimization enabled Sep 9 03:20:37.242204 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 03:20:37.243293 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 03:20:37.259194 kernel: libata version 3.00 loaded. Sep 9 03:20:37.261432 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 03:20:37.291281 kernel: ahci 0000:00:1f.2: version 3.0 Sep 9 03:20:37.291596 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Sep 9 03:20:37.294857 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 03:20:37.295110 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Sep 9 03:20:37.299472 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Sep 9 03:20:37.303731 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Sep 9 03:20:37.303996 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Sep 9 03:20:37.304248 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Sep 9 03:20:37.307290 kernel: hub 1-0:1.0: USB hub found Sep 9 03:20:37.311007 kernel: hub 1-0:1.0: 4 ports detected Sep 9 03:20:37.311259 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Sep 9 03:20:37.311504 kernel: hub 2-0:1.0: USB hub found Sep 9 03:20:37.311746 kernel: hub 2-0:1.0: 4 ports detected Sep 9 03:20:37.317266 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Sep 9 03:20:37.317512 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Sep 9 03:20:37.330593 kernel: scsi host0: ahci Sep 9 03:20:37.335190 kernel: scsi host1: ahci Sep 9 03:20:37.336344 kernel: scsi host2: ahci Sep 9 03:20:37.341185 kernel: scsi host3: ahci Sep 9 03:20:37.341416 kernel: scsi host4: ahci Sep 9 03:20:37.346946 kernel: scsi host5: ahci Sep 9 03:20:37.347241 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Sep 9 03:20:37.347266 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Sep 9 03:20:37.347285 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Sep 9 03:20:37.347303 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Sep 9 03:20:37.347331 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Sep 9 03:20:37.347351 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Sep 9 03:20:37.353649 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 03:20:37.441104 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476) Sep 9 03:20:37.441142 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (465) Sep 9 03:20:37.442446 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 03:20:37.450280 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 03:20:37.461444 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 03:20:37.462288 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 03:20:37.470110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 03:20:37.477369 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 03:20:37.481349 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 03:20:37.489660 disk-uuid[561]: Primary Header is updated. 
Sep 9 03:20:37.489660 disk-uuid[561]: Secondary Entries is updated. Sep 9 03:20:37.489660 disk-uuid[561]: Secondary Header is updated. Sep 9 03:20:37.499059 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 03:20:37.509185 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 03:20:37.515208 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 03:20:37.517045 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 03:20:37.542204 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Sep 9 03:20:37.667667 kernel: ata2: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.667997 kernel: ata3: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.668025 kernel: ata6: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.668044 kernel: ata4: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.668062 kernel: ata5: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.668080 kernel: ata1: SATA link down (SStatus 0 SControl 300) Sep 9 03:20:37.717196 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 03:20:37.724301 kernel: usbcore: registered new interface driver usbhid Sep 9 03:20:37.724339 kernel: usbhid: USB HID core driver Sep 9 03:20:37.732209 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Sep 9 03:20:37.732268 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Sep 9 03:20:38.514197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 03:20:38.514637 disk-uuid[563]: The operation has completed successfully. Sep 9 03:20:38.579661 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 03:20:38.579822 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 03:20:38.597381 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 03:20:38.604128 sh[584]: Success Sep 9 03:20:38.621215 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Sep 9 03:20:38.682249 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 03:20:38.704296 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 03:20:38.707129 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 03:20:38.734403 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a Sep 9 03:20:38.734464 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 9 03:20:38.734484 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 9 03:20:38.736799 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 03:20:38.739951 kernel: BTRFS info (device dm-0): using free space tree Sep 9 03:20:38.748338 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 03:20:38.750675 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 03:20:38.756360 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 03:20:38.762375 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
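verity-setup.service and the "device-mapper: verity: sha256" line above set up the integrity layer for the read-only /usr image: every block read through /dev/mapper/usr is checked against a sha256 hash tree whose root must match the value supplied at boot. The toy sketch below shows only the root-hash idea; the block layout, padding and salt handling of real dm-verity are simplified away:

    import hashlib

    def toy_verity_root(data: bytes, block=4096) -> str:
        # Leaf level: one sha256 per (zero-padded) data block.
        level = [hashlib.sha256(data[i:i + block].ljust(block, b"\0")).digest()
                 for i in range(0, len(data), block)] or [hashlib.sha256(b"").digest()]
        # Interior levels: hash groups of child digests (128 digests fit in a 4 KiB block).
        fanout = block // 32
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + fanout])).digest()
                     for i in range(0, len(level), fanout)]
        return level[0].hex()

    print(toy_verity_root(b"example payload"))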
Sep 9 03:20:38.781837 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 03:20:38.781891 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 03:20:38.784649 kernel: BTRFS info (device vda6): using free space tree Sep 9 03:20:38.790247 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 03:20:38.804097 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 9 03:20:38.807673 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 03:20:38.816827 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 03:20:38.824367 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 03:20:38.938857 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 03:20:38.948762 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 03:20:38.968606 ignition[692]: Ignition 2.19.0 Sep 9 03:20:38.969235 ignition[692]: Stage: fetch-offline Sep 9 03:20:38.969322 ignition[692]: no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:38.969347 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:38.969903 ignition[692]: parsed url from cmdline: "" Sep 9 03:20:38.973571 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 03:20:38.969910 ignition[692]: no config URL provided Sep 9 03:20:38.969921 ignition[692]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 03:20:38.969938 ignition[692]: no config at "/usr/lib/ignition/user.ign" Sep 9 03:20:38.969947 ignition[692]: failed to fetch config: resource requires networking Sep 9 03:20:38.970467 ignition[692]: Ignition finished successfully Sep 9 03:20:38.989100 systemd-networkd[771]: lo: Link UP Sep 9 03:20:38.989122 systemd-networkd[771]: lo: Gained carrier Sep 9 03:20:38.991519 systemd-networkd[771]: Enumeration completed Sep 9 03:20:38.991659 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 03:20:38.992621 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 03:20:38.992626 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 03:20:38.993960 systemd[1]: Reached target network.target - Network. Sep 9 03:20:38.994085 systemd-networkd[771]: eth0: Link UP Sep 9 03:20:38.994091 systemd-networkd[771]: eth0: Gained carrier Sep 9 03:20:38.994102 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 03:20:39.001365 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
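The fetch-offline messages above show Ignition's lookup order before the network is up: drop-in config directories, a config URL from the kernel command line, then /usr/lib/ignition/user.ign; with none of those present on this openstack instance it defers to the networked fetch stage started next. A rough sketch of that decision order, with invented function names (Ignition itself is Go; this is illustration only):

    import os

    BASE_DIR = "/usr/lib/ignition/base.d"
    PLATFORM_DIR = "/usr/lib/ignition/base.platform.d/openstack"
    USER_CONFIG = "/usr/lib/ignition/user.ign"

    def fetch_offline(cmdline_url=""):
        for d in (BASE_DIR, PLATFORM_DIR):
            if not os.path.isdir(d):
                print('no configs at "%s"' % d)        # mirrors the log lines
        if cmdline_url:
            return ("url", cmdline_url)
        if os.path.exists(USER_CONFIG):
            return ("file", USER_CONFIG)
        # Nothing available locally, and the platform serves user data over the
        # metadata service, so the networked "fetch" stage has to run instead.
        raise RuntimeError("failed to fetch config: resource requires networking")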
Sep 9 03:20:39.023706 ignition[774]: Ignition 2.19.0 Sep 9 03:20:39.023728 ignition[774]: Stage: fetch Sep 9 03:20:39.024041 ignition[774]: no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:39.025298 systemd-networkd[771]: eth0: DHCPv4 address 10.230.34.194/30, gateway 10.230.34.193 acquired from 10.230.34.193 Sep 9 03:20:39.024063 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:39.024236 ignition[774]: parsed url from cmdline: "" Sep 9 03:20:39.024243 ignition[774]: no config URL provided Sep 9 03:20:39.024253 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 03:20:39.024270 ignition[774]: no config at "/usr/lib/ignition/user.ign" Sep 9 03:20:39.024532 ignition[774]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Sep 9 03:20:39.024579 ignition[774]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Sep 9 03:20:39.024716 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Sep 9 03:20:39.025049 ignition[774]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Sep 9 03:20:39.225299 ignition[774]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Sep 9 03:20:39.241064 ignition[774]: GET result: OK Sep 9 03:20:39.241853 ignition[774]: parsing config with SHA512: 79699771d57f65f575d9a90e9512125e45a5424c786703d821f75e134028d187ea6a5aa86f23fb31f9adac32a0ba6254aafc978a7701980937c43d1491ad8b8b Sep 9 03:20:39.247809 unknown[774]: fetched base config from "system" Sep 9 03:20:39.247826 unknown[774]: fetched base config from "system" Sep 9 03:20:39.251315 ignition[774]: fetch: fetch complete Sep 9 03:20:39.247836 unknown[774]: fetched user config from "openstack" Sep 9 03:20:39.251325 ignition[774]: fetch: fetch passed Sep 9 03:20:39.253574 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 9 03:20:39.251398 ignition[774]: Ignition finished successfully Sep 9 03:20:39.265345 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 03:20:39.285240 ignition[781]: Ignition 2.19.0 Sep 9 03:20:39.285261 ignition[781]: Stage: kargs Sep 9 03:20:39.285528 ignition[781]: no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:39.285549 ignition[781]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:39.288579 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 03:20:39.287121 ignition[781]: kargs: kargs passed Sep 9 03:20:39.287206 ignition[781]: Ignition finished successfully Sep 9 03:20:39.295425 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 9 03:20:39.313975 ignition[788]: Ignition 2.19.0 Sep 9 03:20:39.313989 ignition[788]: Stage: disks Sep 9 03:20:39.314238 ignition[788]: no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:39.314258 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:39.315441 ignition[788]: disks: disks passed Sep 9 03:20:39.315541 ignition[788]: Ignition finished successfully Sep 9 03:20:39.318891 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 03:20:39.320734 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 03:20:39.321742 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 03:20:39.323347 systemd[1]: Reached target local-fs.target - Local File Systems. 
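The fetch stage above first waits for a config drive, then polls the OpenStack metadata service; attempt #1 fails because DHCP has not finished yet, and attempt #2 succeeds once eth0 has its lease. A minimal sketch of that retry loop (the URL is the one in the log; timeout and backoff values are guesses):

    import time
    import urllib.request, urllib.error

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    def get_user_data(retries=5, delay=0.2):
        for attempt in range(1, retries + 1):
            print("GET %s: attempt #%d" % (USER_DATA_URL, attempt))
            try:
                with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as err:
                print("GET error:", err)
                time.sleep(delay)
                delay *= 2        # simple exponential backoff between attempts
        raise RuntimeError("metadata service unreachable")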
Sep 9 03:20:39.324895 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 03:20:39.326315 systemd[1]: Reached target basic.target - Basic System. Sep 9 03:20:39.337425 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 03:20:39.355122 systemd-fsck[796]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Sep 9 03:20:39.477651 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 03:20:39.487349 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 03:20:39.606202 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none. Sep 9 03:20:39.606854 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 03:20:39.608198 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 03:20:39.615300 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 03:20:39.618307 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 03:20:39.620566 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 03:20:39.622135 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Sep 9 03:20:39.624057 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 03:20:39.631542 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (804) Sep 9 03:20:39.624101 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 03:20:39.642241 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 03:20:39.642292 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 03:20:39.642314 kernel: BTRFS info (device vda6): using free space tree Sep 9 03:20:39.642338 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 03:20:39.642105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 03:20:39.644349 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 03:20:39.652402 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 03:20:39.746186 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 03:20:39.755493 initrd-setup-root[839]: cut: /sysroot/etc/group: No such file or directory Sep 9 03:20:39.762419 initrd-setup-root[846]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 03:20:39.768233 initrd-setup-root[853]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 03:20:39.875404 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 03:20:39.887340 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 03:20:39.891105 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 03:20:39.899122 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 03:20:39.901354 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 03:20:39.936844 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 9 03:20:39.941646 ignition[920]: INFO : Ignition 2.19.0 Sep 9 03:20:39.942656 ignition[920]: INFO : Stage: mount Sep 9 03:20:39.944925 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:39.944925 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:39.944925 ignition[920]: INFO : mount: mount passed Sep 9 03:20:39.944925 ignition[920]: INFO : Ignition finished successfully Sep 9 03:20:39.948256 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 03:20:40.138555 systemd-networkd[771]: eth0: Gained IPv6LL Sep 9 03:20:41.646329 systemd-networkd[771]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88b0:24:19ff:fee6:22c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88b0:24:19ff:fee6:22c2/64 assigned by NDisc. Sep 9 03:20:41.646346 systemd-networkd[771]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 9 03:20:46.809223 coreos-metadata[806]: Sep 09 03:20:46.809 WARN failed to locate config-drive, using the metadata service API instead Sep 9 03:20:46.833473 coreos-metadata[806]: Sep 09 03:20:46.833 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 9 03:20:46.848801 coreos-metadata[806]: Sep 09 03:20:46.848 INFO Fetch successful Sep 9 03:20:46.849741 coreos-metadata[806]: Sep 09 03:20:46.849 INFO wrote hostname srv-hr091.gb1.brightbox.com to /sysroot/etc/hostname Sep 9 03:20:46.851376 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Sep 9 03:20:46.851554 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Sep 9 03:20:46.859320 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 03:20:46.883582 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 03:20:46.895196 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938) Sep 9 03:20:46.900467 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 03:20:46.900536 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 03:20:46.902300 kernel: BTRFS info (device vda6): using free space tree Sep 9 03:20:46.908195 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 03:20:46.910431 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
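The flatcar-openstack-hostname step logged further down this span performs the same kind of metadata lookup for the hostname and writes the result into the target root. A short, illustrative equivalent (URL and destination are the ones named in the log; the code is not the agent's actual implementation):

    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def write_hostname(sysroot="/sysroot"):
        with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
            hostname = resp.read().decode().strip()
        with open(sysroot + "/etc/hostname", "w") as f:
            f.write(hostname + "\n")
        return hostname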
Sep 9 03:20:46.939123 ignition[956]: INFO : Ignition 2.19.0 Sep 9 03:20:46.939123 ignition[956]: INFO : Stage: files Sep 9 03:20:46.940936 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:46.940936 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:46.940936 ignition[956]: DEBUG : files: compiled without relabeling support, skipping Sep 9 03:20:46.943708 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 03:20:46.943708 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 03:20:46.945914 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 03:20:46.945914 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 03:20:46.947940 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 03:20:46.945959 unknown[956]: wrote ssh authorized keys file for user: core Sep 9 03:20:46.950055 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 03:20:46.950055 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 9 03:20:47.164530 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 03:20:48.023393 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 9 03:20:48.023393 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 03:20:48.023393 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Sep 9 03:20:48.284664 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 03:20:48.623851 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 03:20:48.623851 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 03:20:48.626972 ignition[956]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 03:20:48.626972 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 9 03:20:48.909545 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 03:20:51.402389 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 9 03:20:51.402389 ignition[956]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: op(e): [started] setting preset to enabled for "prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: op(e): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 03:20:51.406324 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 03:20:51.406324 ignition[956]: INFO : files: createResultFile: createFiles: op(f): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 03:20:51.418298 ignition[956]: INFO : files: files passed Sep 9 03:20:51.418298 ignition[956]: INFO : Ignition finished successfully Sep 9 03:20:51.410502 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 03:20:51.423485 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 03:20:51.426414 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 03:20:51.448629 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 03:20:51.449383 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
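files: op(c) and op(e) above record Ignition writing the prepare-helm.service unit and marking it preset-enabled so systemd enables it on first boot. A hedged sketch of what the preset step amounts to; the preset file name used here is an assumption, not something the log states:

    import os

    def preset_enable(unit="prepare-helm.service", sysroot="/sysroot"):
        preset = os.path.join(sysroot, "etc/systemd/system-preset/20-ignition.preset")
        os.makedirs(os.path.dirname(preset), exist_ok=True)
        with open(preset, "a") as f:
            f.write("enable %s\n" % unit)   # systemd-preset later creates the wants/ symlink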
Sep 9 03:20:51.458339 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 03:20:51.458339 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 03:20:51.461580 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 03:20:51.464065 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 03:20:51.465343 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 03:20:51.478447 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 03:20:51.511098 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 03:20:51.511335 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 03:20:51.513215 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 03:20:51.514597 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 03:20:51.516249 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 03:20:51.532400 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 03:20:51.550012 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 03:20:51.565468 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 03:20:51.578353 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 03:20:51.579306 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 03:20:51.581159 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 03:20:51.583021 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 03:20:51.583222 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 03:20:51.585509 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 03:20:51.586471 systemd[1]: Stopped target basic.target - Basic System. Sep 9 03:20:51.588103 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 03:20:51.589539 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 03:20:51.590940 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 03:20:51.592707 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 03:20:51.594302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 03:20:51.595944 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 03:20:51.597486 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 03:20:51.599104 systemd[1]: Stopped target swap.target - Swaps. Sep 9 03:20:51.600530 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 03:20:51.600748 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 03:20:51.602593 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 03:20:51.603524 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 03:20:51.604995 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 03:20:51.605206 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Sep 9 03:20:51.606731 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 03:20:51.606979 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 03:20:51.608953 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 03:20:51.609138 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 03:20:51.610352 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 03:20:51.610519 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 03:20:51.619919 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 03:20:51.620679 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 03:20:51.620939 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 03:20:51.624476 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 03:20:51.636599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 03:20:51.637020 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 03:20:51.638640 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 03:20:51.639137 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 03:20:51.651668 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 03:20:51.651844 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 03:20:51.664195 ignition[1009]: INFO : Ignition 2.19.0 Sep 9 03:20:51.664195 ignition[1009]: INFO : Stage: umount Sep 9 03:20:51.664195 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 03:20:51.664195 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Sep 9 03:20:51.670238 ignition[1009]: INFO : umount: umount passed Sep 9 03:20:51.670238 ignition[1009]: INFO : Ignition finished successfully Sep 9 03:20:51.672025 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 03:20:51.673139 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 03:20:51.673369 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 03:20:51.675600 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 03:20:51.675810 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 03:20:51.677655 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 03:20:51.677738 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 03:20:51.679142 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 9 03:20:51.679255 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 9 03:20:51.680570 systemd[1]: Stopped target network.target - Network. Sep 9 03:20:51.681864 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 03:20:51.681955 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 03:20:51.683403 systemd[1]: Stopped target paths.target - Path Units. Sep 9 03:20:51.684692 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 03:20:51.688227 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 03:20:51.689750 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 03:20:51.691316 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 03:20:51.693204 systemd[1]: iscsid.socket: Deactivated successfully. 
Sep 9 03:20:51.693275 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 03:20:51.694538 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 03:20:51.694619 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 03:20:51.695947 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 03:20:51.696028 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 03:20:51.697395 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 03:20:51.697487 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 03:20:51.699110 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 03:20:51.703258 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 03:20:51.706576 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 03:20:51.706818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 03:20:51.709538 systemd-networkd[771]: eth0: DHCPv6 lease lost Sep 9 03:20:51.711044 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 03:20:51.711136 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 03:20:51.713449 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 03:20:51.713646 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 03:20:51.718097 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 03:20:51.718619 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 03:20:51.721955 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 03:20:51.722327 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 03:20:51.728326 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 03:20:51.729063 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 03:20:51.729136 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 03:20:51.731457 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 03:20:51.731553 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 03:20:51.734369 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 03:20:51.734472 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 03:20:51.736536 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 03:20:51.736608 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 03:20:51.738297 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 03:20:51.750257 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 03:20:51.750615 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 03:20:51.752929 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 03:20:51.753094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 03:20:51.754946 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 03:20:51.755042 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 03:20:51.756716 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 03:20:51.756776 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 03:20:51.758318 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Sep 9 03:20:51.758392 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 03:20:51.760417 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 03:20:51.760484 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 03:20:51.761955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 03:20:51.762030 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 03:20:51.768418 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 03:20:51.769263 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 03:20:51.769338 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 03:20:51.771060 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 03:20:51.771139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 03:20:51.793522 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 03:20:51.793715 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 03:20:51.796443 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 03:20:51.803364 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 03:20:51.814815 systemd[1]: Switching root. Sep 9 03:20:51.850193 systemd-journald[200]: Received SIGTERM from PID 1 (systemd). Sep 9 03:20:51.850346 systemd-journald[200]: Journal stopped Sep 9 03:20:53.384040 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 03:20:53.386875 kernel: SELinux: policy capability open_perms=1 Sep 9 03:20:53.386911 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 03:20:53.386940 kernel: SELinux: policy capability always_check_network=0 Sep 9 03:20:53.386959 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 03:20:53.386979 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 03:20:53.387024 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 03:20:53.387046 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 03:20:53.387075 kernel: audit: type=1403 audit(1757388052.143:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 03:20:53.387104 systemd[1]: Successfully loaded SELinux policy in 62.902ms. Sep 9 03:20:53.387184 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.211ms. Sep 9 03:20:53.387215 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 03:20:53.387237 systemd[1]: Detected virtualization kvm. Sep 9 03:20:53.387267 systemd[1]: Detected architecture x86-64. Sep 9 03:20:53.387303 systemd[1]: Detected first boot. Sep 9 03:20:53.387327 systemd[1]: Hostname set to . Sep 9 03:20:53.387355 systemd[1]: Initializing machine ID from VM UUID. Sep 9 03:20:53.387378 zram_generator::config[1053]: No configuration found. Sep 9 03:20:53.387415 systemd[1]: Populated /etc with preset unit settings. Sep 9 03:20:53.387439 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 03:20:53.387459 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
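"Initializing machine ID from VM UUID" above means the first-boot machine ID is seeded from the hypervisor-provided DMI UUID rather than generated randomly. A sketch of the idea only, not systemd's actual code path:

    def machine_id_from_vm_uuid(path="/sys/class/dmi/id/product_uuid"):
        with open(path) as f:
            uuid = f.read().strip()
        # machine-id format: 32 lower-case hex characters, no dashes.
        return uuid.replace("-", "").lower()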
Sep 9 03:20:53.387480 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 03:20:53.387516 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 03:20:53.387548 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 03:20:53.387570 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 03:20:53.387598 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 03:20:53.387620 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 03:20:53.387642 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 03:20:53.387664 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 03:20:53.387684 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 03:20:53.387718 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 03:20:53.387742 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 03:20:53.387763 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 03:20:53.387784 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 03:20:53.387805 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 03:20:53.387826 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 03:20:53.387855 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 03:20:53.387878 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 03:20:53.387899 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 03:20:53.387937 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 03:20:53.387961 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 03:20:53.387982 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 03:20:53.388003 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 03:20:53.388024 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 03:20:53.388063 systemd[1]: Reached target slices.target - Slice Units. Sep 9 03:20:53.388109 systemd[1]: Reached target swap.target - Swaps. Sep 9 03:20:53.388143 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 03:20:53.390078 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 03:20:53.390116 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 03:20:53.390153 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 03:20:53.390913 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 03:20:53.390944 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 03:20:53.390984 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 03:20:53.391007 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 03:20:53.391028 systemd[1]: Mounting media.mount - External Media Directory... 
Sep 9 03:20:53.391048 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:53.391076 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 03:20:53.391098 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 03:20:53.391143 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 03:20:53.393133 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 03:20:53.393208 systemd[1]: Reached target machines.target - Containers. Sep 9 03:20:53.393235 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 03:20:53.393257 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 03:20:53.393278 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 03:20:53.393299 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 03:20:53.393320 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 03:20:53.393341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 03:20:53.393370 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 03:20:53.393393 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 03:20:53.393431 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 03:20:53.393455 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 03:20:53.393484 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 03:20:53.393507 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 03:20:53.393528 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 03:20:53.393549 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 03:20:53.393569 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 03:20:53.393592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 03:20:53.393637 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 03:20:53.393672 kernel: fuse: init (API version 7.39) Sep 9 03:20:53.393700 kernel: loop: module loaded Sep 9 03:20:53.393733 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 03:20:53.393755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 03:20:53.393776 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 03:20:53.393805 systemd[1]: Stopped verity-setup.service. Sep 9 03:20:53.393825 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:53.393846 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 03:20:53.393890 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 03:20:53.393913 systemd[1]: Mounted media.mount - External Media Directory. 
Sep 9 03:20:53.393934 kernel: ACPI: bus type drm_connector registered Sep 9 03:20:53.393957 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 03:20:53.393979 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 03:20:53.394013 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 03:20:53.394049 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 03:20:53.394069 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 03:20:53.394089 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 03:20:53.394131 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 03:20:53.394154 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 03:20:53.394210 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 03:20:53.394250 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 03:20:53.396292 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 03:20:53.396359 systemd-journald[1149]: Collecting audit messages is disabled. Sep 9 03:20:53.396406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 03:20:53.396446 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 03:20:53.396470 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 03:20:53.396505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 03:20:53.396528 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 03:20:53.396549 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 03:20:53.396571 systemd-journald[1149]: Journal started Sep 9 03:20:53.396621 systemd-journald[1149]: Runtime Journal (/run/log/journal/0c6892465c66490e867432a382ff544a) is 4.7M, max 38.0M, 33.2M free. Sep 9 03:20:52.955275 systemd[1]: Queued start job for default target multi-user.target. Sep 9 03:20:52.976253 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 03:20:52.976982 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 03:20:53.401248 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 03:20:53.403493 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 03:20:53.405993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 03:20:53.407325 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 03:20:53.424344 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 03:20:53.433240 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 03:20:53.441292 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 03:20:53.444282 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 03:20:53.444351 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 03:20:53.446808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 03:20:53.456381 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 03:20:53.461325 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Sep 9 03:20:53.462280 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 03:20:53.470478 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 03:20:53.475025 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 03:20:53.477294 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 03:20:53.483377 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 03:20:53.485291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 03:20:53.492375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 03:20:53.496467 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 03:20:53.500801 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 03:20:53.503667 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 03:20:53.504616 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 03:20:53.505747 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 03:20:53.534304 systemd-journald[1149]: Time spent on flushing to /var/log/journal/0c6892465c66490e867432a382ff544a is 86.981ms for 1145 entries. Sep 9 03:20:53.534304 systemd-journald[1149]: System Journal (/var/log/journal/0c6892465c66490e867432a382ff544a) is 8.0M, max 584.8M, 576.8M free. Sep 9 03:20:53.642937 systemd-journald[1149]: Received client request to flush runtime journal. Sep 9 03:20:53.643897 kernel: loop0: detected capacity change from 0 to 8 Sep 9 03:20:53.643976 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 03:20:53.605230 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 03:20:53.606515 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 03:20:53.620421 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 03:20:53.621685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 03:20:53.650470 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 03:20:53.653820 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 03:20:53.660838 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 03:20:53.663481 kernel: loop1: detected capacity change from 0 to 140768 Sep 9 03:20:53.688241 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 03:20:53.706473 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 03:20:53.736190 kernel: loop2: detected capacity change from 0 to 142488 Sep 9 03:20:53.763461 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 03:20:53.774372 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 03:20:53.802271 kernel: loop3: detected capacity change from 0 to 221472 Sep 9 03:20:53.816229 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Sep 9 03:20:53.816256 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. 
Sep 9 03:20:53.832402 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Sep 9 03:20:53.839018 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 03:20:53.864318 kernel: loop4: detected capacity change from 0 to 8 Sep 9 03:20:53.870331 kernel: loop5: detected capacity change from 0 to 140768 Sep 9 03:20:53.900196 kernel: loop6: detected capacity change from 0 to 142488 Sep 9 03:20:53.925221 kernel: loop7: detected capacity change from 0 to 221472 Sep 9 03:20:53.937370 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 9 03:20:53.938226 (sd-merge)[1212]: Merged extensions into '/usr'. Sep 9 03:20:53.947488 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 03:20:53.947525 systemd[1]: Reloading... Sep 9 03:20:54.110588 zram_generator::config[1238]: No configuration found. Sep 9 03:20:54.255717 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 03:20:54.413834 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 03:20:54.484875 systemd[1]: Reloading finished in 536 ms. Sep 9 03:20:54.513653 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 03:20:54.514962 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 03:20:54.527506 systemd[1]: Starting ensure-sysext.service... Sep 9 03:20:54.530481 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 03:20:54.548778 systemd[1]: Reloading requested from client PID 1294 ('systemctl') (unit ensure-sysext.service)... Sep 9 03:20:54.548816 systemd[1]: Reloading... Sep 9 03:20:54.599297 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 03:20:54.599970 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 03:20:54.603673 systemd-tmpfiles[1295]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 03:20:54.604103 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 9 03:20:54.605935 systemd-tmpfiles[1295]: ACLs are not supported, ignoring. Sep 9 03:20:54.616360 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 03:20:54.616380 systemd-tmpfiles[1295]: Skipping /boot Sep 9 03:20:54.634671 systemd-tmpfiles[1295]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 03:20:54.634701 systemd-tmpfiles[1295]: Skipping /boot Sep 9 03:20:54.640232 zram_generator::config[1323]: No configuration found. Sep 9 03:20:54.842801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 03:20:54.913124 systemd[1]: Reloading finished in 363 ms. Sep 9 03:20:54.938771 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 03:20:54.952843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
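The (sd-merge) lines above are systemd-sysext discovering the extension images (containerd-flatcar, docker-flatcar, kubernetes, oem-openstack) and overlaying them onto /usr, which is why the loop4..loop7 devices appear just beforehand. A hedged sketch of the discovery half only; the directory list is an assumption based on where Ignition placed the kubernetes.raw link earlier in this log:

    import glob, os

    SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def discover_extensions():
        names = []
        for d in SYSEXT_DIRS:
            for path in sorted(glob.glob(os.path.join(d, "*"))):
                name = os.path.basename(path)
                if name.endswith(".raw"):
                    name = name[:-4]        # image name without the .raw suffix
                names.append(name)
        return names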
Sep 9 03:20:54.967515 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 03:20:54.972498 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 03:20:54.975342 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 03:20:54.990502 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 03:20:54.996140 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 03:20:55.004771 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 03:20:55.010677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.010970 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 03:20:55.020542 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 03:20:55.024483 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 03:20:55.031498 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 03:20:55.032824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 03:20:55.043952 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 03:20:55.045394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.053485 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 03:20:55.053720 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 03:20:55.057071 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.058333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 03:20:55.058584 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 03:20:55.058748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.061754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 03:20:55.065256 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 03:20:55.078595 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.078919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 03:20:55.086480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 03:20:55.089729 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 03:20:55.090856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 03:20:55.100456 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 9 03:20:55.101286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 03:20:55.104393 systemd[1]: Finished ensure-sysext.service. Sep 9 03:20:55.115487 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 03:20:55.125331 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Sep 9 03:20:55.144235 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 03:20:55.145267 augenrules[1413]: No rules Sep 9 03:20:55.144751 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 03:20:55.146809 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 03:20:55.147935 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 03:20:55.148141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 03:20:55.153928 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 03:20:55.155704 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 03:20:55.166659 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 03:20:55.167276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 03:20:55.168596 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 03:20:55.174271 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 03:20:55.176249 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 03:20:55.186298 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 03:20:55.195425 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 03:20:55.206408 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 03:20:55.224682 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 03:20:55.226547 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 03:20:55.399976 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 03:20:55.401029 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 03:20:55.402984 systemd-resolved[1385]: Positive Trust Anchors: Sep 9 03:20:55.404545 systemd-resolved[1385]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 03:20:55.404684 systemd-resolved[1385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 03:20:55.409120 systemd-networkd[1429]: lo: Link UP Sep 9 03:20:55.409132 systemd-networkd[1429]: lo: Gained carrier Sep 9 03:20:55.410448 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 03:20:55.410762 systemd-timesyncd[1411]: No network connectivity, watching for changes. Sep 9 03:20:55.411035 systemd-networkd[1429]: Enumeration completed Sep 9 03:20:55.411877 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 03:20:55.419389 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 03:20:55.427268 systemd-resolved[1385]: Using system hostname 'srv-hr091.gb1.brightbox.com'. Sep 9 03:20:55.430354 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 03:20:55.432287 systemd[1]: Reached target network.target - Network. Sep 9 03:20:55.432939 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 03:20:55.452358 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 03:20:55.452372 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 03:20:55.455323 systemd-networkd[1429]: eth0: Link UP Sep 9 03:20:55.455338 systemd-networkd[1429]: eth0: Gained carrier Sep 9 03:20:55.455358 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 03:20:55.471202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1443) Sep 9 03:20:55.490278 systemd-networkd[1429]: eth0: DHCPv4 address 10.230.34.194/30, gateway 10.230.34.193 acquired from 10.230.34.193 Sep 9 03:20:55.491886 systemd-timesyncd[1411]: Network configuration changed, trying to establish connection. Sep 9 03:20:55.548237 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 9 03:20:55.565246 kernel: ACPI: button: Power Button [PWRF] Sep 9 03:20:55.575672 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 03:20:55.581189 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 03:20:55.586555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 03:20:55.617083 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
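The ". IN DS 20326 8 2 e06d..." record that systemd-resolved logs is the DNSSEC trust anchor for the root zone, and the long list of negative trust anchors exempts private-use names (RFC 1918 reverse zones, home.arpa, .local, .internal and so on) from DNSSEC validation. Additional anchors can be dropped into resolved's trust-anchor directory and checked with resolvectl; the zone name below is only an example:

  # Exempt an internal zone from DNSSEC validation with a negative trust anchor
  mkdir -p /etc/dnssec-trust-anchors.d
  printf 'corp.example\n' > /etc/dnssec-trust-anchors.d/corp.negative
  systemctl restart systemd-resolved
  # Show per-link DNS servers, DNSSEC mode and current scopes
  resolvectl status
  resolvectl query example.com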
Sep 9 03:20:55.634231 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 03:20:55.642451 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 03:20:55.643016 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 03:20:55.643275 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 9 03:20:55.693341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 03:20:55.701741 systemd-timesyncd[1411]: Contacted time server 176.58.109.199:123 (2.flatcar.pool.ntp.org). Sep 9 03:20:55.701854 systemd-timesyncd[1411]: Initial clock synchronization to Tue 2025-09-09 03:20:55.839018 UTC. Sep 9 03:20:55.921966 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 03:20:55.961610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 03:20:55.971446 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 03:20:55.986275 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 03:20:56.014367 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 03:20:56.016129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 03:20:56.016946 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 03:20:56.017817 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 03:20:56.018832 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 03:20:56.019948 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 03:20:56.020903 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 03:20:56.021742 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 03:20:56.022542 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 03:20:56.022593 systemd[1]: Reached target paths.target - Path Units. Sep 9 03:20:56.023266 systemd[1]: Reached target timers.target - Timer Units. Sep 9 03:20:56.025501 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 03:20:56.028098 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 03:20:56.045943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 03:20:56.048647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 03:20:56.054104 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 03:20:56.055030 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 03:20:56.055737 systemd[1]: Reached target basic.target - Basic System. Sep 9 03:20:56.056504 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 03:20:56.056564 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 03:20:56.063362 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 03:20:56.068899 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 9 03:20:56.072438 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
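systemd-timesyncd picked 176.58.109.199 out of the 2.flatcar.pool.ntp.org pool once eth0 had an address and stepped the clock to 03:20:55.839018 UTC. The pool can be overridden through timesyncd's normal drop-in mechanism; the servers named below are illustrative:

  mkdir -p /etc/systemd/timesyncd.conf.d
  cat <<'EOF' > /etc/systemd/timesyncd.conf.d/10-ntp.conf
  [Time]
  NTP=0.pool.ntp.org 1.pool.ntp.org
  FallbackNTP=time.cloudflare.com
  EOF
  systemctl restart systemd-timesyncd
  # Confirm the selected server and sync state
  timedatectl timesync-status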
Sep 9 03:20:56.076449 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 03:20:56.077670 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 03:20:56.080415 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 03:20:56.081170 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 03:20:56.086028 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 03:20:56.095364 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 03:20:56.100411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 03:20:56.109408 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 03:20:56.117397 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 03:20:56.119921 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 03:20:56.120633 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 03:20:56.123809 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 03:20:56.133345 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 03:20:56.152805 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 03:20:56.154957 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 03:20:56.165875 extend-filesystems[1478]: Found loop4 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found loop5 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found loop6 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found loop7 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda1 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda2 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda3 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found usr Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda4 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda6 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda7 Sep 9 03:20:56.165875 extend-filesystems[1478]: Found vda9 Sep 9 03:20:56.165875 extend-filesystems[1478]: Checking size of /dev/vda9 Sep 9 03:20:56.187879 jq[1477]: false Sep 9 03:20:56.167274 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 03:20:56.188887 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 03:20:56.189170 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 03:20:56.191099 dbus-daemon[1476]: [system] SELinux support is enabled Sep 9 03:20:56.191606 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 03:20:56.197122 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 03:20:56.197172 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
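extend-filesystems.service walks the block devices it finds (loop4 through vda9) and then checks whether /dev/vda9, the root filesystem, still fills its partition; the growth itself shows up a few entries later as an online ext4 resize from 1617920 to 15121403 blocks performed by resize2fs 1.47.1. A sketch of the equivalent manual steps; growpart (from cloud-utils) is an assumption here, while resize2fs is the tool the service log itself shows:

  # Inspect the current partition and filesystem layout
  lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/vda
  # Grow partition 9 to the end of the disk, then grow ext4 online while mounted
  growpart /dev/vda 9
  resize2fs /dev/vda9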
Sep 9 03:20:56.197915 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1429 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 9 03:20:56.199138 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 03:20:56.199178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 03:20:56.203247 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 9 03:20:56.203916 (ntainerd)[1500]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 03:20:56.219436 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 9 03:20:56.232224 jq[1487]: true Sep 9 03:20:56.249248 tar[1490]: linux-amd64/helm Sep 9 03:20:56.264786 extend-filesystems[1478]: Resized partition /dev/vda9 Sep 9 03:20:56.266752 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 03:20:56.267011 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 03:20:56.276672 extend-filesystems[1514]: resize2fs 1.47.1 (20-May-2024) Sep 9 03:20:56.297110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1442) Sep 9 03:20:56.299215 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 9 03:20:56.323436 jq[1510]: true Sep 9 03:20:56.361099 update_engine[1486]: I20250909 03:20:56.358696 1486 main.cc:92] Flatcar Update Engine starting Sep 9 03:20:56.382729 systemd[1]: Started update-engine.service - Update Engine. Sep 9 03:20:56.386617 update_engine[1486]: I20250909 03:20:56.385326 1486 update_check_scheduler.cc:74] Next update check in 3m42s Sep 9 03:20:56.397448 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 03:20:56.489469 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Sep 9 03:20:56.489721 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 9 03:20:56.492418 systemd-logind[1485]: New seat seat0. Sep 9 03:20:56.498716 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 03:20:56.523258 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 9 03:20:56.524599 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 03:20:56.535881 systemd[1]: Starting sshkeys.service... Sep 9 03:20:56.605072 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 9 03:20:56.616580 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 9 03:20:56.646557 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 9 03:20:56.646922 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 9 03:20:56.653645 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1507 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 9 03:20:56.670653 systemd[1]: Starting polkit.service - Authorization Manager... 
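systemd-hostnamed's request for org.freedesktop.PolicyKit1 on the system bus is what pulls in polkit.service here; polkitd then loads its JavaScript rules from /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d, as the next entries show. A sketch of a local rule file; the action and group are illustrative, not taken from this host:

  # /etc/polkit-1/rules.d/49-hostname.rules: let members of "wheel" set the hostname
  cat <<'EOF' > /etc/polkit-1/rules.d/49-hostname.rules
  polkit.addRule(function(action, subject) {
      if (action.id == "org.freedesktop.hostname1.set-hostname" &&
          subject.isInGroup("wheel")) {
          return polkit.Result.YES;
      }
  });
  EOF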
Sep 9 03:20:56.687533 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 03:20:56.719360 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 9 03:20:56.723014 extend-filesystems[1514]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 03:20:56.723014 extend-filesystems[1514]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 9 03:20:56.723014 extend-filesystems[1514]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 9 03:20:56.736210 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Sep 9 03:20:56.733319 polkitd[1545]: Started polkitd version 121 Sep 9 03:20:56.724605 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 03:20:56.724906 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 03:20:56.771001 polkitd[1545]: Loading rules from directory /etc/polkit-1/rules.d Sep 9 03:20:56.774143 polkitd[1545]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 9 03:20:56.784066 polkitd[1545]: Finished loading, compiling and executing 2 rules Sep 9 03:20:56.786951 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 9 03:20:56.787272 systemd[1]: Started polkit.service - Authorization Manager. Sep 9 03:20:56.790276 polkitd[1545]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 9 03:20:56.828617 systemd-hostnamed[1507]: Hostname set to (static) Sep 9 03:20:56.954400 containerd[1500]: time="2025-09-09T03:20:56.952124978Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 9 03:20:56.970425 systemd-networkd[1429]: eth0: Gained IPv6LL Sep 9 03:20:56.981245 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 03:20:56.985438 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 03:20:56.998705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:20:57.006846 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 03:20:57.010596 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 03:20:57.060942 containerd[1500]: time="2025-09-09T03:20:57.060592917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066292912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066336875Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066361826Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066590206Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066627851Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066758109Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.066781824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.067016070Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.067042483Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.067063721Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067302 containerd[1500]: time="2025-09-09T03:20:57.067080525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067711 containerd[1500]: time="2025-09-09T03:20:57.067252953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067711 containerd[1500]: time="2025-09-09T03:20:57.067646172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067817 containerd[1500]: time="2025-09-09T03:20:57.067780570Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 9 03:20:57.067817 containerd[1500]: time="2025-09-09T03:20:57.067812969Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 9 03:20:57.070271 containerd[1500]: time="2025-09-09T03:20:57.069021441Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 9 03:20:57.070271 containerd[1500]: time="2025-09-09T03:20:57.069131138Z" level=info msg="metadata content store policy set" policy=shared Sep 9 03:20:57.079655 containerd[1500]: time="2025-09-09T03:20:57.079560773Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 9 03:20:57.079655 containerd[1500]: time="2025-09-09T03:20:57.079645488Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 9 03:20:57.079784 containerd[1500]: time="2025-09-09T03:20:57.079674181Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 9 03:20:57.079784 containerd[1500]: time="2025-09-09T03:20:57.079698054Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 9 03:20:57.079784 containerd[1500]: time="2025-09-09T03:20:57.079720525Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.079923771Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080471044Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080659985Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080693970Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080714737Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080735703Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080773561Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080814477Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080839238Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080861832Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080886984Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080906816Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080927693Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 9 03:20:57.082260 containerd[1500]: time="2025-09-09T03:20:57.080980666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.082582 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081003316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081022690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081075467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081130524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081162686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081258196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081288430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081309899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081333170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081356019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081375406Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081409308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081440271Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081478673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.090642 containerd[1500]: time="2025-09-09T03:20:57.081506800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082400126Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082500423Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082539959Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082561069Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082580612Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082598128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082623516Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082653751Z" level=info msg="NRI interface is disabled by configuration." 
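The stream of "skip loading plugin" messages is containerd probing its built-in snapshotters: aufs (module missing), blockfile, btrfs and zfs (wrong backing filesystem under /var/lib/containerd), and devmapper (not configured) are all rejected, leaving overlayfs as the snapshotter actually used on the ext4 root. A quick way to confirm plugin status against the running daemon, using the stock containerd client:

  # Skipped plugins are listed together with their skip reason
  ctr plugins ls
  ctr plugins ls | grep io.containerd.snapshotter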
Sep 9 03:20:57.095367 containerd[1500]: time="2025-09-09T03:20:57.082676658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 9 03:20:57.094667 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.083082884Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.083179091Z" level=info msg="Connect containerd service" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.084099239Z" level=info msg="using legacy CRI server" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.084229090Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.085495887Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.086973048Z" level=error msg="failed to load cni during 
init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.087160107Z" level=info msg="Start subscribing containerd event" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.087306523Z" level=info msg="Start recovering state" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.087585344Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.087673016Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.088108085Z" level=info msg="Start event monitor" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.088164806Z" level=info msg="Start snapshots syncer" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.089266657Z" level=info msg="Start cni network conf syncer for default" Sep 9 03:20:57.095821 containerd[1500]: time="2025-09-09T03:20:57.090339092Z" level=info msg="Start streaming server" Sep 9 03:20:57.103709 containerd[1500]: time="2025-09-09T03:20:57.096596716Z" level=info msg="containerd successfully booted in 0.149185s" Sep 9 03:20:57.096891 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 03:20:57.100251 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 03:20:57.129074 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 03:20:57.130418 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 03:20:57.141977 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 03:20:57.177554 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 03:20:57.188965 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 03:20:57.201722 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 9 03:20:57.205597 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 03:20:57.362246 tar[1490]: linux-amd64/LICENSE Sep 9 03:20:57.362246 tar[1490]: linux-amd64/README.md Sep 9 03:20:57.378168 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 03:20:57.617171 systemd-networkd[1429]: eth0: Ignoring DHCPv6 address 2a02:1348:179:88b0:24:19ff:fee6:22c2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:88b0:24:19ff:fee6:22c2/64 assigned by NDisc. Sep 9 03:20:57.617501 systemd-networkd[1429]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 9 03:20:58.223417 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
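The long CRI configuration dump mirrors containerd's effective config: the runc runtime runs with SystemdCgroup:true, the sandbox image is registry.k8s.io/pause:3.8, and CNI config is expected under /etc/cni/net.d with binaries in /opt/cni/bin, which is why the plugin logs "no network config found" until a CNI provider is installed on the node. A sketch of a config.toml fragment that would produce those settings; whether this host sets them in a file or relies on built-in defaults is not visible in the log:

  cat <<'EOF' >> /etc/containerd/config.toml
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
  EOF
  systemctl restart containerd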
Sep 9 03:20:58.223950 (kubelet)[1601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 03:20:58.879381 kubelet[1601]: E0909 03:20:58.879281 1601 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 03:20:58.882491 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 03:20:58.882785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 03:20:58.883482 systemd[1]: kubelet.service: Consumed 1.089s CPU time. Sep 9 03:21:01.047340 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 03:21:01.058875 systemd[1]: Started sshd@0-10.230.34.194:22-147.75.109.163:46660.service - OpenSSH per-connection server daemon (147.75.109.163:46660). Sep 9 03:21:01.963820 sshd[1611]: Accepted publickey for core from 147.75.109.163 port 46660 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:01.967011 sshd[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:01.983164 systemd-logind[1485]: New session 1 of user core. Sep 9 03:21:01.986206 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 03:21:01.992755 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 03:21:02.029676 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 03:21:02.042754 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 03:21:02.144927 (systemd)[1615]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 03:21:02.274836 login[1589]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 03:21:02.279100 login[1588]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 9 03:21:02.290009 systemd-logind[1485]: New session 2 of user core. Sep 9 03:21:02.296722 systemd-logind[1485]: New session 3 of user core. Sep 9 03:21:02.389809 systemd[1615]: Queued start job for default target default.target. Sep 9 03:21:02.398820 systemd[1615]: Created slice app.slice - User Application Slice. Sep 9 03:21:02.398871 systemd[1615]: Reached target paths.target - Paths. Sep 9 03:21:02.398895 systemd[1615]: Reached target timers.target - Timers. Sep 9 03:21:02.401369 systemd[1615]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 03:21:02.419743 systemd[1615]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 03:21:02.419977 systemd[1615]: Reached target sockets.target - Sockets. Sep 9 03:21:02.420011 systemd[1615]: Reached target basic.target - Basic System. Sep 9 03:21:02.420083 systemd[1615]: Reached target default.target - Main User Target. Sep 9 03:21:02.420152 systemd[1615]: Startup finished in 262ms. Sep 9 03:21:02.420301 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 03:21:02.431737 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 03:21:02.433743 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 03:21:02.435118 systemd[1]: Started session-3.scope - Session 3 of User core. 
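The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet, and systemd keeps restarting it (the "Scheduled restart job" entries further down); on a kubeadm-provisioned node that file is only written during kubeadm init or join, so these early failures are expected. A minimal sketch of such a KubeletConfiguration, assuming kubeadm-style defaults rather than anything read from this host:

  cat <<'EOF' > /var/lib/kubelet/config.yaml
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd            # matches SystemdCgroup=true on the containerd side
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  staticPodPath: /etc/kubernetes/manifests
  clusterDomain: cluster.local
  clusterDNS:
    - 10.96.0.10                   # illustrative cluster DNS address
  EOF
  systemctl restart kubelet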
Sep 9 03:21:03.089628 systemd[1]: Started sshd@1-10.230.34.194:22-147.75.109.163:46662.service - OpenSSH per-connection server daemon (147.75.109.163:46662). Sep 9 03:21:03.648922 coreos-metadata[1475]: Sep 09 03:21:03.648 WARN failed to locate config-drive, using the metadata service API instead Sep 9 03:21:03.678360 coreos-metadata[1475]: Sep 09 03:21:03.678 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 9 03:21:03.688071 coreos-metadata[1475]: Sep 09 03:21:03.687 INFO Fetch failed with 404: resource not found Sep 9 03:21:03.688071 coreos-metadata[1475]: Sep 09 03:21:03.688 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 9 03:21:03.688674 coreos-metadata[1475]: Sep 09 03:21:03.688 INFO Fetch successful Sep 9 03:21:03.688975 coreos-metadata[1475]: Sep 09 03:21:03.688 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 9 03:21:03.704548 coreos-metadata[1475]: Sep 09 03:21:03.704 INFO Fetch successful Sep 9 03:21:03.704821 coreos-metadata[1475]: Sep 09 03:21:03.704 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 9 03:21:03.720091 coreos-metadata[1475]: Sep 09 03:21:03.719 INFO Fetch successful Sep 9 03:21:03.720498 coreos-metadata[1475]: Sep 09 03:21:03.720 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 9 03:21:03.729663 coreos-metadata[1537]: Sep 09 03:21:03.729 WARN failed to locate config-drive, using the metadata service API instead Sep 9 03:21:03.736918 coreos-metadata[1475]: Sep 09 03:21:03.736 INFO Fetch successful Sep 9 03:21:03.737187 coreos-metadata[1475]: Sep 09 03:21:03.737 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 9 03:21:03.752408 coreos-metadata[1537]: Sep 09 03:21:03.752 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 9 03:21:03.756258 coreos-metadata[1475]: Sep 09 03:21:03.756 INFO Fetch successful Sep 9 03:21:03.793227 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 9 03:21:03.795045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 03:21:03.816075 coreos-metadata[1537]: Sep 09 03:21:03.816 INFO Fetch successful Sep 9 03:21:03.816411 coreos-metadata[1537]: Sep 09 03:21:03.816 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 9 03:21:03.850589 coreos-metadata[1537]: Sep 09 03:21:03.850 INFO Fetch successful Sep 9 03:21:03.853235 unknown[1537]: wrote ssh authorized keys file for user: core Sep 9 03:21:03.881339 update-ssh-keys[1664]: Updated "/home/core/.ssh/authorized_keys" Sep 9 03:21:03.883787 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 9 03:21:03.886736 systemd[1]: Finished sshkeys.service. Sep 9 03:21:03.890283 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 03:21:03.890643 systemd[1]: Startup finished in 1.354s (kernel) + 16.379s (initrd) + 11.809s (userspace) = 29.543s. Sep 9 03:21:03.984520 sshd[1652]: Accepted publickey for core from 147.75.109.163 port 46662 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:03.986689 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:03.995255 systemd-logind[1485]: New session 4 of user core. Sep 9 03:21:04.002428 systemd[1]: Started session-4.scope - Session 4 of User core. 
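coreos-metadata finds no config-drive and falls back to the metadata service at 169.254.169.254, fetching the hostname, instance-id, instance-type, local/public IPv4 and the SSH public keys that are then written to /home/core/.ssh/authorized_keys. The same endpoints can be queried by hand; the paths are exactly the ones logged above:

  curl -s http://169.254.169.254/latest/meta-data/hostname
  curl -s http://169.254.169.254/latest/meta-data/instance-id
  curl -s http://169.254.169.254/latest/meta-data/instance-type
  curl -s http://169.254.169.254/latest/meta-data/local-ipv4
  curl -s http://169.254.169.254/latest/meta-data/public-ipv4
  curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key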
Sep 9 03:21:04.604480 sshd[1652]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:04.609513 systemd[1]: sshd@1-10.230.34.194:22-147.75.109.163:46662.service: Deactivated successfully. Sep 9 03:21:04.611789 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 03:21:04.612767 systemd-logind[1485]: Session 4 logged out. Waiting for processes to exit. Sep 9 03:21:04.614369 systemd-logind[1485]: Removed session 4. Sep 9 03:21:04.774654 systemd[1]: Started sshd@2-10.230.34.194:22-147.75.109.163:46672.service - OpenSSH per-connection server daemon (147.75.109.163:46672). Sep 9 03:21:05.668341 sshd[1672]: Accepted publickey for core from 147.75.109.163 port 46672 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:05.670329 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:05.678079 systemd-logind[1485]: New session 5 of user core. Sep 9 03:21:05.687515 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 9 03:21:06.286574 sshd[1672]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:06.290460 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Sep 9 03:21:06.291727 systemd[1]: sshd@2-10.230.34.194:22-147.75.109.163:46672.service: Deactivated successfully. Sep 9 03:21:06.293816 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 03:21:06.296249 systemd-logind[1485]: Removed session 5. Sep 9 03:21:06.439829 systemd[1]: Started sshd@3-10.230.34.194:22-147.75.109.163:46678.service - OpenSSH per-connection server daemon (147.75.109.163:46678). Sep 9 03:21:07.333023 sshd[1679]: Accepted publickey for core from 147.75.109.163 port 46678 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:07.335483 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:07.343294 systemd-logind[1485]: New session 6 of user core. Sep 9 03:21:07.351461 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 03:21:07.954607 sshd[1679]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:07.960864 systemd[1]: sshd@3-10.230.34.194:22-147.75.109.163:46678.service: Deactivated successfully. Sep 9 03:21:07.963304 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 03:21:07.964167 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Sep 9 03:21:07.965772 systemd-logind[1485]: Removed session 6. Sep 9 03:21:08.119546 systemd[1]: Started sshd@4-10.230.34.194:22-147.75.109.163:46684.service - OpenSSH per-connection server daemon (147.75.109.163:46684). Sep 9 03:21:09.016953 sshd[1686]: Accepted publickey for core from 147.75.109.163 port 46684 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:09.019295 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:09.020944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 9 03:21:09.030537 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:09.038321 systemd-logind[1485]: New session 7 of user core. Sep 9 03:21:09.047319 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 9 03:21:09.228579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
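Each SSH login appears as its own per-connection unit (sshd@1-10.230.34.194:22-147.75.109.163:46662.service and so on): sshd.socket, seen listening earlier in the boot, accepts the TCP connection and spawns one templated sshd@.service instance per client, which is torn down again when the session closes. A sketch of inspecting that pattern on a comparable host:

  # The listening socket and its Accept= setting
  systemctl cat sshd.socket
  # One unit instance per active connection
  systemctl list-units 'sshd@*' --no-legend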
Sep 9 03:21:09.239943 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 03:21:09.322534 kubelet[1697]: E0909 03:21:09.322363 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 03:21:09.326665 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 03:21:09.326929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 03:21:09.507107 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 03:21:09.508142 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 03:21:09.519785 sudo[1704]: pam_unix(sudo:session): session closed for user root Sep 9 03:21:09.665603 sshd[1686]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:09.669909 systemd[1]: sshd@4-10.230.34.194:22-147.75.109.163:46684.service: Deactivated successfully. Sep 9 03:21:09.672571 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 03:21:09.674680 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit. Sep 9 03:21:09.676069 systemd-logind[1485]: Removed session 7. Sep 9 03:21:09.818340 systemd[1]: Started sshd@5-10.230.34.194:22-147.75.109.163:46698.service - OpenSSH per-connection server daemon (147.75.109.163:46698). Sep 9 03:21:10.711746 sshd[1709]: Accepted publickey for core from 147.75.109.163 port 46698 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:10.714296 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:10.720881 systemd-logind[1485]: New session 8 of user core. Sep 9 03:21:10.729409 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 9 03:21:11.189985 sudo[1713]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 03:21:11.191034 sudo[1713]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 03:21:11.196393 sudo[1713]: pam_unix(sudo:session): session closed for user root Sep 9 03:21:11.204769 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 9 03:21:11.205282 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 03:21:11.228065 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 9 03:21:11.229790 auditctl[1716]: No rules Sep 9 03:21:11.230321 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 03:21:11.230596 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 9 03:21:11.237723 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 03:21:11.271410 augenrules[1735]: No rules Sep 9 03:21:11.273811 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 03:21:11.275735 sudo[1712]: pam_unix(sudo:session): session closed for user root Sep 9 03:21:11.419748 sshd[1709]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:11.424056 systemd[1]: sshd@5-10.230.34.194:22-147.75.109.163:46698.service: Deactivated successfully. 
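The sudo session above deletes the stock rule fragments (80-selinux.rules, 99-default.rules) and restarts audit-rules.service, after which auditctl and augenrules both report "No rules": the service simply assembles whatever lives in /etc/audit/rules.d and loads it into the kernel. A sketch of adding a rule back through the same mechanism; the watch below is illustrative:

  # Watch writes/attribute changes to sshd_config, then rebuild and load the ruleset
  cat <<'EOF' > /etc/audit/rules.d/10-sshd-config.rules
  -w /etc/ssh/sshd_config -p wa -k sshd_config
  EOF
  augenrules --load
  # List the rules now active in the kernel
  auditctl -l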
Sep 9 03:21:11.426147 systemd[1]: session-8.scope: Deactivated successfully. Sep 9 03:21:11.427886 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit. Sep 9 03:21:11.429222 systemd-logind[1485]: Removed session 8. Sep 9 03:21:11.579673 systemd[1]: Started sshd@6-10.230.34.194:22-147.75.109.163:53140.service - OpenSSH per-connection server daemon (147.75.109.163:53140). Sep 9 03:21:12.504199 sshd[1743]: Accepted publickey for core from 147.75.109.163 port 53140 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:21:12.506220 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:21:12.513830 systemd-logind[1485]: New session 9 of user core. Sep 9 03:21:12.521392 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 9 03:21:12.990020 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 03:21:12.990526 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 03:21:13.439592 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 03:21:13.441699 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 03:21:13.881194 dockerd[1761]: time="2025-09-09T03:21:13.881033864Z" level=info msg="Starting up" Sep 9 03:21:14.049861 dockerd[1761]: time="2025-09-09T03:21:14.049569609Z" level=info msg="Loading containers: start." Sep 9 03:21:14.199220 kernel: Initializing XFRM netlink socket Sep 9 03:21:14.302959 systemd-networkd[1429]: docker0: Link UP Sep 9 03:21:14.335301 dockerd[1761]: time="2025-09-09T03:21:14.335247775Z" level=info msg="Loading containers: done." Sep 9 03:21:14.353983 dockerd[1761]: time="2025-09-09T03:21:14.353862392Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 9 03:21:14.354191 dockerd[1761]: time="2025-09-09T03:21:14.354019698Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 9 03:21:14.354344 dockerd[1761]: time="2025-09-09T03:21:14.354212332Z" level=info msg="Daemon has completed initialization" Sep 9 03:21:14.402220 dockerd[1761]: time="2025-09-09T03:21:14.401625899Z" level=info msg="API listen on /run/docker.sock" Sep 9 03:21:14.403323 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 9 03:21:15.628341 containerd[1500]: time="2025-09-09T03:21:15.628226727Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\"" Sep 9 03:21:16.558964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602858690.mount: Deactivated successfully. 
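The Docker daemon launched by the install.sh run above comes up on the overlay2 storage driver and publishes its API on /run/docker.sock; the warning about native diff only notes that image builds may be slower because the kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, and docker0 is the default bridge that systemd-networkd reports as Link UP. A couple of standard checks:

  docker info --format '{{.Driver}} {{.ServerVersion}}'
  ip addr show docker0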
Sep 9 03:21:18.810140 containerd[1500]: time="2025-09-09T03:21:18.809132252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:18.813223 containerd[1500]: time="2025-09-09T03:21:18.812763634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=28079639" Sep 9 03:21:18.815239 containerd[1500]: time="2025-09-09T03:21:18.813702536Z" level=info msg="ImageCreate event name:\"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:18.819711 containerd[1500]: time="2025-09-09T03:21:18.819623220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:18.821297 containerd[1500]: time="2025-09-09T03:21:18.821227823Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"28076431\" in 3.192879894s" Sep 9 03:21:18.821389 containerd[1500]: time="2025-09-09T03:21:18.821313686Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:b1963c5b49c1722b8f408deaf83aafca7f48f47fed0ed14e5c10e93cc55974a7\"" Sep 9 03:21:18.823422 containerd[1500]: time="2025-09-09T03:21:18.823383502Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\"" Sep 9 03:21:19.361460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 9 03:21:19.374715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:19.594559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:19.599233 (kubelet)[1965]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 03:21:19.677362 kubelet[1965]: E0909 03:21:19.677166 1965 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 03:21:19.679534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 03:21:19.679815 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
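With containerd serving the CRI socket, the v1.31.12 control-plane images are pulled one after another, kube-apiserver first (about 28 MB in roughly 3.2 s), while the kubelet keeps failing and being rescheduled until its config file exists. The same pulls can be driven or inspected by hand; the crictl configuration below is an assumption, since it is not shown in the log:

  # Point crictl at containerd's CRI socket (assumed location)
  cat <<'EOF' > /etc/crictl.yaml
  runtime-endpoint: unix:///run/containerd/containerd.sock
  EOF
  crictl pull registry.k8s.io/kube-apiserver:v1.31.12
  crictl images
  # Or use containerd's own client in the k8s.io namespace
  ctr -n k8s.io images ls -q | grep kube-apiserver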
Sep 9 03:21:22.765261 containerd[1500]: time="2025-09-09T03:21:22.764922818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:22.768072 containerd[1500]: time="2025-09-09T03:21:22.767957496Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=24714689" Sep 9 03:21:22.769330 containerd[1500]: time="2025-09-09T03:21:22.769258740Z" level=info msg="ImageCreate event name:\"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:22.774365 containerd[1500]: time="2025-09-09T03:21:22.773546491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:22.775402 containerd[1500]: time="2025-09-09T03:21:22.775342782Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"26317875\" in 3.951802643s" Sep 9 03:21:22.775505 containerd[1500]: time="2025-09-09T03:21:22.775449834Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:200c1a99a6f2b9d3b0a6e9b7362663513589341e0e58bc3b953a373efa735dfd\"" Sep 9 03:21:22.777664 containerd[1500]: time="2025-09-09T03:21:22.777594923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\"" Sep 9 03:21:24.608163 containerd[1500]: time="2025-09-09T03:21:24.608072324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:24.610703 containerd[1500]: time="2025-09-09T03:21:24.610642150Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=18782435" Sep 9 03:21:24.611919 containerd[1500]: time="2025-09-09T03:21:24.611857675Z" level=info msg="ImageCreate event name:\"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:24.618333 containerd[1500]: time="2025-09-09T03:21:24.618248596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:24.619014 containerd[1500]: time="2025-09-09T03:21:24.618847454Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"20385639\" in 1.840941714s" Sep 9 03:21:24.619014 containerd[1500]: time="2025-09-09T03:21:24.619002923Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:bcdd9599681a9460a5539177a986dbdaf880ac56eeb117ab94adb8f37889efba\"" Sep 9 03:21:24.620980 
containerd[1500]: time="2025-09-09T03:21:24.620738696Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\"" Sep 9 03:21:26.276073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3869360282.mount: Deactivated successfully. Sep 9 03:21:26.998222 containerd[1500]: time="2025-09-09T03:21:26.996695821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:26.998222 containerd[1500]: time="2025-09-09T03:21:26.998106769Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=30384263" Sep 9 03:21:26.999345 containerd[1500]: time="2025-09-09T03:21:26.998866009Z" level=info msg="ImageCreate event name:\"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:27.001721 containerd[1500]: time="2025-09-09T03:21:27.001671193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:27.003042 containerd[1500]: time="2025-09-09T03:21:27.003001154Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"30383274\" in 2.382210487s" Sep 9 03:21:27.003236 containerd[1500]: time="2025-09-09T03:21:27.003205090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:507cc52f5f78c0cff25e904c76c18e6bfc90982e9cc2aa4dcb19033f21c3f679\"" Sep 9 03:21:27.004618 containerd[1500]: time="2025-09-09T03:21:27.004585277Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 9 03:21:27.660345 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Sep 9 03:21:27.733805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount796729767.mount: Deactivated successfully. 
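Each completed pull above reports both the bytes fetched and the elapsed time, so an effective download rate can be derived straight from the log. A small sketch using the kube-proxy figures from the preceding entries (30,384,263 bytes in 2.382210487s, roughly 12.75 MB/s); "bytes read" appears to count the data fetched from the registry for this pull, so this approximates registry bandwidth rather than unpack speed:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Numbers copied from the kube-proxy pull above:
	// "bytes read=30384263", completed "in 2.382210487s".
	bytesRead := 30384263.0
	elapsed, _ := time.ParseDuration("2.382210487s")

	mbPerSec := bytesRead / elapsed.Seconds() / 1e6
	fmt.Printf("effective pull throughput: %.2f MB/s\n", mbPerSec) // ~12.75 MB/s
}
```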
Sep 9 03:21:29.219222 containerd[1500]: time="2025-09-09T03:21:29.218396128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:29.222670 containerd[1500]: time="2025-09-09T03:21:29.222613352Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Sep 9 03:21:29.225579 containerd[1500]: time="2025-09-09T03:21:29.225523395Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:29.281008 containerd[1500]: time="2025-09-09T03:21:29.279588503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:29.281008 containerd[1500]: time="2025-09-09T03:21:29.280548011Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.275903551s" Sep 9 03:21:29.281008 containerd[1500]: time="2025-09-09T03:21:29.280648527Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 9 03:21:29.283504 containerd[1500]: time="2025-09-09T03:21:29.283443949Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 9 03:21:29.860942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Sep 9 03:21:29.870580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:30.078679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:30.089683 (kubelet)[2052]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 03:21:30.178599 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255427249.mount: Deactivated successfully. 
Sep 9 03:21:30.191236 containerd[1500]: time="2025-09-09T03:21:30.189000128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:30.191236 containerd[1500]: time="2025-09-09T03:21:30.191030915Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Sep 9 03:21:30.192665 containerd[1500]: time="2025-09-09T03:21:30.192299722Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:30.196778 containerd[1500]: time="2025-09-09T03:21:30.196567869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:30.201060 containerd[1500]: time="2025-09-09T03:21:30.201018177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 917.530615ms" Sep 9 03:21:30.201237 containerd[1500]: time="2025-09-09T03:21:30.201068813Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 9 03:21:30.202145 containerd[1500]: time="2025-09-09T03:21:30.202089023Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 9 03:21:30.218784 kubelet[2052]: E0909 03:21:30.218669 2052 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 03:21:30.222233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 03:21:30.222546 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 03:21:30.854358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2899101652.mount: Deactivated successfully. 
Sep 9 03:21:34.651689 containerd[1500]: time="2025-09-09T03:21:34.651428677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:34.654724 containerd[1500]: time="2025-09-09T03:21:34.654620623Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717" Sep 9 03:21:34.655756 containerd[1500]: time="2025-09-09T03:21:34.655678811Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:34.660770 containerd[1500]: time="2025-09-09T03:21:34.660642288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:34.663595 containerd[1500]: time="2025-09-09T03:21:34.663505263Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.461354532s" Sep 9 03:21:34.663595 containerd[1500]: time="2025-09-09T03:21:34.663589292Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 9 03:21:39.060131 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:39.071582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:39.116159 systemd[1]: Reloading requested from client PID 2145 ('systemctl') (unit session-9.scope)... Sep 9 03:21:39.116228 systemd[1]: Reloading... Sep 9 03:21:39.337240 zram_generator::config[2184]: No configuration found. Sep 9 03:21:39.489993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 03:21:39.601895 systemd[1]: Reloading finished in 484 ms. Sep 9 03:21:39.676010 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 9 03:21:39.676216 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 9 03:21:39.676703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:39.682648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:39.874293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:39.890101 (kubelet)[2249]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 03:21:39.976703 kubelet[2249]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 03:21:39.976703 kubelet[2249]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
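The kubelet started above (PID 2249) warns that --container-runtime-endpoint, --pod-infra-container-image and --volume-plugin-dir are deprecated in favour of the file passed via --config. A hedged sketch of the corresponding KubeletConfiguration follows; the field names come from the kubelet.config.k8s.io/v1beta1 schema as generally documented (not from this log), the socket path is a placeholder, and the volumePluginDir value matches the flexvolume directory the kubelet reports recreating further down. The sandbox image has no KubeletConfiguration counterpart and is usually set on the runtime side instead (e.g. containerd's sandbox_image).

```go
package main

import "fmt"

func main() {
	// Field names per the kubelet.config.k8s.io/v1beta1 KubeletConfiguration
	// schema (an assumption here, not taken from this log). The endpoint is a
	// placeholder; volumePluginDir matches the flexvolume directory the kubelet
	// recreates later in this log.
	const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
`
	fmt.Print(kubeletConfig)
}
```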
Sep 9 03:21:39.976703 kubelet[2249]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 03:21:39.979003 kubelet[2249]: I0909 03:21:39.978898 2249 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 03:21:40.603063 kubelet[2249]: I0909 03:21:40.602947 2249 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 03:21:40.603063 kubelet[2249]: I0909 03:21:40.603020 2249 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 03:21:40.603496 kubelet[2249]: I0909 03:21:40.603461 2249 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 03:21:40.637220 kubelet[2249]: E0909 03:21:40.637120 2249 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.34.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:40.646697 kubelet[2249]: I0909 03:21:40.646410 2249 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 03:21:40.666474 kubelet[2249]: E0909 03:21:40.666399 2249 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 03:21:40.666474 kubelet[2249]: I0909 03:21:40.666453 2249 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 03:21:40.676210 kubelet[2249]: I0909 03:21:40.676130 2249 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 03:21:40.678266 kubelet[2249]: I0909 03:21:40.678206 2249 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 03:21:40.678592 kubelet[2249]: I0909 03:21:40.678523 2249 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 03:21:40.678966 kubelet[2249]: I0909 03:21:40.678591 2249 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hr091.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 03:21:40.678966 kubelet[2249]: I0909 03:21:40.678946 2249 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 03:21:40.678966 kubelet[2249]: I0909 03:21:40.678967 2249 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 03:21:40.679411 kubelet[2249]: I0909 03:21:40.679228 2249 state_mem.go:36] "Initialized new in-memory state store" Sep 9 03:21:40.690366 kubelet[2249]: I0909 03:21:40.690311 2249 kubelet.go:408] "Attempting to sync node with API server" Sep 9 03:21:40.690506 kubelet[2249]: I0909 03:21:40.690378 2249 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 03:21:40.690506 kubelet[2249]: I0909 03:21:40.690460 2249 kubelet.go:314] "Adding apiserver pod source" Sep 9 03:21:40.690639 kubelet[2249]: I0909 03:21:40.690515 2249 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 03:21:40.703688 kubelet[2249]: W0909 03:21:40.703251 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.34.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hr091.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:40.703688 kubelet[2249]: E0909 03:21:40.703325 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.34.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hr091.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:40.703688 kubelet[2249]: I0909 03:21:40.703477 2249 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 03:21:40.707130 kubelet[2249]: I0909 03:21:40.706928 2249 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 03:21:40.709200 kubelet[2249]: W0909 03:21:40.707668 2249 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 03:21:40.709564 kubelet[2249]: I0909 03:21:40.709542 2249 server.go:1274] "Started kubelet" Sep 9 03:21:40.712690 kubelet[2249]: I0909 03:21:40.712667 2249 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 03:21:40.720505 kubelet[2249]: I0909 03:21:40.720441 2249 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 03:21:40.727150 kubelet[2249]: I0909 03:21:40.726002 2249 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 03:21:40.727740 kubelet[2249]: W0909 03:21:40.709548 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.34.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:40.727859 kubelet[2249]: E0909 03:21:40.727792 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.34.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:40.730209 kubelet[2249]: I0909 03:21:40.728299 2249 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 03:21:40.730209 kubelet[2249]: I0909 03:21:40.729673 2249 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 03:21:40.731333 kubelet[2249]: I0909 03:21:40.731300 2249 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 03:21:40.733658 kubelet[2249]: E0909 03:21:40.718253 2249 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.34.194:6443/api/v1/namespaces/default/events\": dial tcp 10.230.34.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-hr091.gb1.brightbox.com.18637f2aa349a2b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-hr091.gb1.brightbox.com,UID:srv-hr091.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-hr091.gb1.brightbox.com,},FirstTimestamp:2025-09-09 03:21:40.70949138 +0000 UTC m=+0.811705754,LastTimestamp:2025-09-09 03:21:40.70949138 +0000 UTC m=+0.811705754,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-hr091.gb1.brightbox.com,}" Sep 9 03:21:40.733877 kubelet[2249]: E0909 03:21:40.731843 
2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-hr091.gb1.brightbox.com\" not found" Sep 9 03:21:40.734480 kubelet[2249]: I0909 03:21:40.733445 2249 server.go:449] "Adding debug handlers to kubelet server" Sep 9 03:21:40.738054 kubelet[2249]: I0909 03:21:40.731604 2249 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 03:21:40.740787 kubelet[2249]: I0909 03:21:40.733810 2249 reconciler.go:26] "Reconciler: start to sync state" Sep 9 03:21:40.741028 kubelet[2249]: E0909 03:21:40.734517 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hr091.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.194:6443: connect: connection refused" interval="200ms" Sep 9 03:21:40.741209 kubelet[2249]: I0909 03:21:40.738288 2249 factory.go:221] Registration of the systemd container factory successfully Sep 9 03:21:40.742195 kubelet[2249]: I0909 03:21:40.742128 2249 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 03:21:40.742455 kubelet[2249]: W0909 03:21:40.740701 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.34.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:40.743266 kubelet[2249]: E0909 03:21:40.742544 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.34.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:40.743266 kubelet[2249]: E0909 03:21:40.742824 2249 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 03:21:40.747375 kubelet[2249]: I0909 03:21:40.747332 2249 factory.go:221] Registration of the containerd container factory successfully Sep 9 03:21:40.778372 kubelet[2249]: I0909 03:21:40.778286 2249 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 03:21:40.782957 kubelet[2249]: I0909 03:21:40.782883 2249 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 03:21:40.783212 kubelet[2249]: I0909 03:21:40.783114 2249 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 03:21:40.784392 kubelet[2249]: I0909 03:21:40.784343 2249 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 03:21:40.784479 kubelet[2249]: E0909 03:21:40.784435 2249 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 03:21:40.792506 kubelet[2249]: W0909 03:21:40.792436 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.34.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:40.792707 kubelet[2249]: E0909 03:21:40.792673 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.34.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:40.794078 kubelet[2249]: I0909 03:21:40.794056 2249 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 03:21:40.794404 kubelet[2249]: I0909 03:21:40.794384 2249 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 03:21:40.794550 kubelet[2249]: I0909 03:21:40.794533 2249 state_mem.go:36] "Initialized new in-memory state store" Sep 9 03:21:40.797541 kubelet[2249]: I0909 03:21:40.797516 2249 policy_none.go:49] "None policy: Start" Sep 9 03:21:40.798594 kubelet[2249]: I0909 03:21:40.798571 2249 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 03:21:40.798827 kubelet[2249]: I0909 03:21:40.798810 2249 state_mem.go:35] "Initializing new in-memory state store" Sep 9 03:21:40.813581 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 03:21:40.828307 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 03:21:40.834267 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 03:21:40.835305 kubelet[2249]: E0909 03:21:40.835244 2249 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-hr091.gb1.brightbox.com\" not found" Sep 9 03:21:40.842360 kubelet[2249]: I0909 03:21:40.842328 2249 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 03:21:40.843901 kubelet[2249]: I0909 03:21:40.842689 2249 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 03:21:40.843901 kubelet[2249]: I0909 03:21:40.842738 2249 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 03:21:40.843901 kubelet[2249]: I0909 03:21:40.843500 2249 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 03:21:40.846021 kubelet[2249]: E0909 03:21:40.845990 2249 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-hr091.gb1.brightbox.com\" not found" Sep 9 03:21:40.903056 systemd[1]: Created slice kubepods-burstable-pod5cae1fec473586ea448634ca55b361e9.slice - libcontainer container kubepods-burstable-pod5cae1fec473586ea448634ca55b361e9.slice. 
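Every client-go reflector, the CSR post and the lease update above fail with "dial tcp 10.230.34.194:6443: connect: connection refused", which is expected while the kube-apiserver static pod has not started yet; the lease controller's retry interval also doubles from the 200ms above through 400ms, 800ms and 1.6s in the entries that follow. A minimal sketch (endpoint taken from the log; the 5s cap is an assumption) that waits for the port with the same kind of doubling backoff:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Advertise address and port as reported in the errors above.
	const apiServer = "10.230.34.194:6443"

	interval := 200 * time.Millisecond // matches the first "will retry" interval
	for {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("kube-apiserver port is accepting connections")
			return
		}
		fmt.Printf("still waiting (%v), retrying in %v\n", err, interval)
		time.Sleep(interval)
		interval *= 2
		if interval > 5*time.Second { // cap is illustrative, not from the log
			interval = 5 * time.Second
		}
	}
}
```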
Sep 9 03:21:40.921053 systemd[1]: Created slice kubepods-burstable-podb9faf44e5c7238e37f759077009a6606.slice - libcontainer container kubepods-burstable-podb9faf44e5c7238e37f759077009a6606.slice. Sep 9 03:21:40.938942 systemd[1]: Created slice kubepods-burstable-pod4eb6c701a82d17fb47eb7c748ac2a14d.slice - libcontainer container kubepods-burstable-pod4eb6c701a82d17fb47eb7c748ac2a14d.slice. Sep 9 03:21:40.942705 kubelet[2249]: I0909 03:21:40.942630 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-ca-certs\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943548 kubelet[2249]: I0909 03:21:40.942876 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-k8s-certs\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943548 kubelet[2249]: I0909 03:21:40.942920 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-flexvolume-dir\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943548 kubelet[2249]: I0909 03:21:40.942956 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-k8s-certs\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943548 kubelet[2249]: I0909 03:21:40.942988 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-kubeconfig\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943548 kubelet[2249]: I0909 03:21:40.943020 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943843 kubelet[2249]: I0909 03:21:40.943063 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-ca-certs\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943843 kubelet[2249]: I0909 03:21:40.943095 2249 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943843 kubelet[2249]: I0909 03:21:40.943124 2249 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cae1fec473586ea448634ca55b361e9-kubeconfig\") pod \"kube-scheduler-srv-hr091.gb1.brightbox.com\" (UID: \"5cae1fec473586ea448634ca55b361e9\") " pod="kube-system/kube-scheduler-srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.943843 kubelet[2249]: E0909 03:21:40.943501 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hr091.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.194:6443: connect: connection refused" interval="400ms" Sep 9 03:21:40.946984 kubelet[2249]: I0909 03:21:40.946763 2249 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:40.947363 kubelet[2249]: E0909 03:21:40.947326 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.34.194:6443/api/v1/nodes\": dial tcp 10.230.34.194:6443: connect: connection refused" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:41.151542 kubelet[2249]: I0909 03:21:41.151478 2249 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:41.152274 kubelet[2249]: E0909 03:21:41.152083 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.34.194:6443/api/v1/nodes\": dial tcp 10.230.34.194:6443: connect: connection refused" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:41.219190 containerd[1500]: time="2025-09-09T03:21:41.218972687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hr091.gb1.brightbox.com,Uid:5cae1fec473586ea448634ca55b361e9,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:41.243580 containerd[1500]: time="2025-09-09T03:21:41.243040665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hr091.gb1.brightbox.com,Uid:b9faf44e5c7238e37f759077009a6606,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:41.243812 containerd[1500]: time="2025-09-09T03:21:41.243773889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hr091.gb1.brightbox.com,Uid:4eb6c701a82d17fb47eb7c748ac2a14d,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:41.344441 kubelet[2249]: E0909 03:21:41.344359 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hr091.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.194:6443: connect: connection refused" interval="800ms" Sep 9 03:21:41.348247 update_engine[1486]: I20250909 03:21:41.347996 1486 update_attempter.cc:509] Updating boot flags... 
Sep 9 03:21:41.394349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2290) Sep 9 03:21:41.486206 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2293) Sep 9 03:21:41.557283 kubelet[2249]: I0909 03:21:41.556449 2249 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:41.557283 kubelet[2249]: E0909 03:21:41.557231 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.34.194:6443/api/v1/nodes\": dial tcp 10.230.34.194:6443: connect: connection refused" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:41.993622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132122078.mount: Deactivated successfully. Sep 9 03:21:42.003894 containerd[1500]: time="2025-09-09T03:21:42.003818880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 03:21:42.005936 containerd[1500]: time="2025-09-09T03:21:42.005618839Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 03:21:42.006495 containerd[1500]: time="2025-09-09T03:21:42.006455947Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 03:21:42.007710 containerd[1500]: time="2025-09-09T03:21:42.007672952Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 03:21:42.008852 containerd[1500]: time="2025-09-09T03:21:42.008720131Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 9 03:21:42.009665 containerd[1500]: time="2025-09-09T03:21:42.009623950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 03:21:42.009861 containerd[1500]: time="2025-09-09T03:21:42.009800598Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 03:21:42.016263 containerd[1500]: time="2025-09-09T03:21:42.016149876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 03:21:42.018866 containerd[1500]: time="2025-09-09T03:21:42.018827269Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 774.902223ms" Sep 9 03:21:42.022086 containerd[1500]: time="2025-09-09T03:21:42.021968480Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 802.49157ms" Sep 9 03:21:42.024111 containerd[1500]: time="2025-09-09T03:21:42.023809868Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 779.947901ms" Sep 9 03:21:42.062869 kubelet[2249]: W0909 03:21:42.062685 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.34.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:42.062869 kubelet[2249]: E0909 03:21:42.062864 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.34.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:42.107151 kubelet[2249]: W0909 03:21:42.106983 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.34.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:42.107151 kubelet[2249]: E0909 03:21:42.107101 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.34.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:42.146223 kubelet[2249]: E0909 03:21:42.146065 2249 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.34.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-hr091.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.34.194:6443: connect: connection refused" interval="1.6s" Sep 9 03:21:42.190566 kubelet[2249]: W0909 03:21:42.190366 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.34.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hr091.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:42.190566 kubelet[2249]: E0909 03:21:42.190501 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.34.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-hr091.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:42.243215 containerd[1500]: time="2025-09-09T03:21:42.241651703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:42.243215 containerd[1500]: time="2025-09-09T03:21:42.241745769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:42.243215 containerd[1500]: time="2025-09-09T03:21:42.241765493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.243215 containerd[1500]: time="2025-09-09T03:21:42.241943237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.245543 containerd[1500]: time="2025-09-09T03:21:42.245355673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:42.245669 containerd[1500]: time="2025-09-09T03:21:42.245454893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:42.245669 containerd[1500]: time="2025-09-09T03:21:42.245480311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.245669 containerd[1500]: time="2025-09-09T03:21:42.245597011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.251678 containerd[1500]: time="2025-09-09T03:21:42.251553557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:42.251889 containerd[1500]: time="2025-09-09T03:21:42.251643209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:42.251889 containerd[1500]: time="2025-09-09T03:21:42.251695879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.251889 containerd[1500]: time="2025-09-09T03:21:42.251828083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:42.305478 systemd[1]: Started cri-containerd-78d55435aa504827f712f2bd284f0ab5700d2578d77db11166f48aa919180143.scope - libcontainer container 78d55435aa504827f712f2bd284f0ab5700d2578d77db11166f48aa919180143. Sep 9 03:21:42.308685 systemd[1]: Started cri-containerd-c9e4ed944ec58fe4a4b9acd7ead6d9c24ab1f2e0829c4751ff6ef4aa1c711e2f.scope - libcontainer container c9e4ed944ec58fe4a4b9acd7ead6d9c24ab1f2e0829c4751ff6ef4aa1c711e2f. Sep 9 03:21:42.316824 systemd[1]: Started cri-containerd-d7839a66db3476ac0a687e603649c188598de4f80a6287494d39c97d1c3c11e6.scope - libcontainer container d7839a66db3476ac0a687e603649c188598de4f80a6287494d39c97d1c3c11e6. 
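With the pause sandboxes prepared under the runc v2 shim, the entries that follow show the CRI call order for each control-plane static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued inside that sandbox, and StartContainer brings the container up. A schematic sketch of the same sequence against a CRI runtime service is below; the socket path and the skeletal configs are assumptions (a real runtime generally requires more fields), so treat it as an outline of the call order rather than a working pod launcher:

```go
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Socket path and the minimal configs below are assumptions.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "demo", Uid: "demo-uid", Namespace: "default",
		},
	}

	// 1. RunPodSandbox returns the sandbox id, as in the log entries below.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "demo"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/pause:3.8"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
	log.Printf("sandbox %s, container %s started", sb.PodSandboxId, ctr.ContainerId)
}
```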
Sep 9 03:21:42.341632 kubelet[2249]: W0909 03:21:42.341471 2249 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.34.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.34.194:6443: connect: connection refused Sep 9 03:21:42.341632 kubelet[2249]: E0909 03:21:42.341534 2249 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.34.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:42.363453 kubelet[2249]: I0909 03:21:42.362738 2249 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:42.363453 kubelet[2249]: E0909 03:21:42.363318 2249 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.34.194:6443/api/v1/nodes\": dial tcp 10.230.34.194:6443: connect: connection refused" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:42.410560 containerd[1500]: time="2025-09-09T03:21:42.410507412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-hr091.gb1.brightbox.com,Uid:4eb6c701a82d17fb47eb7c748ac2a14d,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9e4ed944ec58fe4a4b9acd7ead6d9c24ab1f2e0829c4751ff6ef4aa1c711e2f\"" Sep 9 03:21:42.430508 containerd[1500]: time="2025-09-09T03:21:42.430429583Z" level=info msg="CreateContainer within sandbox \"c9e4ed944ec58fe4a4b9acd7ead6d9c24ab1f2e0829c4751ff6ef4aa1c711e2f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 03:21:42.455714 containerd[1500]: time="2025-09-09T03:21:42.455386913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-hr091.gb1.brightbox.com,Uid:5cae1fec473586ea448634ca55b361e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7839a66db3476ac0a687e603649c188598de4f80a6287494d39c97d1c3c11e6\"" Sep 9 03:21:42.462531 containerd[1500]: time="2025-09-09T03:21:42.462427677Z" level=info msg="CreateContainer within sandbox \"d7839a66db3476ac0a687e603649c188598de4f80a6287494d39c97d1c3c11e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 03:21:42.465428 containerd[1500]: time="2025-09-09T03:21:42.465381081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-hr091.gb1.brightbox.com,Uid:b9faf44e5c7238e37f759077009a6606,Namespace:kube-system,Attempt:0,} returns sandbox id \"78d55435aa504827f712f2bd284f0ab5700d2578d77db11166f48aa919180143\"" Sep 9 03:21:42.469024 containerd[1500]: time="2025-09-09T03:21:42.468847877Z" level=info msg="CreateContainer within sandbox \"78d55435aa504827f712f2bd284f0ab5700d2578d77db11166f48aa919180143\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 03:21:42.473086 containerd[1500]: time="2025-09-09T03:21:42.473028252Z" level=info msg="CreateContainer within sandbox \"c9e4ed944ec58fe4a4b9acd7ead6d9c24ab1f2e0829c4751ff6ef4aa1c711e2f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69b85066898fb7fa3eb4ef774b4bc950f028d1da2d30c7a35af7181d2ff03136\"" Sep 9 03:21:42.474233 containerd[1500]: time="2025-09-09T03:21:42.474054459Z" level=info msg="StartContainer for \"69b85066898fb7fa3eb4ef774b4bc950f028d1da2d30c7a35af7181d2ff03136\"" Sep 9 03:21:42.492642 containerd[1500]: 
time="2025-09-09T03:21:42.492420940Z" level=info msg="CreateContainer within sandbox \"d7839a66db3476ac0a687e603649c188598de4f80a6287494d39c97d1c3c11e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc591c6f38e8bd540a8cafb75af3924555bbaeda3d60dffe61d8b6bbf4a17edb\"" Sep 9 03:21:42.493795 containerd[1500]: time="2025-09-09T03:21:42.493604177Z" level=info msg="CreateContainer within sandbox \"78d55435aa504827f712f2bd284f0ab5700d2578d77db11166f48aa919180143\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0d4061e6e680365d1e2279e6c521831b3134d05457aff7e44f11c18a2afe952c\"" Sep 9 03:21:42.494720 containerd[1500]: time="2025-09-09T03:21:42.494688162Z" level=info msg="StartContainer for \"0d4061e6e680365d1e2279e6c521831b3134d05457aff7e44f11c18a2afe952c\"" Sep 9 03:21:42.496492 containerd[1500]: time="2025-09-09T03:21:42.495264685Z" level=info msg="StartContainer for \"cc591c6f38e8bd540a8cafb75af3924555bbaeda3d60dffe61d8b6bbf4a17edb\"" Sep 9 03:21:42.543545 systemd[1]: Started cri-containerd-69b85066898fb7fa3eb4ef774b4bc950f028d1da2d30c7a35af7181d2ff03136.scope - libcontainer container 69b85066898fb7fa3eb4ef774b4bc950f028d1da2d30c7a35af7181d2ff03136. Sep 9 03:21:42.553429 systemd[1]: Started cri-containerd-cc591c6f38e8bd540a8cafb75af3924555bbaeda3d60dffe61d8b6bbf4a17edb.scope - libcontainer container cc591c6f38e8bd540a8cafb75af3924555bbaeda3d60dffe61d8b6bbf4a17edb. Sep 9 03:21:42.562559 systemd[1]: Started cri-containerd-0d4061e6e680365d1e2279e6c521831b3134d05457aff7e44f11c18a2afe952c.scope - libcontainer container 0d4061e6e680365d1e2279e6c521831b3134d05457aff7e44f11c18a2afe952c. Sep 9 03:21:42.652070 containerd[1500]: time="2025-09-09T03:21:42.652012966Z" level=info msg="StartContainer for \"cc591c6f38e8bd540a8cafb75af3924555bbaeda3d60dffe61d8b6bbf4a17edb\" returns successfully" Sep 9 03:21:42.663767 containerd[1500]: time="2025-09-09T03:21:42.663713126Z" level=info msg="StartContainer for \"69b85066898fb7fa3eb4ef774b4bc950f028d1da2d30c7a35af7181d2ff03136\" returns successfully" Sep 9 03:21:42.678397 kubelet[2249]: E0909 03:21:42.677411 2249 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.34.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.34.194:6443: connect: connection refused" logger="UnhandledError" Sep 9 03:21:42.704291 containerd[1500]: time="2025-09-09T03:21:42.702126951Z" level=info msg="StartContainer for \"0d4061e6e680365d1e2279e6c521831b3134d05457aff7e44f11c18a2afe952c\" returns successfully" Sep 9 03:21:43.969423 kubelet[2249]: I0909 03:21:43.968783 2249 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:45.513403 kubelet[2249]: E0909 03:21:45.513286 2249 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-hr091.gb1.brightbox.com\" not found" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:45.579379 kubelet[2249]: I0909 03:21:45.576394 2249 kubelet_node_status.go:75] "Successfully registered node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:45.711722 kubelet[2249]: I0909 03:21:45.711556 2249 apiserver.go:52] "Watching apiserver" Sep 9 03:21:45.741630 kubelet[2249]: I0909 03:21:45.741570 2249 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 03:21:46.133325 
kubelet[2249]: E0909 03:21:46.133259 2249 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-hr091.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:47.909160 systemd[1]: Reloading requested from client PID 2538 ('systemctl') (unit session-9.scope)... Sep 9 03:21:47.909226 systemd[1]: Reloading... Sep 9 03:21:48.039254 zram_generator::config[2586]: No configuration found. Sep 9 03:21:48.202882 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 03:21:48.339603 systemd[1]: Reloading finished in 429 ms. Sep 9 03:21:48.406026 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:48.427538 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 03:21:48.427943 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:48.428063 systemd[1]: kubelet.service: Consumed 1.371s CPU time, 132.0M memory peak, 0B memory swap peak. Sep 9 03:21:48.434547 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 03:21:48.654486 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 03:21:48.667709 (kubelet)[2641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 03:21:48.793202 kubelet[2641]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 03:21:48.793202 kubelet[2641]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 03:21:48.793202 kubelet[2641]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 03:21:48.793202 kubelet[2641]: I0909 03:21:48.792789 2641 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 03:21:48.805832 kubelet[2641]: I0909 03:21:48.805773 2641 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 03:21:48.806020 kubelet[2641]: I0909 03:21:48.806000 2641 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 03:21:48.806520 kubelet[2641]: I0909 03:21:48.806493 2641 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 03:21:48.808862 kubelet[2641]: I0909 03:21:48.808511 2641 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
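The kubelet restarted above (PID 2641) reports "Client rotation is on" and loads its pair from /var/lib/kubelet/pki/kubelet-client-current.pem, typically a symlink that rotation keeps pointing at the newest client certificate. A small sketch that reads that file and prints the certificate's validity window, useful when checking whether rotation is keeping the credential fresh (path taken from the log; run with enough privileges to read it):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path reported by the kubelet above; the file holds the client certificate
	// and key concatenated, so walk the PEM blocks and pick the certificate.
	data, err := os.ReadFile("/var/lib/kubelet/pki/kubelet-client-current.pem")
	if err != nil {
		log.Fatal(err)
	}
	for block, rest := pem.Decode(data); block != nil; block, rest = pem.Decode(rest) {
		if block.Type != "CERTIFICATE" {
			continue
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("subject: %s\n", cert.Subject)
		fmt.Printf("valid:   %s -> %s\n", cert.NotBefore, cert.NotAfter)
	}
}
```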
Sep 9 03:21:48.813325 kubelet[2641]: I0909 03:21:48.813011 2641 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 03:21:48.823700 kubelet[2641]: E0909 03:21:48.823652 2641 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 03:21:48.824267 kubelet[2641]: I0909 03:21:48.824243 2641 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 03:21:48.831189 kubelet[2641]: I0909 03:21:48.830271 2641 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 03:21:48.831189 kubelet[2641]: I0909 03:21:48.830611 2641 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 03:21:48.831189 kubelet[2641]: I0909 03:21:48.830865 2641 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 03:21:48.831398 kubelet[2641]: I0909 03:21:48.830913 2641 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-hr091.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 03:21:48.831712 kubelet[2641]: I0909 03:21:48.831688 2641 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 03:21:48.831839 kubelet[2641]: I0909 03:21:48.831819 2641 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 03:21:48.832013 kubelet[2641]: I0909 03:21:48.831992 2641 state_mem.go:36] "Initialized new in-memory state store" Sep 9 03:21:48.832338 kubelet[2641]: I0909 03:21:48.832317 2641 kubelet.go:408] "Attempting to sync node with API server" Sep 9 03:21:48.832479 kubelet[2641]: I0909 03:21:48.832458 2641 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 03:21:48.832642 kubelet[2641]: I0909 
03:21:48.832622 2641 kubelet.go:314] "Adding apiserver pod source" Sep 9 03:21:48.832787 kubelet[2641]: I0909 03:21:48.832767 2641 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 03:21:48.835635 kubelet[2641]: I0909 03:21:48.835591 2641 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 03:21:48.836692 kubelet[2641]: I0909 03:21:48.836390 2641 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 03:21:48.841067 kubelet[2641]: I0909 03:21:48.841042 2641 server.go:1274] "Started kubelet" Sep 9 03:21:48.844639 kubelet[2641]: I0909 03:21:48.842888 2641 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 03:21:48.844639 kubelet[2641]: I0909 03:21:48.843393 2641 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 03:21:48.844639 kubelet[2641]: I0909 03:21:48.843498 2641 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 03:21:48.845215 kubelet[2641]: I0909 03:21:48.845194 2641 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 03:21:48.848443 kubelet[2641]: I0909 03:21:48.848420 2641 server.go:449] "Adding debug handlers to kubelet server" Sep 9 03:21:48.857993 kubelet[2641]: I0909 03:21:48.857958 2641 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 03:21:48.887676 kubelet[2641]: E0909 03:21:48.887611 2641 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 03:21:48.887676 kubelet[2641]: I0909 03:21:48.865704 2641 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 03:21:48.890008 kubelet[2641]: E0909 03:21:48.865910 2641 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-hr091.gb1.brightbox.com\" not found" Sep 9 03:21:48.890008 kubelet[2641]: I0909 03:21:48.879114 2641 factory.go:221] Registration of the systemd container factory successfully Sep 9 03:21:48.890704 kubelet[2641]: I0909 03:21:48.890083 2641 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 03:21:48.891984 kubelet[2641]: I0909 03:21:48.865669 2641 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 03:21:48.895453 kubelet[2641]: I0909 03:21:48.895430 2641 reconciler.go:26] "Reconciler: start to sync state" Sep 9 03:21:48.904115 kubelet[2641]: I0909 03:21:48.903348 2641 factory.go:221] Registration of the containerd container factory successfully Sep 9 03:21:48.904115 kubelet[2641]: I0909 03:21:48.903943 2641 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 03:21:48.908807 kubelet[2641]: I0909 03:21:48.907446 2641 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 03:21:48.908807 kubelet[2641]: I0909 03:21:48.907504 2641 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 03:21:48.908807 kubelet[2641]: I0909 03:21:48.907539 2641 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 03:21:48.908807 kubelet[2641]: E0909 03:21:48.907620 2641 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 03:21:48.939900 sudo[2664]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 03:21:48.940506 sudo[2664]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.994614 2641 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.994644 2641 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.994678 2641 state_mem.go:36] "Initialized new in-memory state store" Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.994987 2641 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.995007 2641 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 03:21:48.995334 kubelet[2641]: I0909 03:21:48.995044 2641 policy_none.go:49] "None policy: Start" Sep 9 03:21:48.998636 kubelet[2641]: I0909 03:21:48.997676 2641 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 03:21:48.998636 kubelet[2641]: I0909 03:21:48.997720 2641 state_mem.go:35] "Initializing new in-memory state store" Sep 9 03:21:48.998636 kubelet[2641]: I0909 03:21:48.997970 2641 state_mem.go:75] "Updated machine memory state" Sep 9 03:21:49.007781 kubelet[2641]: E0909 03:21:49.007752 2641 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 03:21:49.008760 kubelet[2641]: I0909 03:21:49.008732 2641 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 03:21:49.009046 kubelet[2641]: I0909 03:21:49.009021 2641 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 03:21:49.009135 kubelet[2641]: I0909 03:21:49.009052 2641 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 03:21:49.011211 kubelet[2641]: I0909 03:21:49.009631 2641 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 03:21:49.140585 kubelet[2641]: I0909 03:21:49.140392 2641 kubelet_node_status.go:72] "Attempting to register node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.164860 kubelet[2641]: I0909 03:21:49.164378 2641 kubelet_node_status.go:111] "Node was previously registered" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.165445 kubelet[2641]: I0909 03:21:49.165139 2641 kubelet_node_status.go:75] "Successfully registered node" node="srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.227202 kubelet[2641]: W0909 03:21:49.225924 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 03:21:49.229037 kubelet[2641]: W0909 03:21:49.228823 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 03:21:49.234410 kubelet[2641]: W0909 
03:21:49.234261 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298553 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-flexvolume-dir\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298612 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-ca-certs\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298644 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-usr-share-ca-certificates\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298676 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-ca-certs\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298707 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298737 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5cae1fec473586ea448634ca55b361e9-kubeconfig\") pod \"kube-scheduler-srv-hr091.gb1.brightbox.com\" (UID: \"5cae1fec473586ea448634ca55b361e9\") " pod="kube-system/kube-scheduler-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298762 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b9faf44e5c7238e37f759077009a6606-k8s-certs\") pod \"kube-apiserver-srv-hr091.gb1.brightbox.com\" (UID: \"b9faf44e5c7238e37f759077009a6606\") " pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298790 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-k8s-certs\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: 
\"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.299243 kubelet[2641]: I0909 03:21:49.298821 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4eb6c701a82d17fb47eb7c748ac2a14d-kubeconfig\") pod \"kube-controller-manager-srv-hr091.gb1.brightbox.com\" (UID: \"4eb6c701a82d17fb47eb7c748ac2a14d\") " pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.712406 sudo[2664]: pam_unix(sudo:session): session closed for user root Sep 9 03:21:49.835256 kubelet[2641]: I0909 03:21:49.835147 2641 apiserver.go:52] "Watching apiserver" Sep 9 03:21:49.888624 kubelet[2641]: I0909 03:21:49.888545 2641 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 03:21:49.937984 kubelet[2641]: I0909 03:21:49.937796 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-hr091.gb1.brightbox.com" podStartSLOduration=0.937762762 podStartE2EDuration="937.762762ms" podCreationTimestamp="2025-09-09 03:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:21:49.914647885 +0000 UTC m=+1.210247085" watchObservedRunningTime="2025-09-09 03:21:49.937762762 +0000 UTC m=+1.233361971" Sep 9 03:21:49.969498 kubelet[2641]: W0909 03:21:49.969371 2641 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 9 03:21:49.969498 kubelet[2641]: E0909 03:21:49.969457 2641 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-hr091.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" Sep 9 03:21:49.972068 kubelet[2641]: I0909 03:21:49.972014 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-hr091.gb1.brightbox.com" podStartSLOduration=0.971999585 podStartE2EDuration="971.999585ms" podCreationTimestamp="2025-09-09 03:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:21:49.938691057 +0000 UTC m=+1.234290270" watchObservedRunningTime="2025-09-09 03:21:49.971999585 +0000 UTC m=+1.267598793" Sep 9 03:21:49.996105 kubelet[2641]: I0909 03:21:49.996039 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-hr091.gb1.brightbox.com" podStartSLOduration=0.996018894 podStartE2EDuration="996.018894ms" podCreationTimestamp="2025-09-09 03:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:21:49.972929585 +0000 UTC m=+1.268528800" watchObservedRunningTime="2025-09-09 03:21:49.996018894 +0000 UTC m=+1.291618100" Sep 9 03:21:51.828716 sudo[1746]: pam_unix(sudo:session): session closed for user root Sep 9 03:21:51.976928 sshd[1743]: pam_unix(sshd:session): session closed for user core Sep 9 03:21:51.983681 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit. Sep 9 03:21:51.984476 systemd[1]: sshd@6-10.230.34.194:22-147.75.109.163:53140.service: Deactivated successfully. 
Sep 9 03:21:51.989825 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 03:21:51.990374 systemd[1]: session-9.scope: Consumed 6.781s CPU time, 141.3M memory peak, 0B memory swap peak. Sep 9 03:21:51.992618 systemd-logind[1485]: Removed session 9. Sep 9 03:21:54.688407 kubelet[2641]: I0909 03:21:54.688317 2641 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 03:21:54.691102 kubelet[2641]: I0909 03:21:54.689696 2641 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 03:21:54.691207 containerd[1500]: time="2025-09-09T03:21:54.689266234Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 03:21:55.406944 systemd[1]: Created slice kubepods-besteffort-podea60a5bd_5f81_48b1_bf88_b01fb2b41621.slice - libcontainer container kubepods-besteffort-podea60a5bd_5f81_48b1_bf88_b01fb2b41621.slice. Sep 9 03:21:55.442323 kubelet[2641]: I0909 03:21:55.442130 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-cilium-config-path\") pod \"cilium-operator-5d85765b45-ckfhp\" (UID: \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\") " pod="kube-system/cilium-operator-5d85765b45-ckfhp" Sep 9 03:21:55.442673 kubelet[2641]: I0909 03:21:55.442619 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkgk2\" (UniqueName: \"kubernetes.io/projected/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-kube-api-access-jkgk2\") pod \"cilium-operator-5d85765b45-ckfhp\" (UID: \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\") " pod="kube-system/cilium-operator-5d85765b45-ckfhp" Sep 9 03:21:55.483576 systemd[1]: Created slice kubepods-besteffort-podf9798939_460c_40c9_b1fa_89c1fc2e3693.slice - libcontainer container kubepods-besteffort-podf9798939_460c_40c9_b1fa_89c1fc2e3693.slice. Sep 9 03:21:55.505926 systemd[1]: Created slice kubepods-burstable-pode1795d02_2165_41cf_9dd0_099a305a21b7.slice - libcontainer container kubepods-burstable-pode1795d02_2165_41cf_9dd0_099a305a21b7.slice. 
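The kubelet entries above push PodCIDR 192.168.0.0/24 down to the runtime. A small standard-library sketch of the membership check implied by that assignment; the candidate addresses are hypothetical:

    # Checks whether candidate pod IPs fall inside the node's PodCIDR
    # (192.168.0.0/24, taken from the "Updating Pod CIDR" entry above).
    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")

    for ip in ("192.168.0.17", "192.168.1.5", "10.230.34.194"):
        inside = ipaddress.ip_address(ip) in pod_cidr
        print(f"{ip:>14} in {pod_cidr}: {inside}")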
Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543560 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-net\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543631 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9798939-460c-40c9-b1fa-89c1fc2e3693-kube-proxy\") pod \"kube-proxy-stthd\" (UID: \"f9798939-460c-40c9-b1fa-89c1fc2e3693\") " pod="kube-system/kube-proxy-stthd" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543664 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9798939-460c-40c9-b1fa-89c1fc2e3693-xtables-lock\") pod \"kube-proxy-stthd\" (UID: \"f9798939-460c-40c9-b1fa-89c1fc2e3693\") " pod="kube-system/kube-proxy-stthd" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543692 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-hostproc\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543720 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1795d02-2165-41cf-9dd0-099a305a21b7-clustermesh-secrets\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543750 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-kernel\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543777 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-hubble-tls\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.543822 kubelet[2641]: I0909 03:21:55.543823 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbtvb\" (UniqueName: \"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-kube-api-access-vbtvb\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.544398 kubelet[2641]: I0909 03:21:55.543906 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgpqr\" (UniqueName: \"kubernetes.io/projected/f9798939-460c-40c9-b1fa-89c1fc2e3693-kube-api-access-vgpqr\") pod \"kube-proxy-stthd\" (UID: \"f9798939-460c-40c9-b1fa-89c1fc2e3693\") " pod="kube-system/kube-proxy-stthd" Sep 9 03:21:55.544398 kubelet[2641]: I0909 03:21:55.543946 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-run\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.544398 kubelet[2641]: I0909 03:21:55.543976 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cni-path\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.544679 kubelet[2641]: I0909 03:21:55.544552 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-bpf-maps\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.544924 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-cgroup\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.545003 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9798939-460c-40c9-b1fa-89c1fc2e3693-lib-modules\") pod \"kube-proxy-stthd\" (UID: \"f9798939-460c-40c9-b1fa-89c1fc2e3693\") " pod="kube-system/kube-proxy-stthd" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.545066 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-lib-modules\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.545098 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-etc-cni-netd\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.545157 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-xtables-lock\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.545379 kubelet[2641]: I0909 03:21:55.545305 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-config-path\") pod \"cilium-sk9pg\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " pod="kube-system/cilium-sk9pg" Sep 9 03:21:55.722081 containerd[1500]: time="2025-09-09T03:21:55.721927016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ckfhp,Uid:ea60a5bd-5f81-48b1-bf88-b01fb2b41621,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:55.763959 containerd[1500]: time="2025-09-09T03:21:55.763161050Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:55.763959 containerd[1500]: time="2025-09-09T03:21:55.763390978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:55.763959 containerd[1500]: time="2025-09-09T03:21:55.763464457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.764542 containerd[1500]: time="2025-09-09T03:21:55.763929524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.792848 containerd[1500]: time="2025-09-09T03:21:55.792736212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stthd,Uid:f9798939-460c-40c9-b1fa-89c1fc2e3693,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:55.798430 systemd[1]: Started cri-containerd-177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8.scope - libcontainer container 177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8. Sep 9 03:21:55.812989 containerd[1500]: time="2025-09-09T03:21:55.812926871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sk9pg,Uid:e1795d02-2165-41cf-9dd0-099a305a21b7,Namespace:kube-system,Attempt:0,}" Sep 9 03:21:55.867739 containerd[1500]: time="2025-09-09T03:21:55.866327605Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:55.868993 containerd[1500]: time="2025-09-09T03:21:55.868666134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:55.868993 containerd[1500]: time="2025-09-09T03:21:55.868696545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.868993 containerd[1500]: time="2025-09-09T03:21:55.868852001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.886215 containerd[1500]: time="2025-09-09T03:21:55.886070099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:21:55.888467 containerd[1500]: time="2025-09-09T03:21:55.888400906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:21:55.889396 containerd[1500]: time="2025-09-09T03:21:55.889334228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.890392 containerd[1500]: time="2025-09-09T03:21:55.889546054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:21:55.940754 systemd[1]: Started cri-containerd-90e3067767944825d8690f3da7a2b0882b9a0f11f0629f9e388cc6213a3d61a0.scope - libcontainer container 90e3067767944825d8690f3da7a2b0882b9a0f11f0629f9e388cc6213a3d61a0. 
Sep 9 03:21:55.944518 containerd[1500]: time="2025-09-09T03:21:55.944460542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-ckfhp,Uid:ea60a5bd-5f81-48b1-bf88-b01fb2b41621,Namespace:kube-system,Attempt:0,} returns sandbox id \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\"" Sep 9 03:21:55.944754 systemd[1]: Started cri-containerd-ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e.scope - libcontainer container ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e. Sep 9 03:21:55.952395 containerd[1500]: time="2025-09-09T03:21:55.952136181Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 9 03:21:56.031403 containerd[1500]: time="2025-09-09T03:21:56.030323930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-sk9pg,Uid:e1795d02-2165-41cf-9dd0-099a305a21b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\"" Sep 9 03:21:56.038810 containerd[1500]: time="2025-09-09T03:21:56.038749336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-stthd,Uid:f9798939-460c-40c9-b1fa-89c1fc2e3693,Namespace:kube-system,Attempt:0,} returns sandbox id \"90e3067767944825d8690f3da7a2b0882b9a0f11f0629f9e388cc6213a3d61a0\"" Sep 9 03:21:56.043415 containerd[1500]: time="2025-09-09T03:21:56.043312049Z" level=info msg="CreateContainer within sandbox \"90e3067767944825d8690f3da7a2b0882b9a0f11f0629f9e388cc6213a3d61a0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 03:21:56.097545 containerd[1500]: time="2025-09-09T03:21:56.097474214Z" level=info msg="CreateContainer within sandbox \"90e3067767944825d8690f3da7a2b0882b9a0f11f0629f9e388cc6213a3d61a0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02951d29355ff60890977486a3bd34055bcfa8a1e66c992300e19243c1f680cc\"" Sep 9 03:21:56.100411 containerd[1500]: time="2025-09-09T03:21:56.098358957Z" level=info msg="StartContainer for \"02951d29355ff60890977486a3bd34055bcfa8a1e66c992300e19243c1f680cc\"" Sep 9 03:21:56.133386 systemd[1]: Started cri-containerd-02951d29355ff60890977486a3bd34055bcfa8a1e66c992300e19243c1f680cc.scope - libcontainer container 02951d29355ff60890977486a3bd34055bcfa8a1e66c992300e19243c1f680cc. Sep 9 03:21:56.178115 containerd[1500]: time="2025-09-09T03:21:56.178060843Z" level=info msg="StartContainer for \"02951d29355ff60890977486a3bd34055bcfa8a1e66c992300e19243c1f680cc\" returns successfully" Sep 9 03:21:58.449622 kubelet[2641]: I0909 03:21:58.449434 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-stthd" podStartSLOduration=3.449273862 podStartE2EDuration="3.449273862s" podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:21:56.999962627 +0000 UTC m=+8.295561846" watchObservedRunningTime="2025-09-09 03:21:58.449273862 +0000 UTC m=+9.744873066" Sep 9 03:21:58.641592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278912823.mount: Deactivated successfully. 
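The pod_startup_latency_tracker entries (the kube-proxy one just above, and the control-plane ones earlier) share one format. A sketch that extracts the pod name and podStartE2EDuration with a regular expression; the sample line is abbreviated from the kube-proxy entry above:

    # Pulls pod name and end-to-end startup duration out of
    # pod_startup_latency_tracker log lines like the ones above.
    import re

    PATTERN = re.compile(
        r'"Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[\d.]+)'
        r' podStartE2EDuration="(?P<e2e>[^"]+)"'
    )

    sample = ('I0909 03:21:58.449434 2641 pod_startup_latency_tracker.go:104] '
              '"Observed pod startup duration" pod="kube-system/kube-proxy-stthd" '
              'podStartSLOduration=3.449273862 podStartE2EDuration="3.449273862s" '
              'podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC"')

    m = PATTERN.search(sample)
    if m:
        print(m.group("pod"), m.group("e2e"))   # kube-system/kube-proxy-stthd 3.449273862s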
Sep 9 03:21:59.597928 containerd[1500]: time="2025-09-09T03:21:59.596782850Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:59.599593 containerd[1500]: time="2025-09-09T03:21:59.599545951Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Sep 9 03:21:59.600789 containerd[1500]: time="2025-09-09T03:21:59.600754916Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:21:59.603568 containerd[1500]: time="2025-09-09T03:21:59.603531541Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.650692378s" Sep 9 03:21:59.603720 containerd[1500]: time="2025-09-09T03:21:59.603680341Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Sep 9 03:21:59.642697 containerd[1500]: time="2025-09-09T03:21:59.642648431Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 03:21:59.649237 containerd[1500]: time="2025-09-09T03:21:59.649191665Z" level=info msg="CreateContainer within sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 9 03:21:59.666946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343676546.mount: Deactivated successfully. Sep 9 03:21:59.670116 containerd[1500]: time="2025-09-09T03:21:59.669924378Z" level=info msg="CreateContainer within sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\"" Sep 9 03:21:59.677396 containerd[1500]: time="2025-09-09T03:21:59.677361625Z" level=info msg="StartContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\"" Sep 9 03:21:59.727430 systemd[1]: Started cri-containerd-5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759.scope - libcontainer container 5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759. 
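The pull above reports 18,904,197 bytes read for the operator-generic image over 3.650692378 s. The effective pull rate follows directly from those two numbers:

    # Effective throughput of the cilium-operator image pull reported above.
    bytes_read = 18_904_197          # "bytes read" from the containerd entry
    elapsed_s = 3.650692378          # duration from the "Pulled image" entry

    rate = bytes_read / elapsed_s
    print(f"{rate / 1_000_000:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")  # ~5.18 MB/s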
Sep 9 03:21:59.768148 containerd[1500]: time="2025-09-09T03:21:59.768096587Z" level=info msg="StartContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" returns successfully" Sep 9 03:22:00.113802 kubelet[2641]: I0909 03:22:00.111282 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-ckfhp" podStartSLOduration=1.424633247 podStartE2EDuration="5.109001914s" podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC" firstStartedPulling="2025-09-09 03:21:55.94906794 +0000 UTC m=+7.244667135" lastFinishedPulling="2025-09-09 03:21:59.6334366 +0000 UTC m=+10.929035802" observedRunningTime="2025-09-09 03:22:00.102784233 +0000 UTC m=+11.398383457" watchObservedRunningTime="2025-09-09 03:22:00.109001914 +0000 UTC m=+11.404601123" Sep 9 03:22:06.934105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560539767.mount: Deactivated successfully. Sep 9 03:22:10.371895 containerd[1500]: time="2025-09-09T03:22:10.371786319Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:22:10.374283 containerd[1500]: time="2025-09-09T03:22:10.374226013Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Sep 9 03:22:10.375260 containerd[1500]: time="2025-09-09T03:22:10.374604294Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 03:22:10.377365 containerd[1500]: time="2025-09-09T03:22:10.377117923Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 10.734223242s" Sep 9 03:22:10.377365 containerd[1500]: time="2025-09-09T03:22:10.377203145Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Sep 9 03:22:10.401399 containerd[1500]: time="2025-09-09T03:22:10.401186292Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 03:22:10.480021 containerd[1500]: time="2025-09-09T03:22:10.479849265Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\"" Sep 9 03:22:10.481032 containerd[1500]: time="2025-09-09T03:22:10.480994080Z" level=info msg="StartContainer for \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\"" Sep 9 03:22:10.572848 systemd[1]: run-containerd-runc-k8s.io-8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a-runc.u4FOO7.mount: Deactivated successfully. 
Sep 9 03:22:10.584470 systemd[1]: Started cri-containerd-8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a.scope - libcontainer container 8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a. Sep 9 03:22:10.627439 containerd[1500]: time="2025-09-09T03:22:10.627035945Z" level=info msg="StartContainer for \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\" returns successfully" Sep 9 03:22:10.642304 systemd[1]: cri-containerd-8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a.scope: Deactivated successfully. Sep 9 03:22:10.863816 containerd[1500]: time="2025-09-09T03:22:10.847017007Z" level=info msg="shim disconnected" id=8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a namespace=k8s.io Sep 9 03:22:10.863816 containerd[1500]: time="2025-09-09T03:22:10.863794159Z" level=warning msg="cleaning up after shim disconnected" id=8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a namespace=k8s.io Sep 9 03:22:10.863816 containerd[1500]: time="2025-09-09T03:22:10.863829039Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:22:11.236662 containerd[1500]: time="2025-09-09T03:22:11.235380696Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 03:22:11.272976 containerd[1500]: time="2025-09-09T03:22:11.272813461Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\"" Sep 9 03:22:11.273583 containerd[1500]: time="2025-09-09T03:22:11.273548191Z" level=info msg="StartContainer for \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\"" Sep 9 03:22:11.313489 systemd[1]: Started cri-containerd-1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e.scope - libcontainer container 1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e. Sep 9 03:22:11.353610 containerd[1500]: time="2025-09-09T03:22:11.353427818Z" level=info msg="StartContainer for \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\" returns successfully" Sep 9 03:22:11.380803 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 03:22:11.381625 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 03:22:11.381809 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 9 03:22:11.389818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 03:22:11.390165 systemd[1]: cri-containerd-1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e.scope: Deactivated successfully. 
Sep 9 03:22:11.443426 containerd[1500]: time="2025-09-09T03:22:11.442021936Z" level=info msg="shim disconnected" id=1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e namespace=k8s.io Sep 9 03:22:11.443426 containerd[1500]: time="2025-09-09T03:22:11.442103076Z" level=warning msg="cleaning up after shim disconnected" id=1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e namespace=k8s.io Sep 9 03:22:11.443426 containerd[1500]: time="2025-09-09T03:22:11.442125124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:22:11.468205 containerd[1500]: time="2025-09-09T03:22:11.465773465Z" level=warning msg="cleanup warnings time=\"2025-09-09T03:22:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 03:22:11.470863 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a-rootfs.mount: Deactivated successfully. Sep 9 03:22:11.491692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 03:22:12.239356 containerd[1500]: time="2025-09-09T03:22:12.239299462Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 03:22:12.272997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1881692350.mount: Deactivated successfully. Sep 9 03:22:12.280971 containerd[1500]: time="2025-09-09T03:22:12.280913414Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\"" Sep 9 03:22:12.283243 containerd[1500]: time="2025-09-09T03:22:12.283051566Z" level=info msg="StartContainer for \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\"" Sep 9 03:22:12.327416 systemd[1]: Started cri-containerd-bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba.scope - libcontainer container bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba. Sep 9 03:22:12.376095 containerd[1500]: time="2025-09-09T03:22:12.375444237Z" level=info msg="StartContainer for \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\" returns successfully" Sep 9 03:22:12.381866 systemd[1]: cri-containerd-bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba.scope: Deactivated successfully. Sep 9 03:22:12.414407 containerd[1500]: time="2025-09-09T03:22:12.414321370Z" level=info msg="shim disconnected" id=bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba namespace=k8s.io Sep 9 03:22:12.414407 containerd[1500]: time="2025-09-09T03:22:12.414391810Z" level=warning msg="cleaning up after shim disconnected" id=bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba namespace=k8s.io Sep 9 03:22:12.414407 containerd[1500]: time="2025-09-09T03:22:12.414408674Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:22:12.468421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba-rootfs.mount: Deactivated successfully. 
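The mount-bpf-fs init container above presumably ensures the BPF filesystem used by the cilium-agent is mounted; that is an inference from the container name, not something this log states. A sketch of the manual check an operator might run on the node, reading the standard /proc/mounts table:

    # Looks for a mounted bpf filesystem (normally at /sys/fs/bpf) in /proc/mounts.
    def bpf_fs_mounted(mounts_path: str = "/proc/mounts") -> bool:
        with open(mounts_path) as f:
            for line in f:
                # /proc/mounts fields: device mountpoint fstype options dump pass
                fields = line.split()
                if len(fields) >= 3 and fields[2] == "bpf":
                    return True
        return False

    if __name__ == "__main__":
        print("bpf filesystem mounted:", bpf_fs_mounted())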
Sep 9 03:22:13.244077 containerd[1500]: time="2025-09-09T03:22:13.243997710Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 03:22:13.264205 containerd[1500]: time="2025-09-09T03:22:13.263494291Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\"" Sep 9 03:22:13.266198 containerd[1500]: time="2025-09-09T03:22:13.264518061Z" level=info msg="StartContainer for \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\"" Sep 9 03:22:13.319456 systemd[1]: Started cri-containerd-e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999.scope - libcontainer container e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999. Sep 9 03:22:13.357958 systemd[1]: cri-containerd-e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999.scope: Deactivated successfully. Sep 9 03:22:13.362929 containerd[1500]: time="2025-09-09T03:22:13.362884687Z" level=info msg="StartContainer for \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\" returns successfully" Sep 9 03:22:13.390493 containerd[1500]: time="2025-09-09T03:22:13.390414931Z" level=info msg="shim disconnected" id=e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999 namespace=k8s.io Sep 9 03:22:13.390493 containerd[1500]: time="2025-09-09T03:22:13.390483449Z" level=warning msg="cleaning up after shim disconnected" id=e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999 namespace=k8s.io Sep 9 03:22:13.390493 containerd[1500]: time="2025-09-09T03:22:13.390498964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:22:13.468418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999-rootfs.mount: Deactivated successfully. Sep 9 03:22:14.252008 containerd[1500]: time="2025-09-09T03:22:14.251957599Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 03:22:14.280157 containerd[1500]: time="2025-09-09T03:22:14.276897949Z" level=info msg="CreateContainer within sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\"" Sep 9 03:22:14.280157 containerd[1500]: time="2025-09-09T03:22:14.278709340Z" level=info msg="StartContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\"" Sep 9 03:22:14.328393 systemd[1]: Started cri-containerd-d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8.scope - libcontainer container d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8. 
Sep 9 03:22:14.372375 containerd[1500]: time="2025-09-09T03:22:14.372276290Z" level=info msg="StartContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" returns successfully" Sep 9 03:22:14.612390 kubelet[2641]: I0909 03:22:14.612215 2641 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 9 03:22:14.674008 kubelet[2641]: W0909 03:22:14.673933 2641 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:srv-hr091.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'srv-hr091.gb1.brightbox.com' and this object Sep 9 03:22:14.674311 kubelet[2641]: E0909 03:22:14.674246 2641 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:srv-hr091.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-hr091.gb1.brightbox.com' and this object" logger="UnhandledError" Sep 9 03:22:14.677541 systemd[1]: Created slice kubepods-burstable-pod5688a6a6_1c86_423a_be00_8a03dc376f70.slice - libcontainer container kubepods-burstable-pod5688a6a6_1c86_423a_be00_8a03dc376f70.slice. Sep 9 03:22:14.694919 systemd[1]: Created slice kubepods-burstable-pod7597933a_3d77_4a54_bdab_efd4a5edeb34.slice - libcontainer container kubepods-burstable-pod7597933a_3d77_4a54_bdab_efd4a5edeb34.slice. Sep 9 03:22:14.794098 kubelet[2641]: I0909 03:22:14.793227 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rm9c\" (UniqueName: \"kubernetes.io/projected/7597933a-3d77-4a54-bdab-efd4a5edeb34-kube-api-access-8rm9c\") pod \"coredns-7c65d6cfc9-j9ghl\" (UID: \"7597933a-3d77-4a54-bdab-efd4a5edeb34\") " pod="kube-system/coredns-7c65d6cfc9-j9ghl" Sep 9 03:22:14.794098 kubelet[2641]: I0909 03:22:14.793295 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll82f\" (UniqueName: \"kubernetes.io/projected/5688a6a6-1c86-423a-be00-8a03dc376f70-kube-api-access-ll82f\") pod \"coredns-7c65d6cfc9-x7tjt\" (UID: \"5688a6a6-1c86-423a-be00-8a03dc376f70\") " pod="kube-system/coredns-7c65d6cfc9-x7tjt" Sep 9 03:22:14.794098 kubelet[2641]: I0909 03:22:14.793330 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7597933a-3d77-4a54-bdab-efd4a5edeb34-config-volume\") pod \"coredns-7c65d6cfc9-j9ghl\" (UID: \"7597933a-3d77-4a54-bdab-efd4a5edeb34\") " pod="kube-system/coredns-7c65d6cfc9-j9ghl" Sep 9 03:22:14.794098 kubelet[2641]: I0909 03:22:14.793396 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5688a6a6-1c86-423a-be00-8a03dc376f70-config-volume\") pod \"coredns-7c65d6cfc9-x7tjt\" (UID: \"5688a6a6-1c86-423a-be00-8a03dc376f70\") " pod="kube-system/coredns-7c65d6cfc9-x7tjt" Sep 9 03:22:15.298380 kubelet[2641]: I0909 03:22:15.298202 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-sk9pg" podStartSLOduration=5.952176611 podStartE2EDuration="20.298143979s" podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC" 
firstStartedPulling="2025-09-09 03:21:56.032780519 +0000 UTC m=+7.328379716" lastFinishedPulling="2025-09-09 03:22:10.378747883 +0000 UTC m=+21.674347084" observedRunningTime="2025-09-09 03:22:15.296811552 +0000 UTC m=+26.592410775" watchObservedRunningTime="2025-09-09 03:22:15.298143979 +0000 UTC m=+26.593743187" Sep 9 03:22:15.897189 kubelet[2641]: E0909 03:22:15.897071 2641 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 03:22:15.897971 kubelet[2641]: E0909 03:22:15.897273 2641 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7597933a-3d77-4a54-bdab-efd4a5edeb34-config-volume podName:7597933a-3d77-4a54-bdab-efd4a5edeb34 nodeName:}" failed. No retries permitted until 2025-09-09 03:22:16.397228823 +0000 UTC m=+27.692828024 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7597933a-3d77-4a54-bdab-efd4a5edeb34-config-volume") pod "coredns-7c65d6cfc9-j9ghl" (UID: "7597933a-3d77-4a54-bdab-efd4a5edeb34") : failed to sync configmap cache: timed out waiting for the condition Sep 9 03:22:15.897971 kubelet[2641]: E0909 03:22:15.897602 2641 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Sep 9 03:22:15.897971 kubelet[2641]: E0909 03:22:15.897659 2641 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5688a6a6-1c86-423a-be00-8a03dc376f70-config-volume podName:5688a6a6-1c86-423a-be00-8a03dc376f70 nodeName:}" failed. No retries permitted until 2025-09-09 03:22:16.397643381 +0000 UTC m=+27.693242576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/5688a6a6-1c86-423a-be00-8a03dc376f70-config-volume") pod "coredns-7c65d6cfc9-x7tjt" (UID: "5688a6a6-1c86-423a-be00-8a03dc376f70") : failed to sync configmap cache: timed out waiting for the condition Sep 9 03:22:16.488555 containerd[1500]: time="2025-09-09T03:22:16.487321927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7tjt,Uid:5688a6a6-1c86-423a-be00-8a03dc376f70,Namespace:kube-system,Attempt:0,}" Sep 9 03:22:16.501204 containerd[1500]: time="2025-09-09T03:22:16.500786799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9ghl,Uid:7597933a-3d77-4a54-bdab-efd4a5edeb34,Namespace:kube-system,Attempt:0,}" Sep 9 03:22:16.964547 systemd-networkd[1429]: cilium_host: Link UP Sep 9 03:22:16.965571 systemd-networkd[1429]: cilium_net: Link UP Sep 9 03:22:16.966207 systemd-networkd[1429]: cilium_net: Gained carrier Sep 9 03:22:16.966688 systemd-networkd[1429]: cilium_host: Gained carrier Sep 9 03:22:17.151318 systemd-networkd[1429]: cilium_vxlan: Link UP Sep 9 03:22:17.151330 systemd-networkd[1429]: cilium_vxlan: Gained carrier Sep 9 03:22:17.735762 kernel: NET: Registered PF_ALG protocol family Sep 9 03:22:17.802477 systemd-networkd[1429]: cilium_host: Gained IPv6LL Sep 9 03:22:17.994437 systemd-networkd[1429]: cilium_net: Gained IPv6LL Sep 9 03:22:18.810257 systemd-networkd[1429]: lxc_health: Link UP Sep 9 03:22:18.816872 systemd-networkd[1429]: lxc_health: Gained carrier Sep 9 03:22:19.117240 systemd-networkd[1429]: lxc7cd16c806feb: Link UP Sep 9 03:22:19.126281 kernel: eth0: renamed from tmp968ac Sep 9 03:22:19.131333 systemd-networkd[1429]: lxc7cd16c806feb: Gained carrier Sep 9 03:22:19.147466 systemd-networkd[1429]: 
cilium_vxlan: Gained IPv6LL Sep 9 03:22:19.147875 systemd-networkd[1429]: lxc5505e5746d98: Link UP Sep 9 03:22:19.153241 kernel: eth0: renamed from tmp8a4f1 Sep 9 03:22:19.164382 systemd-networkd[1429]: lxc5505e5746d98: Gained carrier Sep 9 03:22:20.234407 systemd-networkd[1429]: lxc_health: Gained IPv6LL Sep 9 03:22:21.066472 systemd-networkd[1429]: lxc7cd16c806feb: Gained IPv6LL Sep 9 03:22:21.066974 systemd-networkd[1429]: lxc5505e5746d98: Gained IPv6LL Sep 9 03:22:25.173836 containerd[1500]: time="2025-09-09T03:22:25.172654973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:22:25.175658 containerd[1500]: time="2025-09-09T03:22:25.173777062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:22:25.175658 containerd[1500]: time="2025-09-09T03:22:25.174412922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:22:25.175845 containerd[1500]: time="2025-09-09T03:22:25.175048256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:22:25.252586 containerd[1500]: time="2025-09-09T03:22:25.252244536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:22:25.252586 containerd[1500]: time="2025-09-09T03:22:25.252388408Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:22:25.252586 containerd[1500]: time="2025-09-09T03:22:25.252409089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:22:25.253272 containerd[1500]: time="2025-09-09T03:22:25.253062780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:22:25.253413 systemd[1]: Started cri-containerd-968ac4dfd75e0b32ba0ee63a894301a8d978761353743bc257ca760015639490.scope - libcontainer container 968ac4dfd75e0b32ba0ee63a894301a8d978761353743bc257ca760015639490. Sep 9 03:22:25.316400 systemd[1]: Started cri-containerd-8a4f1ebb2ff9405263c7995dc5aeb32b15dade45850c52e3d7a383442db00389.scope - libcontainer container 8a4f1ebb2ff9405263c7995dc5aeb32b15dade45850c52e3d7a383442db00389. 
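systemd-networkd above reports the Cilium datapath links (cilium_host, cilium_net, cilium_vxlan) and the per-pod lxc* interfaces gaining carrier and IPv6LL addresses. A sketch of confirming the same state from sysfs on the node; the interface names come from the log, the /sys/class/net layout is standard Linux, and the two lxc<hash> interfaces are omitted only to keep the list short:

    # Reads operstate and carrier for the interfaces named in the
    # systemd-networkd entries above.
    from pathlib import Path

    IFACES = ["cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"]

    def read_attr(iface: str, attr: str) -> str:
        path = Path("/sys/class/net") / iface / attr
        try:
            return path.read_text().strip()
        except OSError:
            # carrier is unreadable while a link is down; the interface may also not exist
            return "n/a"

    for iface in IFACES:
        print(f"{iface:13} operstate={read_attr(iface, 'operstate'):8} carrier={read_attr(iface, 'carrier')}")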
Sep 9 03:22:25.371938 containerd[1500]: time="2025-09-09T03:22:25.371818766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j9ghl,Uid:7597933a-3d77-4a54-bdab-efd4a5edeb34,Namespace:kube-system,Attempt:0,} returns sandbox id \"968ac4dfd75e0b32ba0ee63a894301a8d978761353743bc257ca760015639490\"" Sep 9 03:22:25.379682 containerd[1500]: time="2025-09-09T03:22:25.379547771Z" level=info msg="CreateContainer within sandbox \"968ac4dfd75e0b32ba0ee63a894301a8d978761353743bc257ca760015639490\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 03:22:25.406576 containerd[1500]: time="2025-09-09T03:22:25.406521022Z" level=info msg="CreateContainer within sandbox \"968ac4dfd75e0b32ba0ee63a894301a8d978761353743bc257ca760015639490\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d795575791e984e51a0d728ff611feedfe1f610b713bc853394508509bb716f\"" Sep 9 03:22:25.408481 containerd[1500]: time="2025-09-09T03:22:25.407712292Z" level=info msg="StartContainer for \"9d795575791e984e51a0d728ff611feedfe1f610b713bc853394508509bb716f\"" Sep 9 03:22:25.484944 containerd[1500]: time="2025-09-09T03:22:25.484715780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7tjt,Uid:5688a6a6-1c86-423a-be00-8a03dc376f70,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a4f1ebb2ff9405263c7995dc5aeb32b15dade45850c52e3d7a383442db00389\"" Sep 9 03:22:25.493076 containerd[1500]: time="2025-09-09T03:22:25.492939019Z" level=info msg="CreateContainer within sandbox \"8a4f1ebb2ff9405263c7995dc5aeb32b15dade45850c52e3d7a383442db00389\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 03:22:25.493676 systemd[1]: Started cri-containerd-9d795575791e984e51a0d728ff611feedfe1f610b713bc853394508509bb716f.scope - libcontainer container 9d795575791e984e51a0d728ff611feedfe1f610b713bc853394508509bb716f. Sep 9 03:22:25.517388 containerd[1500]: time="2025-09-09T03:22:25.517331876Z" level=info msg="CreateContainer within sandbox \"8a4f1ebb2ff9405263c7995dc5aeb32b15dade45850c52e3d7a383442db00389\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d764662cb9242a07e3cea9d9705ae1f0599eb6674538e64ef77b18c9a7343382\"" Sep 9 03:22:25.519494 containerd[1500]: time="2025-09-09T03:22:25.519361500Z" level=info msg="StartContainer for \"d764662cb9242a07e3cea9d9705ae1f0599eb6674538e64ef77b18c9a7343382\"" Sep 9 03:22:25.580397 systemd[1]: Started cri-containerd-d764662cb9242a07e3cea9d9705ae1f0599eb6674538e64ef77b18c9a7343382.scope - libcontainer container d764662cb9242a07e3cea9d9705ae1f0599eb6674538e64ef77b18c9a7343382. Sep 9 03:22:25.592547 containerd[1500]: time="2025-09-09T03:22:25.592476696Z" level=info msg="StartContainer for \"9d795575791e984e51a0d728ff611feedfe1f610b713bc853394508509bb716f\" returns successfully" Sep 9 03:22:25.626809 containerd[1500]: time="2025-09-09T03:22:25.626759625Z" level=info msg="StartContainer for \"d764662cb9242a07e3cea9d9705ae1f0599eb6674538e64ef77b18c9a7343382\" returns successfully" Sep 9 03:22:26.187589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1876326640.mount: Deactivated successfully. 
Sep 9 03:22:26.316829 kubelet[2641]: I0909 03:22:26.316448 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j9ghl" podStartSLOduration=31.316384386 podStartE2EDuration="31.316384386s" podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:22:26.314389478 +0000 UTC m=+37.609988695" watchObservedRunningTime="2025-09-09 03:22:26.316384386 +0000 UTC m=+37.611983596" Sep 9 03:22:26.340210 kubelet[2641]: I0909 03:22:26.339103 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x7tjt" podStartSLOduration=31.339081142 podStartE2EDuration="31.339081142s" podCreationTimestamp="2025-09-09 03:21:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:22:26.333766102 +0000 UTC m=+37.629365326" watchObservedRunningTime="2025-09-09 03:22:26.339081142 +0000 UTC m=+37.634680351" Sep 9 03:23:16.589706 systemd[1]: Started sshd@7-10.230.34.194:22-147.75.109.163:45458.service - OpenSSH per-connection server daemon (147.75.109.163:45458). Sep 9 03:23:17.515482 sshd[4025]: Accepted publickey for core from 147.75.109.163 port 45458 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:17.518614 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:17.528437 systemd-logind[1485]: New session 10 of user core. Sep 9 03:23:17.532396 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 03:23:18.686741 sshd[4025]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:18.691977 systemd[1]: sshd@7-10.230.34.194:22-147.75.109.163:45458.service: Deactivated successfully. Sep 9 03:23:18.694764 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 03:23:18.697485 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit. Sep 9 03:23:18.699141 systemd-logind[1485]: Removed session 10. Sep 9 03:23:23.844287 systemd[1]: Started sshd@8-10.230.34.194:22-147.75.109.163:51002.service - OpenSSH per-connection server daemon (147.75.109.163:51002). Sep 9 03:23:24.758340 sshd[4039]: Accepted publickey for core from 147.75.109.163 port 51002 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:24.760835 sshd[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:24.769386 systemd-logind[1485]: New session 11 of user core. Sep 9 03:23:24.773419 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 03:23:25.698788 sshd[4039]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:25.703300 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Sep 9 03:23:25.703846 systemd[1]: sshd@8-10.230.34.194:22-147.75.109.163:51002.service: Deactivated successfully. Sep 9 03:23:25.706666 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 03:23:25.709120 systemd-logind[1485]: Removed session 11. Sep 9 03:23:30.860334 systemd[1]: Started sshd@9-10.230.34.194:22-147.75.109.163:40728.service - OpenSSH per-connection server daemon (147.75.109.163:40728). 
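The coredns podStartSLOduration of roughly 31.3 s above is simply the gap between podCreationTimestamp (03:21:55) and the watch-observed running time (03:22:26.316384386). Reproducing that arithmetic from the printed timestamps, truncated to microseconds:

    # Recomputes the ~31.3 s coredns startup duration from the timestamps
    # shown in the pod_startup_latency_tracker entries above.
    from datetime import datetime, timezone

    created = datetime(2025, 9, 9, 3, 21, 55, tzinfo=timezone.utc)
    running = datetime(2025, 9, 9, 3, 22, 26, 316384, tzinfo=timezone.utc)  # .316384386 truncated to µs

    print((running - created).total_seconds())   # 31.316384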
Sep 9 03:23:31.808450 sshd[4055]: Accepted publickey for core from 147.75.109.163 port 40728 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:31.810665 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:31.817742 systemd-logind[1485]: New session 12 of user core. Sep 9 03:23:31.830409 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 03:23:32.533714 sshd[4055]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:32.538873 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit. Sep 9 03:23:32.539715 systemd[1]: sshd@9-10.230.34.194:22-147.75.109.163:40728.service: Deactivated successfully. Sep 9 03:23:32.542445 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 03:23:32.544374 systemd-logind[1485]: Removed session 12. Sep 9 03:23:37.696694 systemd[1]: Started sshd@10-10.230.34.194:22-147.75.109.163:40740.service - OpenSSH per-connection server daemon (147.75.109.163:40740). Sep 9 03:23:38.606883 sshd[4069]: Accepted publickey for core from 147.75.109.163 port 40740 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:38.609484 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:38.617830 systemd-logind[1485]: New session 13 of user core. Sep 9 03:23:38.625415 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 03:23:39.340934 sshd[4069]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:39.345767 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit. Sep 9 03:23:39.346986 systemd[1]: sshd@10-10.230.34.194:22-147.75.109.163:40740.service: Deactivated successfully. Sep 9 03:23:39.350192 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 03:23:39.353259 systemd-logind[1485]: Removed session 13. Sep 9 03:23:39.501555 systemd[1]: Started sshd@11-10.230.34.194:22-147.75.109.163:40744.service - OpenSSH per-connection server daemon (147.75.109.163:40744). Sep 9 03:23:40.411998 sshd[4083]: Accepted publickey for core from 147.75.109.163 port 40744 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:40.414047 sshd[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:40.421236 systemd-logind[1485]: New session 14 of user core. Sep 9 03:23:40.428408 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 03:23:41.211038 sshd[4083]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:41.217443 systemd[1]: sshd@11-10.230.34.194:22-147.75.109.163:40744.service: Deactivated successfully. Sep 9 03:23:41.220407 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 03:23:41.222016 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit. Sep 9 03:23:41.223685 systemd-logind[1485]: Removed session 14. Sep 9 03:23:41.364342 systemd[1]: Started sshd@12-10.230.34.194:22-147.75.109.163:47646.service - OpenSSH per-connection server daemon (147.75.109.163:47646). Sep 9 03:23:42.281011 sshd[4094]: Accepted publickey for core from 147.75.109.163 port 47646 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:42.283140 sshd[4094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:42.289576 systemd-logind[1485]: New session 15 of user core. Sep 9 03:23:42.297583 systemd[1]: Started session-15.scope - Session 15 of User core. 
Sep 9 03:23:43.004705 sshd[4094]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:43.009883 systemd[1]: sshd@12-10.230.34.194:22-147.75.109.163:47646.service: Deactivated successfully. Sep 9 03:23:43.013018 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 03:23:43.015081 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit. Sep 9 03:23:43.016784 systemd-logind[1485]: Removed session 15. Sep 9 03:23:48.166642 systemd[1]: Started sshd@13-10.230.34.194:22-147.75.109.163:47658.service - OpenSSH per-connection server daemon (147.75.109.163:47658). Sep 9 03:23:49.060614 sshd[4107]: Accepted publickey for core from 147.75.109.163 port 47658 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:49.062853 sshd[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:49.070636 systemd-logind[1485]: New session 16 of user core. Sep 9 03:23:49.075383 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 03:23:49.795356 sshd[4107]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:49.802226 systemd[1]: sshd@13-10.230.34.194:22-147.75.109.163:47658.service: Deactivated successfully. Sep 9 03:23:49.805018 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 03:23:49.806045 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit. Sep 9 03:23:49.808119 systemd-logind[1485]: Removed session 16. Sep 9 03:23:54.956532 systemd[1]: Started sshd@14-10.230.34.194:22-147.75.109.163:46262.service - OpenSSH per-connection server daemon (147.75.109.163:46262). Sep 9 03:23:55.850263 sshd[4121]: Accepted publickey for core from 147.75.109.163 port 46262 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:55.853006 sshd[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:55.861498 systemd-logind[1485]: New session 17 of user core. Sep 9 03:23:55.866707 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 03:23:56.554545 sshd[4121]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:56.559243 systemd[1]: sshd@14-10.230.34.194:22-147.75.109.163:46262.service: Deactivated successfully. Sep 9 03:23:56.562010 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 03:23:56.566406 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit. Sep 9 03:23:56.568367 systemd-logind[1485]: Removed session 17. Sep 9 03:23:56.716668 systemd[1]: Started sshd@15-10.230.34.194:22-147.75.109.163:46276.service - OpenSSH per-connection server daemon (147.75.109.163:46276). Sep 9 03:23:57.605202 sshd[4138]: Accepted publickey for core from 147.75.109.163 port 46276 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:57.607447 sshd[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:57.613793 systemd-logind[1485]: New session 18 of user core. Sep 9 03:23:57.622380 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 9 03:23:58.591729 sshd[4138]: pam_unix(sshd:session): session closed for user core Sep 9 03:23:58.603833 systemd[1]: sshd@15-10.230.34.194:22-147.75.109.163:46276.service: Deactivated successfully. Sep 9 03:23:58.606443 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 03:23:58.608267 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit. Sep 9 03:23:58.610477 systemd-logind[1485]: Removed session 18. 
Sep 9 03:23:58.751853 systemd[1]: Started sshd@16-10.230.34.194:22-147.75.109.163:46290.service - OpenSSH per-connection server daemon (147.75.109.163:46290). Sep 9 03:23:59.670757 sshd[4149]: Accepted publickey for core from 147.75.109.163 port 46290 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:23:59.673308 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:23:59.683217 systemd-logind[1485]: New session 19 of user core. Sep 9 03:23:59.690515 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 03:24:02.383120 sshd[4149]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:02.391988 systemd[1]: sshd@16-10.230.34.194:22-147.75.109.163:46290.service: Deactivated successfully. Sep 9 03:24:02.395737 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 03:24:02.397281 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit. Sep 9 03:24:02.399778 systemd-logind[1485]: Removed session 19. Sep 9 03:24:02.560750 systemd[1]: Started sshd@17-10.230.34.194:22-147.75.109.163:48040.service - OpenSSH per-connection server daemon (147.75.109.163:48040). Sep 9 03:24:03.472615 sshd[4166]: Accepted publickey for core from 147.75.109.163 port 48040 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:03.475328 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:03.483465 systemd-logind[1485]: New session 20 of user core. Sep 9 03:24:03.490437 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 03:24:04.514635 sshd[4166]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:04.521708 systemd[1]: sshd@17-10.230.34.194:22-147.75.109.163:48040.service: Deactivated successfully. Sep 9 03:24:04.525405 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 03:24:04.527624 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit. Sep 9 03:24:04.529868 systemd-logind[1485]: Removed session 20. Sep 9 03:24:04.683097 systemd[1]: Started sshd@18-10.230.34.194:22-147.75.109.163:48050.service - OpenSSH per-connection server daemon (147.75.109.163:48050). Sep 9 03:24:05.570771 sshd[4177]: Accepted publickey for core from 147.75.109.163 port 48050 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:05.572948 sshd[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:05.580044 systemd-logind[1485]: New session 21 of user core. Sep 9 03:24:05.591445 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 03:24:06.319196 sshd[4177]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:06.323994 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit. Sep 9 03:24:06.324500 systemd[1]: sshd@18-10.230.34.194:22-147.75.109.163:48050.service: Deactivated successfully. Sep 9 03:24:06.327263 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 03:24:06.329755 systemd-logind[1485]: Removed session 21. Sep 9 03:24:11.478608 systemd[1]: Started sshd@19-10.230.34.194:22-147.75.109.163:53808.service - OpenSSH per-connection server daemon (147.75.109.163:53808). 
Sep 9 03:24:12.372641 sshd[4190]: Accepted publickey for core from 147.75.109.163 port 53808 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:12.375307 sshd[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:12.383494 systemd-logind[1485]: New session 22 of user core. Sep 9 03:24:12.393469 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 03:24:13.089415 sshd[4190]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:13.096936 systemd[1]: sshd@19-10.230.34.194:22-147.75.109.163:53808.service: Deactivated successfully. Sep 9 03:24:13.100591 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 03:24:13.102539 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit. Sep 9 03:24:13.104528 systemd-logind[1485]: Removed session 22. Sep 9 03:24:18.260528 systemd[1]: Started sshd@20-10.230.34.194:22-147.75.109.163:53824.service - OpenSSH per-connection server daemon (147.75.109.163:53824). Sep 9 03:24:19.176225 sshd[4206]: Accepted publickey for core from 147.75.109.163 port 53824 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:19.178429 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:19.185834 systemd-logind[1485]: New session 23 of user core. Sep 9 03:24:19.194587 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 03:24:19.897463 sshd[4206]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:19.902220 systemd[1]: sshd@20-10.230.34.194:22-147.75.109.163:53824.service: Deactivated successfully. Sep 9 03:24:19.904840 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 03:24:19.906026 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit. Sep 9 03:24:19.907762 systemd-logind[1485]: Removed session 23. Sep 9 03:24:25.054631 systemd[1]: Started sshd@21-10.230.34.194:22-147.75.109.163:42154.service - OpenSSH per-connection server daemon (147.75.109.163:42154). Sep 9 03:24:25.952222 sshd[4219]: Accepted publickey for core from 147.75.109.163 port 42154 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:25.954640 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:25.961536 systemd-logind[1485]: New session 24 of user core. Sep 9 03:24:25.969461 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 03:24:26.661775 sshd[4219]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:26.666908 systemd[1]: sshd@21-10.230.34.194:22-147.75.109.163:42154.service: Deactivated successfully. Sep 9 03:24:26.671681 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 03:24:26.673086 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit. Sep 9 03:24:26.674928 systemd-logind[1485]: Removed session 24. Sep 9 03:24:26.829547 systemd[1]: Started sshd@22-10.230.34.194:22-147.75.109.163:42162.service - OpenSSH per-connection server daemon (147.75.109.163:42162). Sep 9 03:24:27.789295 sshd[4234]: Accepted publickey for core from 147.75.109.163 port 42162 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:27.791344 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:27.798690 systemd-logind[1485]: New session 25 of user core. Sep 9 03:24:27.804400 systemd[1]: Started session-25.scope - Session 25 of User core. 
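Every SSH connection in this stretch follows the same shape: sshd accepts the public key, PAM opens the session, systemd-logind announces "New session N of user core", a session-N.scope unit runs, and the whole chain unwinds on disconnect. A disposable sketch that pairs the logind open/close messages and prints per-session durations; it assumes one journal entry per line with the "Sep 9 HH:MM:SS.ffffff" prefix used here, and hard-codes the year because the prefix carries none.

#!/usr/bin/env python3
"""Pair systemd-logind 'New session N' / 'Removed session N' journal lines
and print how long each SSH session lasted. Illustration only: assumes one
entry per line with the 'Sep 9 HH:MM:SS.ffffff' prefix seen in this log."""
import re
import sys
from datetime import datetime

STAMP = r'^(?P<mon>[A-Z][a-z]{2})\s+(?P<day>\d{1,2})\s+(?P<time>\d{2}:\d{2}:\d{2}\.\d+)'
NEW = re.compile(STAMP + r'.*systemd-logind\[\d+\]: New session (?P<sid>\d+) of user')
GONE = re.compile(STAMP + r'.*systemd-logind\[\d+\]: Removed session (?P<sid>\d+)\.')

def stamp(m):
    # The journal prefix has no year; 2025 is taken from the log content itself.
    return datetime.strptime(f"2025 {m['mon']} {m['day']} {m['time']}",
                             "%Y %b %d %H:%M:%S.%f")

opened = {}
for line in sys.stdin:
    if (m := NEW.search(line)):
        opened[m['sid']] = stamp(m)
    elif (m := GONE.search(line)) and m['sid'] in opened:
        started = opened.pop(m['sid'])
        print(f"session {m['sid']}: {(stamp(m) - started).total_seconds():.1f}s")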
Sep 9 03:24:30.322339 containerd[1500]: time="2025-09-09T03:24:30.319749952Z" level=info msg="StopContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" with timeout 30 (s)" Sep 9 03:24:30.322339 containerd[1500]: time="2025-09-09T03:24:30.320620633Z" level=info msg="Stop container \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" with signal terminated" Sep 9 03:24:30.333159 systemd[1]: run-containerd-runc-k8s.io-d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8-runc.ZlycXj.mount: Deactivated successfully. Sep 9 03:24:30.365372 systemd[1]: cri-containerd-5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759.scope: Deactivated successfully. Sep 9 03:24:30.383108 containerd[1500]: time="2025-09-09T03:24:30.383035837Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 03:24:30.399641 containerd[1500]: time="2025-09-09T03:24:30.399021179Z" level=info msg="StopContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" with timeout 2 (s)" Sep 9 03:24:30.400639 containerd[1500]: time="2025-09-09T03:24:30.400586889Z" level=info msg="Stop container \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" with signal terminated" Sep 9 03:24:30.419664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759-rootfs.mount: Deactivated successfully. Sep 9 03:24:30.423720 systemd-networkd[1429]: lxc_health: Link DOWN Sep 9 03:24:30.423731 systemd-networkd[1429]: lxc_health: Lost carrier Sep 9 03:24:30.431676 containerd[1500]: time="2025-09-09T03:24:30.431124927Z" level=info msg="shim disconnected" id=5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759 namespace=k8s.io Sep 9 03:24:30.431676 containerd[1500]: time="2025-09-09T03:24:30.431275798Z" level=warning msg="cleaning up after shim disconnected" id=5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759 namespace=k8s.io Sep 9 03:24:30.431676 containerd[1500]: time="2025-09-09T03:24:30.431302968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:30.444726 systemd[1]: cri-containerd-d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8.scope: Deactivated successfully. Sep 9 03:24:30.445101 systemd[1]: cri-containerd-d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8.scope: Consumed 10.677s CPU time. 
Sep 9 03:24:30.470839 containerd[1500]: time="2025-09-09T03:24:30.470608167Z" level=warning msg="cleanup warnings time=\"2025-09-09T03:24:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 03:24:30.474224 containerd[1500]: time="2025-09-09T03:24:30.474028857Z" level=info msg="StopContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" returns successfully" Sep 9 03:24:30.475206 containerd[1500]: time="2025-09-09T03:24:30.475147829Z" level=info msg="StopPodSandbox for \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\"" Sep 9 03:24:30.475291 containerd[1500]: time="2025-09-09T03:24:30.475236903Z" level=info msg="Container to stop \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.479266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8-shm.mount: Deactivated successfully. Sep 9 03:24:30.492061 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8-rootfs.mount: Deactivated successfully. Sep 9 03:24:30.497660 systemd[1]: cri-containerd-177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8.scope: Deactivated successfully. Sep 9 03:24:30.506261 containerd[1500]: time="2025-09-09T03:24:30.505963727Z" level=info msg="shim disconnected" id=d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8 namespace=k8s.io Sep 9 03:24:30.506261 containerd[1500]: time="2025-09-09T03:24:30.506025000Z" level=warning msg="cleaning up after shim disconnected" id=d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8 namespace=k8s.io Sep 9 03:24:30.506261 containerd[1500]: time="2025-09-09T03:24:30.506041921Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:30.536501 containerd[1500]: time="2025-09-09T03:24:30.536354915Z" level=warning msg="cleanup warnings time=\"2025-09-09T03:24:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 03:24:30.540523 containerd[1500]: time="2025-09-09T03:24:30.540408263Z" level=info msg="StopContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" returns successfully" Sep 9 03:24:30.541545 containerd[1500]: time="2025-09-09T03:24:30.541105023Z" level=info msg="StopPodSandbox for \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\"" Sep 9 03:24:30.541545 containerd[1500]: time="2025-09-09T03:24:30.541142240Z" level=info msg="Container to stop \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.541545 containerd[1500]: time="2025-09-09T03:24:30.541160394Z" level=info msg="Container to stop \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.541545 containerd[1500]: time="2025-09-09T03:24:30.541215602Z" level=info msg="Container to stop \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.541545 containerd[1500]: 
time="2025-09-09T03:24:30.541249461Z" level=info msg="Container to stop \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.541545 containerd[1500]: time="2025-09-09T03:24:30.541266405Z" level=info msg="Container to stop \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 9 03:24:30.544286 containerd[1500]: time="2025-09-09T03:24:30.543825542Z" level=info msg="shim disconnected" id=177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8 namespace=k8s.io Sep 9 03:24:30.544286 containerd[1500]: time="2025-09-09T03:24:30.543873947Z" level=warning msg="cleaning up after shim disconnected" id=177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8 namespace=k8s.io Sep 9 03:24:30.544286 containerd[1500]: time="2025-09-09T03:24:30.543888968Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:30.558117 systemd[1]: cri-containerd-ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e.scope: Deactivated successfully. Sep 9 03:24:30.572025 containerd[1500]: time="2025-09-09T03:24:30.571962694Z" level=warning msg="cleanup warnings time=\"2025-09-09T03:24:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Sep 9 03:24:30.588430 containerd[1500]: time="2025-09-09T03:24:30.587986323Z" level=info msg="TearDown network for sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" successfully" Sep 9 03:24:30.588430 containerd[1500]: time="2025-09-09T03:24:30.588031565Z" level=info msg="StopPodSandbox for \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" returns successfully" Sep 9 03:24:30.617808 containerd[1500]: time="2025-09-09T03:24:30.617721776Z" level=info msg="shim disconnected" id=ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e namespace=k8s.io Sep 9 03:24:30.617808 containerd[1500]: time="2025-09-09T03:24:30.617791898Z" level=warning msg="cleaning up after shim disconnected" id=ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e namespace=k8s.io Sep 9 03:24:30.617808 containerd[1500]: time="2025-09-09T03:24:30.617810687Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:30.648454 kubelet[2641]: I0909 03:24:30.648220 2641 scope.go:117] "RemoveContainer" containerID="5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759" Sep 9 03:24:30.655441 containerd[1500]: time="2025-09-09T03:24:30.655205624Z" level=info msg="TearDown network for sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" successfully" Sep 9 03:24:30.655441 containerd[1500]: time="2025-09-09T03:24:30.655364017Z" level=info msg="StopPodSandbox for \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" returns successfully" Sep 9 03:24:30.658073 containerd[1500]: time="2025-09-09T03:24:30.658040829Z" level=info msg="RemoveContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\"" Sep 9 03:24:30.667829 containerd[1500]: time="2025-09-09T03:24:30.667743359Z" level=info msg="RemoveContainer for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" returns successfully" Sep 9 03:24:30.668183 kubelet[2641]: I0909 03:24:30.668148 2641 scope.go:117] "RemoveContainer" 
containerID="5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759" Sep 9 03:24:30.670147 kubelet[2641]: I0909 03:24:30.669263 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkgk2\" (UniqueName: \"kubernetes.io/projected/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-kube-api-access-jkgk2\") pod \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\" (UID: \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\") " Sep 9 03:24:30.670147 kubelet[2641]: I0909 03:24:30.669476 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-cilium-config-path\") pod \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\" (UID: \"ea60a5bd-5f81-48b1-bf88-b01fb2b41621\") " Sep 9 03:24:30.699447 containerd[1500]: time="2025-09-09T03:24:30.674071947Z" level=error msg="ContainerStatus for \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\": not found" Sep 9 03:24:30.703137 kubelet[2641]: I0909 03:24:30.701861 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-kube-api-access-jkgk2" (OuterVolumeSpecName: "kube-api-access-jkgk2") pod "ea60a5bd-5f81-48b1-bf88-b01fb2b41621" (UID: "ea60a5bd-5f81-48b1-bf88-b01fb2b41621"). InnerVolumeSpecName "kube-api-access-jkgk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 03:24:30.706592 kubelet[2641]: I0909 03:24:30.701976 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ea60a5bd-5f81-48b1-bf88-b01fb2b41621" (UID: "ea60a5bd-5f81-48b1-bf88-b01fb2b41621"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 03:24:30.706914 kubelet[2641]: E0909 03:24:30.706725 2641 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\": not found" containerID="5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759" Sep 9 03:24:30.732758 kubelet[2641]: I0909 03:24:30.706816 2641 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759"} err="failed to get container status \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\": rpc error: code = NotFound desc = an error occurred when try to find container \"5084ee5aedf3d4a5d7c3babad3245e3af2f28fa2760dd8843daee40d0e102759\": not found" Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769701 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-hostproc\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769747 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-bpf-maps\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769778 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-kernel\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769801 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-lib-modules\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769845 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbtvb\" (UniqueName: \"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-kube-api-access-vbtvb\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769869 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cni-path\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769905 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-net\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769931 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-hubble-tls\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769956 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-run\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.769986 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1795d02-2165-41cf-9dd0-099a305a21b7-clustermesh-secrets\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770014 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-config-path\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770039 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-etc-cni-netd\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770038 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770062 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-cgroup\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770087 2641 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-xtables-lock\") pod \"e1795d02-2165-41cf-9dd0-099a305a21b7\" (UID: \"e1795d02-2165-41cf-9dd0-099a305a21b7\") " Sep 9 03:24:30.770209 kubelet[2641]: I0909 03:24:30.770100 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-hostproc" (OuterVolumeSpecName: "hostproc") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.771085 kubelet[2641]: I0909 03:24:30.770128 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.771085 kubelet[2641]: I0909 03:24:30.770148 2641 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-lib-modules\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.771085 kubelet[2641]: I0909 03:24:30.770157 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.771085 kubelet[2641]: I0909 03:24:30.770245 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.772422 kubelet[2641]: I0909 03:24:30.771267 2641 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-cilium-config-path\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.772422 kubelet[2641]: I0909 03:24:30.771295 2641 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkgk2\" (UniqueName: \"kubernetes.io/projected/ea60a5bd-5f81-48b1-bf88-b01fb2b41621-kube-api-access-jkgk2\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.774169 kubelet[2641]: I0909 03:24:30.774135 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-kube-api-access-vbtvb" (OuterVolumeSpecName: "kube-api-access-vbtvb") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "kube-api-access-vbtvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 03:24:30.774298 kubelet[2641]: I0909 03:24:30.774204 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cni-path" (OuterVolumeSpecName: "cni-path") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.774298 kubelet[2641]: I0909 03:24:30.774237 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.775418 kubelet[2641]: I0909 03:24:30.775388 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1795d02-2165-41cf-9dd0-099a305a21b7-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 03:24:30.777253 kubelet[2641]: I0909 03:24:30.777222 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 03:24:30.777351 kubelet[2641]: I0909 03:24:30.777275 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.777351 kubelet[2641]: I0909 03:24:30.777307 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.777513 kubelet[2641]: I0909 03:24:30.777350 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 03:24:30.779936 kubelet[2641]: I0909 03:24:30.779893 2641 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e1795d02-2165-41cf-9dd0-099a305a21b7" (UID: "e1795d02-2165-41cf-9dd0-099a305a21b7"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 03:24:30.872115 kubelet[2641]: I0909 03:24:30.872036 2641 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-net\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872115 kubelet[2641]: I0909 03:24:30.872114 2641 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-hubble-tls\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872135 2641 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e1795d02-2165-41cf-9dd0-099a305a21b7-clustermesh-secrets\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872159 2641 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-run\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872202 2641 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-config-path\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872220 2641 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-etc-cni-netd\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872356 2641 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cilium-cgroup\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872379 kubelet[2641]: I0909 03:24:30.872379 2641 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-xtables-lock\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872683 kubelet[2641]: I0909 03:24:30.872397 2641 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-hostproc\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872683 kubelet[2641]: I0909 03:24:30.872413 2641 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-bpf-maps\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872683 kubelet[2641]: I0909 03:24:30.872427 2641 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-host-proc-sys-kernel\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872683 kubelet[2641]: I0909 03:24:30.872444 2641 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbtvb\" (UniqueName: \"kubernetes.io/projected/e1795d02-2165-41cf-9dd0-099a305a21b7-kube-api-access-vbtvb\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.872683 kubelet[2641]: I0909 
03:24:30.872461 2641 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e1795d02-2165-41cf-9dd0-099a305a21b7-cni-path\") on node \"srv-hr091.gb1.brightbox.com\" DevicePath \"\"" Sep 9 03:24:30.921240 systemd[1]: Removed slice kubepods-burstable-pode1795d02_2165_41cf_9dd0_099a305a21b7.slice - libcontainer container kubepods-burstable-pode1795d02_2165_41cf_9dd0_099a305a21b7.slice. Sep 9 03:24:30.921406 systemd[1]: kubepods-burstable-pode1795d02_2165_41cf_9dd0_099a305a21b7.slice: Consumed 10.797s CPU time. Sep 9 03:24:30.924157 systemd[1]: Removed slice kubepods-besteffort-podea60a5bd_5f81_48b1_bf88_b01fb2b41621.slice - libcontainer container kubepods-besteffort-podea60a5bd_5f81_48b1_bf88_b01fb2b41621.slice. Sep 9 03:24:31.322141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e-rootfs.mount: Deactivated successfully. Sep 9 03:24:31.322324 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e-shm.mount: Deactivated successfully. Sep 9 03:24:31.322478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8-rootfs.mount: Deactivated successfully. Sep 9 03:24:31.322604 systemd[1]: var-lib-kubelet-pods-e1795d02\x2d2165\x2d41cf\x2d9dd0\x2d099a305a21b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbtvb.mount: Deactivated successfully. Sep 9 03:24:31.322741 systemd[1]: var-lib-kubelet-pods-e1795d02\x2d2165\x2d41cf\x2d9dd0\x2d099a305a21b7-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 03:24:31.322873 systemd[1]: var-lib-kubelet-pods-e1795d02\x2d2165\x2d41cf\x2d9dd0\x2d099a305a21b7-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 9 03:24:31.322993 systemd[1]: var-lib-kubelet-pods-ea60a5bd\x2d5f81\x2d48b1\x2dbf88\x2db01fb2b41621-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djkgk2.mount: Deactivated successfully. 
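The long .mount unit names in this cleanup are the kubelet volume paths run through systemd's path escaping: the leading slash is dropped, every remaining "/" becomes "-", and bytes outside roughly [A-Za-z0-9_.] are hex-escaped, which is why "-" appears as \x2d and "~" as \x7e. Below is a rough approximation of that transform, checked against the kube-api-access-vbtvb unit above; the authoritative rules are systemd's own (systemd-escape / systemd.unit), and edge cases such as a leading dot are ignored here.

#!/usr/bin/env python3
"""Approximate systemd path escaping to show where the .mount unit names
above come from. Sketch only; systemd-escape is the real implementation."""

def escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in "_."):
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))   # '-' -> \x2d, '~' -> \x7e
    return "".join(out)

# Volume path reconstructed by reading the unit name above backwards.
vol = ("/var/lib/kubelet/pods/e1795d02-2165-41cf-9dd0-099a305a21b7"
       "/volumes/kubernetes.io~projected/kube-api-access-vbtvb")
print(escape_path(vol) + ".mount")

Running it prints var-lib-kubelet-pods-e1795d02\x2d2165\x2d41cf\x2d9dd0\x2d099a305a21b7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbtvb.mount, the unit systemd deactivates above.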
Sep 9 03:24:31.647875 kubelet[2641]: I0909 03:24:31.647812 2641 scope.go:117] "RemoveContainer" containerID="d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8" Sep 9 03:24:31.667129 containerd[1500]: time="2025-09-09T03:24:31.665491313Z" level=info msg="RemoveContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\"" Sep 9 03:24:31.675150 containerd[1500]: time="2025-09-09T03:24:31.675104489Z" level=info msg="RemoveContainer for \"d77118a3416a17015000dff65104afebe0ec22eeeaf9e81fc30b0ff381930fb8\" returns successfully" Sep 9 03:24:31.675491 kubelet[2641]: I0909 03:24:31.675458 2641 scope.go:117] "RemoveContainer" containerID="e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999" Sep 9 03:24:31.679271 containerd[1500]: time="2025-09-09T03:24:31.679196698Z" level=info msg="RemoveContainer for \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\"" Sep 9 03:24:31.682757 containerd[1500]: time="2025-09-09T03:24:31.682724420Z" level=info msg="RemoveContainer for \"e8a92bb7a7ec35f5b5f5599f226b771d3393044375dae406bacdbe9dae34d999\" returns successfully" Sep 9 03:24:31.683028 kubelet[2641]: I0909 03:24:31.683000 2641 scope.go:117] "RemoveContainer" containerID="bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba" Sep 9 03:24:31.687687 containerd[1500]: time="2025-09-09T03:24:31.687614375Z" level=info msg="RemoveContainer for \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\"" Sep 9 03:24:31.694139 containerd[1500]: time="2025-09-09T03:24:31.694074040Z" level=info msg="RemoveContainer for \"bd729edb4ddaacaf2f64c5216e0398965a25ba6c396c6b034de7cddd64a4e0ba\" returns successfully" Sep 9 03:24:31.694782 kubelet[2641]: I0909 03:24:31.694674 2641 scope.go:117] "RemoveContainer" containerID="1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e" Sep 9 03:24:31.697197 containerd[1500]: time="2025-09-09T03:24:31.697117764Z" level=info msg="RemoveContainer for \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\"" Sep 9 03:24:31.700784 containerd[1500]: time="2025-09-09T03:24:31.700645266Z" level=info msg="RemoveContainer for \"1a5846bf1dd8663af6d13c21381226a1e805429a07a1251c973c592d622d5d4e\" returns successfully" Sep 9 03:24:31.701204 kubelet[2641]: I0909 03:24:31.700912 2641 scope.go:117] "RemoveContainer" containerID="8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a" Sep 9 03:24:31.702907 containerd[1500]: time="2025-09-09T03:24:31.702528059Z" level=info msg="RemoveContainer for \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\"" Sep 9 03:24:31.705762 containerd[1500]: time="2025-09-09T03:24:31.705723923Z" level=info msg="RemoveContainer for \"8ace94bdac76d8a2e3a42f02072a4d2804e59df4310492f76cf64c5a0c1f2b7a\" returns successfully" Sep 9 03:24:32.324625 sshd[4234]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:32.330492 systemd[1]: sshd@22-10.230.34.194:22-147.75.109.163:42162.service: Deactivated successfully. Sep 9 03:24:32.333491 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 03:24:32.333735 systemd[1]: session-25.scope: Consumed 1.272s CPU time. Sep 9 03:24:32.334864 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit. Sep 9 03:24:32.336897 systemd-logind[1485]: Removed session 25. Sep 9 03:24:32.480388 systemd[1]: Started sshd@23-10.230.34.194:22-147.75.109.163:44342.service - OpenSSH per-connection server daemon (147.75.109.163:44342). 
Sep 9 03:24:32.912442 kubelet[2641]: I0909 03:24:32.912074 2641 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" path="/var/lib/kubelet/pods/e1795d02-2165-41cf-9dd0-099a305a21b7/volumes" Sep 9 03:24:32.914237 kubelet[2641]: I0909 03:24:32.914210 2641 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea60a5bd-5f81-48b1-bf88-b01fb2b41621" path="/var/lib/kubelet/pods/ea60a5bd-5f81-48b1-bf88-b01fb2b41621/volumes" Sep 9 03:24:33.395803 sshd[4394]: Accepted publickey for core from 147.75.109.163 port 44342 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:33.397941 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:33.406012 systemd-logind[1485]: New session 26 of user core. Sep 9 03:24:33.410403 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 03:24:34.074468 kubelet[2641]: E0909 03:24:34.074362 2641 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 03:24:34.541079 kubelet[2641]: E0909 03:24:34.540994 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="mount-cgroup" Sep 9 03:24:34.541079 kubelet[2641]: E0909 03:24:34.541062 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="mount-bpf-fs" Sep 9 03:24:34.541079 kubelet[2641]: E0909 03:24:34.541078 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="cilium-agent" Sep 9 03:24:34.541079 kubelet[2641]: E0909 03:24:34.541100 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ea60a5bd-5f81-48b1-bf88-b01fb2b41621" containerName="cilium-operator" Sep 9 03:24:34.541722 kubelet[2641]: E0909 03:24:34.541130 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="apply-sysctl-overwrites" Sep 9 03:24:34.541722 kubelet[2641]: E0909 03:24:34.541144 2641 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="clean-cilium-state" Sep 9 03:24:34.541722 kubelet[2641]: I0909 03:24:34.541242 2641 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea60a5bd-5f81-48b1-bf88-b01fb2b41621" containerName="cilium-operator" Sep 9 03:24:34.541722 kubelet[2641]: I0909 03:24:34.541277 2641 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1795d02-2165-41cf-9dd0-099a305a21b7" containerName="cilium-agent" Sep 9 03:24:34.600414 kubelet[2641]: I0909 03:24:34.599416 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-etc-cni-netd\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.600414 kubelet[2641]: I0909 03:24:34.599479 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-hubble-tls\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.600414 kubelet[2641]: I0909 
03:24:34.599519 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-clustermesh-secrets\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.600414 kubelet[2641]: I0909 03:24:34.599566 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-host-proc-sys-kernel\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.600927 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-cilium-run\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.600994 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-hostproc\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601061 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-cilium-config-path\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601105 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-host-proc-sys-net\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601139 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-cni-path\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601224 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-lib-modules\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601277 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-xtables-lock\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601316 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-cilium-ipsec-secrets\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601370 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-cilium-cgroup\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601415 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-bpf-maps\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.602188 kubelet[2641]: I0909 03:24:34.601491 2641 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvms7\" (UniqueName: \"kubernetes.io/projected/2d9dea42-71ca-4a29-97cb-16ef2c4f0263-kube-api-access-hvms7\") pod \"cilium-cl6wh\" (UID: \"2d9dea42-71ca-4a29-97cb-16ef2c4f0263\") " pod="kube-system/cilium-cl6wh" Sep 9 03:24:34.614149 systemd[1]: Created slice kubepods-burstable-pod2d9dea42_71ca_4a29_97cb_16ef2c4f0263.slice - libcontainer container kubepods-burstable-pod2d9dea42_71ca_4a29_97cb_16ef2c4f0263.slice. Sep 9 03:24:34.643588 sshd[4394]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:34.650947 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit. Sep 9 03:24:34.654849 systemd[1]: sshd@23-10.230.34.194:22-147.75.109.163:44342.service: Deactivated successfully. Sep 9 03:24:34.660109 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 03:24:34.663190 systemd-logind[1485]: Removed session 26. Sep 9 03:24:34.800946 systemd[1]: Started sshd@24-10.230.34.194:22-147.75.109.163:44350.service - OpenSSH per-connection server daemon (147.75.109.163:44350). Sep 9 03:24:34.928241 containerd[1500]: time="2025-09-09T03:24:34.928109901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cl6wh,Uid:2d9dea42-71ca-4a29-97cb-16ef2c4f0263,Namespace:kube-system,Attempt:0,}" Sep 9 03:24:34.970696 containerd[1500]: time="2025-09-09T03:24:34.970374526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 03:24:34.970696 containerd[1500]: time="2025-09-09T03:24:34.970466562Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 03:24:34.970696 containerd[1500]: time="2025-09-09T03:24:34.970485690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:24:34.970696 containerd[1500]: time="2025-09-09T03:24:34.970626622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 03:24:34.998421 systemd[1]: Started cri-containerd-428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39.scope - libcontainer container 428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39. 
Sep 9 03:24:35.044753 containerd[1500]: time="2025-09-09T03:24:35.044533540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cl6wh,Uid:2d9dea42-71ca-4a29-97cb-16ef2c4f0263,Namespace:kube-system,Attempt:0,} returns sandbox id \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\"" Sep 9 03:24:35.050954 containerd[1500]: time="2025-09-09T03:24:35.050899424Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 9 03:24:35.066100 containerd[1500]: time="2025-09-09T03:24:35.065968240Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1\"" Sep 9 03:24:35.067184 containerd[1500]: time="2025-09-09T03:24:35.067115996Z" level=info msg="StartContainer for \"1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1\"" Sep 9 03:24:35.114472 systemd[1]: Started cri-containerd-1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1.scope - libcontainer container 1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1. Sep 9 03:24:35.158213 containerd[1500]: time="2025-09-09T03:24:35.158059254Z" level=info msg="StartContainer for \"1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1\" returns successfully" Sep 9 03:24:35.185591 systemd[1]: cri-containerd-1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1.scope: Deactivated successfully. Sep 9 03:24:35.237343 containerd[1500]: time="2025-09-09T03:24:35.236886158Z" level=info msg="shim disconnected" id=1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1 namespace=k8s.io Sep 9 03:24:35.237343 containerd[1500]: time="2025-09-09T03:24:35.237043111Z" level=warning msg="cleaning up after shim disconnected" id=1cea2f921b0c7bfff93da7e2d7f144176f07ca3377a2b6d73cb86ecf1da93ac1 namespace=k8s.io Sep 9 03:24:35.237343 containerd[1500]: time="2025-09-09T03:24:35.237071662Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:35.666466 containerd[1500]: time="2025-09-09T03:24:35.666341201Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 9 03:24:35.683485 containerd[1500]: time="2025-09-09T03:24:35.683437325Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078\"" Sep 9 03:24:35.684011 containerd[1500]: time="2025-09-09T03:24:35.683905248Z" level=info msg="StartContainer for \"d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078\"" Sep 9 03:24:35.702212 sshd[4410]: Accepted publickey for core from 147.75.109.163 port 44350 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:35.705352 sshd[4410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:35.731026 systemd-logind[1485]: New session 27 of user core. Sep 9 03:24:35.741401 systemd[1]: Started session-27.scope - Session 27 of User core. 
Sep 9 03:24:35.759438 systemd[1]: Started cri-containerd-d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078.scope - libcontainer container d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078. Sep 9 03:24:35.803531 containerd[1500]: time="2025-09-09T03:24:35.803412228Z" level=info msg="StartContainer for \"d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078\" returns successfully" Sep 9 03:24:35.816895 systemd[1]: cri-containerd-d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078.scope: Deactivated successfully. Sep 9 03:24:35.848849 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078-rootfs.mount: Deactivated successfully. Sep 9 03:24:35.857630 containerd[1500]: time="2025-09-09T03:24:35.857553516Z" level=info msg="shim disconnected" id=d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078 namespace=k8s.io Sep 9 03:24:35.857630 containerd[1500]: time="2025-09-09T03:24:35.857623590Z" level=warning msg="cleaning up after shim disconnected" id=d756c1047d8ee42facdae8ee720f21d11843217ed903f7f924eb8184944d3078 namespace=k8s.io Sep 9 03:24:35.859419 containerd[1500]: time="2025-09-09T03:24:35.857642413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:36.320567 sshd[4410]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:36.325941 systemd[1]: sshd@24-10.230.34.194:22-147.75.109.163:44350.service: Deactivated successfully. Sep 9 03:24:36.329514 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 03:24:36.332275 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit. Sep 9 03:24:36.333915 systemd-logind[1485]: Removed session 27. Sep 9 03:24:36.481862 systemd[1]: Started sshd@25-10.230.34.194:22-147.75.109.163:44362.service - OpenSSH per-connection server daemon (147.75.109.163:44362). Sep 9 03:24:36.674823 containerd[1500]: time="2025-09-09T03:24:36.674684998Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 9 03:24:36.699438 containerd[1500]: time="2025-09-09T03:24:36.699378659Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d\"" Sep 9 03:24:36.703217 containerd[1500]: time="2025-09-09T03:24:36.701912269Z" level=info msg="StartContainer for \"1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d\"" Sep 9 03:24:36.760437 systemd[1]: Started cri-containerd-1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d.scope - libcontainer container 1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d. Sep 9 03:24:36.809320 containerd[1500]: time="2025-09-09T03:24:36.809259026Z" level=info msg="StartContainer for \"1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d\" returns successfully" Sep 9 03:24:36.815754 systemd[1]: cri-containerd-1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d.scope: Deactivated successfully. Sep 9 03:24:36.852847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d-rootfs.mount: Deactivated successfully. 
Sep 9 03:24:36.856481 containerd[1500]: time="2025-09-09T03:24:36.856375566Z" level=info msg="shim disconnected" id=1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d namespace=k8s.io Sep 9 03:24:36.856481 containerd[1500]: time="2025-09-09T03:24:36.856454882Z" level=warning msg="cleaning up after shim disconnected" id=1897bdcbaa4b8473822c59066366c0843072809348b97abe01259537f700523d namespace=k8s.io Sep 9 03:24:36.856481 containerd[1500]: time="2025-09-09T03:24:36.856470296Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:37.376460 sshd[4585]: Accepted publickey for core from 147.75.109.163 port 44362 ssh2: RSA SHA256:3hTcz/48zUeeQn500raM6v2vtJJQJrvIu4rGfKfvnS4 Sep 9 03:24:37.379987 sshd[4585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 03:24:37.388727 systemd-logind[1485]: New session 28 of user core. Sep 9 03:24:37.398464 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 03:24:37.678803 containerd[1500]: time="2025-09-09T03:24:37.678521033Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 9 03:24:37.699198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249823361.mount: Deactivated successfully. Sep 9 03:24:37.701700 containerd[1500]: time="2025-09-09T03:24:37.701643300Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166\"" Sep 9 03:24:37.704213 containerd[1500]: time="2025-09-09T03:24:37.704155425Z" level=info msg="StartContainer for \"60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166\"" Sep 9 03:24:37.758432 systemd[1]: Started cri-containerd-60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166.scope - libcontainer container 60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166. Sep 9 03:24:37.795380 systemd[1]: cri-containerd-60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166.scope: Deactivated successfully. Sep 9 03:24:37.797517 containerd[1500]: time="2025-09-09T03:24:37.797455235Z" level=info msg="StartContainer for \"60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166\" returns successfully" Sep 9 03:24:37.826773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166-rootfs.mount: Deactivated successfully. 
Sep 9 03:24:37.847581 containerd[1500]: time="2025-09-09T03:24:37.847492624Z" level=info msg="shim disconnected" id=60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166 namespace=k8s.io Sep 9 03:24:37.848130 containerd[1500]: time="2025-09-09T03:24:37.847857882Z" level=warning msg="cleaning up after shim disconnected" id=60dfca8d44001922b54f95916b7940a8fe3973394d25f416f2e49d6a25ae0166 namespace=k8s.io Sep 9 03:24:37.848130 containerd[1500]: time="2025-09-09T03:24:37.847887781Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 03:24:38.398161 update_engine[1486]: I20250909 03:24:38.397951 1486 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 9 03:24:38.398161 update_engine[1486]: I20250909 03:24:38.398061 1486 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 9 03:24:38.404391 update_engine[1486]: I20250909 03:24:38.403795 1486 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 9 03:24:38.404677 update_engine[1486]: I20250909 03:24:38.404633 1486 omaha_request_params.cc:62] Current group set to lts Sep 9 03:24:38.404914 update_engine[1486]: I20250909 03:24:38.404863 1486 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 9 03:24:38.404914 update_engine[1486]: I20250909 03:24:38.404903 1486 update_attempter.cc:643] Scheduling an action processor start. Sep 9 03:24:38.405020 update_engine[1486]: I20250909 03:24:38.404943 1486 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Sep 9 03:24:38.405073 update_engine[1486]: I20250909 03:24:38.405028 1486 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Sep 9 03:24:38.405189 update_engine[1486]: I20250909 03:24:38.405123 1486 omaha_request_action.cc:271] Posting an Omaha request to disabled Sep 9 03:24:38.405189 update_engine[1486]: I20250909 03:24:38.405153 1486 omaha_request_action.cc:272] Request: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405189 update_engine[1486]: Sep 9 03:24:38.405637 update_engine[1486]: I20250909 03:24:38.405180 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 03:24:38.416661 update_engine[1486]: I20250909 03:24:38.415434 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 03:24:38.416661 update_engine[1486]: I20250909 03:24:38.415880 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Sep 9 03:24:38.430795 update_engine[1486]: E20250909 03:24:38.430624 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 03:24:38.430795 update_engine[1486]: I20250909 03:24:38.430748 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Sep 9 03:24:38.435522 locksmithd[1520]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Sep 9 03:24:38.684508 containerd[1500]: time="2025-09-09T03:24:38.684356627Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 9 03:24:38.711875 containerd[1500]: time="2025-09-09T03:24:38.711704461Z" level=info msg="CreateContainer within sandbox \"428f5d9bc0c03e7d86d2ea631c32b5e72b38447faaee4b485c4be0cc47eefd39\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9\"" Sep 9 03:24:38.713216 containerd[1500]: time="2025-09-09T03:24:38.712513220Z" level=info msg="StartContainer for \"d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9\"" Sep 9 03:24:38.768418 systemd[1]: Started cri-containerd-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9.scope - libcontainer container d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9. Sep 9 03:24:38.816199 containerd[1500]: time="2025-09-09T03:24:38.814767084Z" level=info msg="StartContainer for \"d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9\" returns successfully" Sep 9 03:24:39.512253 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Sep 9 03:24:40.388300 systemd[1]: run-containerd-runc-k8s.io-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9-runc.NBS4VM.mount: Deactivated successfully. Sep 9 03:24:42.631586 systemd[1]: run-containerd-runc-k8s.io-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9-runc.cyFf27.mount: Deactivated successfully. Sep 9 03:24:43.502732 systemd-networkd[1429]: lxc_health: Link UP Sep 9 03:24:43.511819 systemd-networkd[1429]: lxc_health: Gained carrier Sep 9 03:24:44.875639 systemd-networkd[1429]: lxc_health: Gained IPv6LL Sep 9 03:24:45.007501 kubelet[2641]: I0909 03:24:45.006480 2641 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-cl6wh" podStartSLOduration=11.006403862 podStartE2EDuration="11.006403862s" podCreationTimestamp="2025-09-09 03:24:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 03:24:39.74477342 +0000 UTC m=+171.040372641" watchObservedRunningTime="2025-09-09 03:24:45.006403862 +0000 UTC m=+176.302003066" Sep 9 03:24:45.107904 systemd[1]: run-containerd-runc-k8s.io-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9-runc.tLmImy.mount: Deactivated successfully. Sep 9 03:24:47.375486 systemd[1]: run-containerd-runc-k8s.io-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9-runc.ivhXjD.mount: Deactivated successfully. 
Sep 9 03:24:48.358259 update_engine[1486]: I20250909 03:24:48.356827 1486 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Sep 9 03:24:48.358259 update_engine[1486]: I20250909 03:24:48.357631 1486 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Sep 9 03:24:48.359148 update_engine[1486]: I20250909 03:24:48.359112 1486 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Sep 9 03:24:48.359613 update_engine[1486]: E20250909 03:24:48.359577 1486 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 03:24:48.359795 update_engine[1486]: I20250909 03:24:48.359763 1486 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 9 03:24:48.892698 containerd[1500]: time="2025-09-09T03:24:48.892568449Z" level=info msg="StopPodSandbox for \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\"" Sep 9 03:24:48.895323 containerd[1500]: time="2025-09-09T03:24:48.893598078Z" level=info msg="TearDown network for sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" successfully" Sep 9 03:24:48.895323 containerd[1500]: time="2025-09-09T03:24:48.893635237Z" level=info msg="StopPodSandbox for \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" returns successfully" Sep 9 03:24:48.895323 containerd[1500]: time="2025-09-09T03:24:48.894701287Z" level=info msg="RemovePodSandbox for \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\"" Sep 9 03:24:48.895323 containerd[1500]: time="2025-09-09T03:24:48.894756441Z" level=info msg="Forcibly stopping sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\"" Sep 9 03:24:48.895323 containerd[1500]: time="2025-09-09T03:24:48.894820283Z" level=info msg="TearDown network for sandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" successfully" Sep 9 03:24:48.906904 containerd[1500]: time="2025-09-09T03:24:48.906593501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 9 03:24:48.906904 containerd[1500]: time="2025-09-09T03:24:48.906696031Z" level=info msg="RemovePodSandbox \"177b9521a822b3c83bfb3ab45f3deb35b0d7562d53077caef1c1d0b855e026d8\" returns successfully" Sep 9 03:24:48.909235 containerd[1500]: time="2025-09-09T03:24:48.907666078Z" level=info msg="StopPodSandbox for \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\"" Sep 9 03:24:48.909235 containerd[1500]: time="2025-09-09T03:24:48.907788575Z" level=info msg="TearDown network for sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" successfully" Sep 9 03:24:48.909235 containerd[1500]: time="2025-09-09T03:24:48.907809321Z" level=info msg="StopPodSandbox for \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" returns successfully" Sep 9 03:24:48.911207 containerd[1500]: time="2025-09-09T03:24:48.910301581Z" level=info msg="RemovePodSandbox for \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\"" Sep 9 03:24:48.911207 containerd[1500]: time="2025-09-09T03:24:48.910350178Z" level=info msg="Forcibly stopping sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\"" Sep 9 03:24:48.911207 containerd[1500]: time="2025-09-09T03:24:48.910431150Z" level=info msg="TearDown network for sandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" successfully" Sep 9 03:24:48.918400 containerd[1500]: time="2025-09-09T03:24:48.918304618Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 03:24:48.918747 containerd[1500]: time="2025-09-09T03:24:48.918701267Z" level=info msg="RemovePodSandbox \"ae35501d210e79f64777afefc3e1822a84a322fb19df836c7a4d11693db6915e\" returns successfully" Sep 9 03:24:49.760844 kubelet[2641]: E0909 03:24:49.760619 2641 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48832->127.0.0.1:42149: write tcp 127.0.0.1:48832->127.0.0.1:42149: write: broken pipe Sep 9 03:24:51.850520 systemd[1]: run-containerd-runc-k8s.io-d4edfe161ba8d54ff9dea63113e4b04dcd71777c190fca568ca3758a2a7358b9-runc.KMjWLK.mount: Deactivated successfully. Sep 9 03:24:52.061851 sshd[4585]: pam_unix(sshd:session): session closed for user core Sep 9 03:24:52.068540 systemd[1]: sshd@25-10.230.34.194:22-147.75.109.163:44362.service: Deactivated successfully. Sep 9 03:24:52.071375 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 03:24:52.073751 systemd-logind[1485]: Session 28 logged out. Waiting for processes to exit. Sep 9 03:24:52.075703 systemd-logind[1485]: Removed session 28.