Dec 16 16:53:48.967096 kernel: Linux version 6.12.61-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 12 15:21:28 -00 2025 Dec 16 16:53:48.967152 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:53:48.967167 kernel: BIOS-provided physical RAM map: Dec 16 16:53:48.967178 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Dec 16 16:53:48.967193 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Dec 16 16:53:48.967215 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Dec 16 16:53:48.967227 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Dec 16 16:53:48.967238 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Dec 16 16:53:48.967249 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Dec 16 16:53:48.967259 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Dec 16 16:53:48.967270 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Dec 16 16:53:48.967281 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Dec 16 16:53:48.967291 kernel: NX (Execute Disable) protection: active Dec 16 16:53:48.967307 kernel: APIC: Static calls initialized Dec 16 16:53:48.967320 kernel: SMBIOS 2.8 present. Dec 16 16:53:48.967332 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Dec 16 16:53:48.967344 kernel: DMI: Memory slots populated: 1/1 Dec 16 16:53:48.967355 kernel: Hypervisor detected: KVM Dec 16 16:53:48.967366 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 16:53:48.967382 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Dec 16 16:53:48.967393 kernel: kvm-clock: using sched offset of 5931127684 cycles Dec 16 16:53:48.967406 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Dec 16 16:53:48.967418 kernel: tsc: Detected 2499.998 MHz processor Dec 16 16:53:48.967430 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Dec 16 16:53:48.967442 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Dec 16 16:53:48.967453 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Dec 16 16:53:48.967465 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Dec 16 16:53:48.967477 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Dec 16 16:53:48.967493 kernel: Using GB pages for direct mapping Dec 16 16:53:48.967505 kernel: ACPI: Early table checksum verification disabled Dec 16 16:53:48.967516 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Dec 16 16:53:48.967528 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967540 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967552 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967564 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Dec 16 16:53:48.967575 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 
BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967587 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967603 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967615 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Dec 16 16:53:48.967627 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Dec 16 16:53:48.967644 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Dec 16 16:53:48.967656 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Dec 16 16:53:48.967668 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Dec 16 16:53:48.967684 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Dec 16 16:53:48.967697 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Dec 16 16:53:48.967709 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Dec 16 16:53:48.967721 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Dec 16 16:53:48.967733 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Dec 16 16:53:48.967745 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Dec 16 16:53:48.967758 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00001000-0x7ffdbfff] Dec 16 16:53:48.967770 kernel: NODE_DATA(0) allocated [mem 0x7ffd4dc0-0x7ffdbfff] Dec 16 16:53:48.967787 kernel: Zone ranges: Dec 16 16:53:48.968371 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 16 16:53:48.968387 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Dec 16 16:53:48.968399 kernel: Normal empty Dec 16 16:53:48.968412 kernel: Device empty Dec 16 16:53:48.968424 kernel: Movable zone start for each node Dec 16 16:53:48.968436 kernel: Early memory node ranges Dec 16 16:53:48.968448 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Dec 16 16:53:48.968472 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Dec 16 16:53:48.968490 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Dec 16 16:53:48.968503 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 16 16:53:48.968515 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Dec 16 16:53:48.968540 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Dec 16 16:53:48.968552 kernel: ACPI: PM-Timer IO Port: 0x608 Dec 16 16:53:48.968564 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Dec 16 16:53:48.968576 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Dec 16 16:53:48.968601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Dec 16 16:53:48.968613 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Dec 16 16:53:48.968624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 16 16:53:48.968641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Dec 16 16:53:48.968666 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Dec 16 16:53:48.968678 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 16 16:53:48.968690 kernel: TSC deadline timer available Dec 16 16:53:48.968702 kernel: CPU topo: Max. logical packages: 16 Dec 16 16:53:48.968714 kernel: CPU topo: Max. logical dies: 16 Dec 16 16:53:48.968726 kernel: CPU topo: Max. dies per package: 1 Dec 16 16:53:48.968738 kernel: CPU topo: Max. 
threads per core: 1 Dec 16 16:53:48.968750 kernel: CPU topo: Num. cores per package: 1 Dec 16 16:53:48.968767 kernel: CPU topo: Num. threads per package: 1 Dec 16 16:53:48.968779 kernel: CPU topo: Allowing 2 present CPUs plus 14 hotplug CPUs Dec 16 16:53:48.968791 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Dec 16 16:53:48.968803 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Dec 16 16:53:48.968815 kernel: Booting paravirtualized kernel on KVM Dec 16 16:53:48.968828 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 16 16:53:48.968840 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Dec 16 16:53:48.968873 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u262144 Dec 16 16:53:48.968885 kernel: pcpu-alloc: s207832 r8192 d29736 u262144 alloc=1*2097152 Dec 16 16:53:48.968904 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Dec 16 16:53:48.968916 kernel: kvm-guest: PV spinlocks enabled Dec 16 16:53:48.968928 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 16 16:53:48.968942 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:53:48.968955 kernel: random: crng init done Dec 16 16:53:48.968967 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 16 16:53:48.968979 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 16 16:53:48.968992 kernel: Fallback order for Node 0: 0 Dec 16 16:53:48.969008 kernel: Built 1 zonelists, mobility grouping on. Total pages: 524154 Dec 16 16:53:48.969021 kernel: Policy zone: DMA32 Dec 16 16:53:48.969033 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 16 16:53:48.969045 kernel: software IO TLB: area num 16. Dec 16 16:53:48.969057 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Dec 16 16:53:48.969070 kernel: Kernel/User page tables isolation: enabled Dec 16 16:53:48.969082 kernel: ftrace: allocating 40103 entries in 157 pages Dec 16 16:53:48.969094 kernel: ftrace: allocated 157 pages with 5 groups Dec 16 16:53:48.969106 kernel: Dynamic Preempt: voluntary Dec 16 16:53:48.969123 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 16 16:53:48.969136 kernel: rcu: RCU event tracing is enabled. Dec 16 16:53:48.969148 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Dec 16 16:53:48.969161 kernel: Trampoline variant of Tasks RCU enabled. Dec 16 16:53:48.969173 kernel: Rude variant of Tasks RCU enabled. Dec 16 16:53:48.969186 kernel: Tracing variant of Tasks RCU enabled. Dec 16 16:53:48.969211 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 16 16:53:48.969224 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Dec 16 16:53:48.969236 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 16:53:48.969254 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. 
Dec 16 16:53:48.969266 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Dec 16 16:53:48.969279 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Dec 16 16:53:48.969291 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 16 16:53:48.969314 kernel: Console: colour VGA+ 80x25 Dec 16 16:53:48.969331 kernel: printk: legacy console [tty0] enabled Dec 16 16:53:48.969344 kernel: printk: legacy console [ttyS0] enabled Dec 16 16:53:48.969357 kernel: ACPI: Core revision 20240827 Dec 16 16:53:48.969369 kernel: APIC: Switch to symmetric I/O mode setup Dec 16 16:53:48.969382 kernel: x2apic enabled Dec 16 16:53:48.969395 kernel: APIC: Switched APIC routing to: physical x2apic Dec 16 16:53:48.969408 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 16:53:48.969425 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Dec 16 16:53:48.969438 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Dec 16 16:53:48.969451 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Dec 16 16:53:48.969464 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Dec 16 16:53:48.969477 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 16 16:53:48.969493 kernel: Spectre V2 : Mitigation: Retpolines Dec 16 16:53:48.969506 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Dec 16 16:53:48.969519 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Dec 16 16:53:48.969532 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Dec 16 16:53:48.969544 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Dec 16 16:53:48.969557 kernel: MDS: Mitigation: Clear CPU buffers Dec 16 16:53:48.969570 kernel: MMIO Stale Data: Unknown: No mitigations Dec 16 16:53:48.969582 kernel: SRBDS: Unknown: Dependent on hypervisor status Dec 16 16:53:48.969595 kernel: active return thunk: its_return_thunk Dec 16 16:53:48.969607 kernel: ITS: Mitigation: Aligned branch/return thunks Dec 16 16:53:48.969620 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 16 16:53:48.969637 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 16 16:53:48.969650 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 16 16:53:48.969663 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 16 16:53:48.969676 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Dec 16 16:53:48.969688 kernel: Freeing SMP alternatives memory: 32K Dec 16 16:53:48.969701 kernel: pid_max: default: 32768 minimum: 301 Dec 16 16:53:48.969714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Dec 16 16:53:48.969726 kernel: landlock: Up and running. Dec 16 16:53:48.969739 kernel: SELinux: Initializing. Dec 16 16:53:48.969752 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 16:53:48.969764 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Dec 16 16:53:48.969781 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Dec 16 16:53:48.969821 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. 
Dec 16 16:53:48.969838 kernel: signal: max sigframe size: 1776 Dec 16 16:53:48.969851 kernel: rcu: Hierarchical SRCU implementation. Dec 16 16:53:48.969865 kernel: rcu: Max phase no-delay instances is 400. Dec 16 16:53:48.969878 kernel: Timer migration: 2 hierarchy levels; 8 children per group; 2 crossnode level Dec 16 16:53:48.969891 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 16 16:53:48.969904 kernel: smp: Bringing up secondary CPUs ... Dec 16 16:53:48.969916 kernel: smpboot: x86: Booting SMP configuration: Dec 16 16:53:48.969929 kernel: .... node #0, CPUs: #1 Dec 16 16:53:48.969948 kernel: smp: Brought up 1 node, 2 CPUs Dec 16 16:53:48.969961 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Dec 16 16:53:48.969974 kernel: Memory: 1887488K/2096616K available (14336K kernel code, 2444K rwdata, 26064K rodata, 46188K init, 2572K bss, 203112K reserved, 0K cma-reserved) Dec 16 16:53:48.969987 kernel: devtmpfs: initialized Dec 16 16:53:48.970000 kernel: x86/mm: Memory block size: 128MB Dec 16 16:53:48.970013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 16 16:53:48.970026 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Dec 16 16:53:48.970039 kernel: pinctrl core: initialized pinctrl subsystem Dec 16 16:53:48.970056 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 16 16:53:48.970069 kernel: audit: initializing netlink subsys (disabled) Dec 16 16:53:48.970082 kernel: audit: type=2000 audit(1765904025.264:1): state=initialized audit_enabled=0 res=1 Dec 16 16:53:48.970095 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 16 16:53:48.970108 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 16 16:53:48.970121 kernel: cpuidle: using governor menu Dec 16 16:53:48.970133 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 16 16:53:48.970147 kernel: dca service started, version 1.12.1 Dec 16 16:53:48.970159 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Dec 16 16:53:48.970176 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Dec 16 16:53:48.970189 kernel: PCI: Using configuration type 1 for base access Dec 16 16:53:48.970214 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Dec 16 16:53:48.970227 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 16 16:53:48.970240 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 16 16:53:48.970253 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 16 16:53:48.970266 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 16 16:53:48.970278 kernel: ACPI: Added _OSI(Module Device) Dec 16 16:53:48.970291 kernel: ACPI: Added _OSI(Processor Device) Dec 16 16:53:48.970309 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 16 16:53:48.970322 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 16 16:53:48.970335 kernel: ACPI: Interpreter enabled Dec 16 16:53:48.970347 kernel: ACPI: PM: (supports S0 S5) Dec 16 16:53:48.970360 kernel: ACPI: Using IOAPIC for interrupt routing Dec 16 16:53:48.970373 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 16 16:53:48.970386 kernel: PCI: Using E820 reservations for host bridge windows Dec 16 16:53:48.970398 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Dec 16 16:53:48.970411 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Dec 16 16:53:48.970754 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Dec 16 16:53:48.970969 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Dec 16 16:53:48.971133 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Dec 16 16:53:48.971153 kernel: PCI host bridge to bus 0000:00 Dec 16 16:53:48.971356 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Dec 16 16:53:48.971510 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Dec 16 16:53:48.971658 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Dec 16 16:53:48.971905 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Dec 16 16:53:48.972118 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Dec 16 16:53:48.972365 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Dec 16 16:53:48.972536 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Dec 16 16:53:48.972730 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Dec 16 16:53:48.972978 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 conventional PCI endpoint Dec 16 16:53:48.973183 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfa000000-0xfbffffff pref] Dec 16 16:53:48.973359 kernel: pci 0000:00:01.0: BAR 1 [mem 0xfea50000-0xfea50fff] Dec 16 16:53:48.973520 kernel: pci 0000:00:01.0: ROM [mem 0xfea40000-0xfea4ffff pref] Dec 16 16:53:48.973679 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Dec 16 16:53:48.975882 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.976057 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea51000-0xfea51fff] Dec 16 16:53:48.976256 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:53:48.976430 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 16:53:48.976640 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:53:48.976861 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.977048 kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea52000-0xfea52fff] Dec 16 16:53:48.977222 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 
16:53:48.977385 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 16:53:48.977545 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:53:48.977767 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.979991 kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea53000-0xfea53fff] Dec 16 16:53:48.980162 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:53:48.980360 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 16:53:48.980522 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:53:48.980707 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.980903 kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea54000-0xfea54fff] Dec 16 16:53:48.981075 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:53:48.981256 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 16:53:48.981419 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:53:48.981604 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.981769 kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea55000-0xfea55fff] Dec 16 16:53:48.984005 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:53:48.984228 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 16:53:48.984406 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:53:48.984582 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.984757 kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea56000-0xfea56fff] Dec 16 16:53:48.984933 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:53:48.985090 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 16:53:48.985292 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:53:48.985464 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.985633 kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea57000-0xfea57fff] Dec 16 16:53:48.985793 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:53:48.988025 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 16:53:48.988210 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:53:48.988391 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Dec 16 16:53:48.988556 kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea58000-0xfea58fff] Dec 16 16:53:48.988726 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:53:48.990505 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 16:53:48.990677 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:53:48.990919 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Dec 16 16:53:48.991086 kernel: pci 0000:00:03.0: BAR 0 [io 0xc0c0-0xc0df] Dec 16 16:53:48.991265 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfea59000-0xfea59fff] Dec 16 16:53:48.991428 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfd000000-0xfd003fff 64bit pref] Dec 16 16:53:48.991612 kernel: pci 0000:00:03.0: ROM [mem 0xfea00000-0xfea3ffff pref] Dec 16 16:53:48.991791 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint Dec 16 16:53:48.991986 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc07f] Dec 16 16:53:48.992161 kernel: pci 0000:00:04.0: BAR 1 
[mem 0xfea5a000-0xfea5afff] Dec 16 16:53:48.992333 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfd004000-0xfd007fff 64bit pref] Dec 16 16:53:48.992503 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Dec 16 16:53:48.992664 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Dec 16 16:53:48.994900 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Dec 16 16:53:48.995085 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0e0-0xc0ff] Dec 16 16:53:48.995267 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea5b000-0xfea5bfff] Dec 16 16:53:48.995449 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Dec 16 16:53:48.995640 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Dec 16 16:53:48.995831 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge Dec 16 16:53:48.996022 kernel: pci 0000:01:00.0: BAR 0 [mem 0xfda00000-0xfda000ff 64bit] Dec 16 16:53:48.996232 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:53:48.996400 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 16:53:48.996566 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:53:48.996796 kernel: pci_bus 0000:02: extended config space not accessible Dec 16 16:53:48.998696 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 conventional PCI endpoint Dec 16 16:53:48.998928 kernel: pci 0000:02:01.0: BAR 0 [mem 0xfd800000-0xfd80000f] Dec 16 16:53:48.999101 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:53:48.999307 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Dec 16 16:53:48.999475 kernel: pci 0000:03:00.0: BAR 0 [mem 0xfe800000-0xfe803fff 64bit] Dec 16 16:53:48.999660 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 16:53:48.999893 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Dec 16 16:53:49.000064 kernel: pci 0000:04:00.0: BAR 4 [mem 0xfca00000-0xfca03fff 64bit pref] Dec 16 16:53:49.000240 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:53:49.000410 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:53:49.000576 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:53:49.000752 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:53:49.003627 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:53:49.003803 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:53:49.003823 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Dec 16 16:53:49.003876 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Dec 16 16:53:49.003897 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Dec 16 16:53:49.003918 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Dec 16 16:53:49.003932 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Dec 16 16:53:49.003945 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Dec 16 16:53:49.003958 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Dec 16 16:53:49.003971 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Dec 16 16:53:49.003984 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Dec 16 16:53:49.003997 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Dec 16 16:53:49.004009 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Dec 16 16:53:49.004027 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Dec 16 16:53:49.004040 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Dec 
16 16:53:49.004053 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Dec 16 16:53:49.004066 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Dec 16 16:53:49.004078 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Dec 16 16:53:49.004091 kernel: iommu: Default domain type: Translated Dec 16 16:53:49.004104 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 16 16:53:49.004117 kernel: PCI: Using ACPI for IRQ routing Dec 16 16:53:49.004130 kernel: PCI: pci_cache_line_size set to 64 bytes Dec 16 16:53:49.004155 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Dec 16 16:53:49.004168 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Dec 16 16:53:49.004340 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Dec 16 16:53:49.004510 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Dec 16 16:53:49.004654 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Dec 16 16:53:49.004672 kernel: vgaarb: loaded Dec 16 16:53:49.004684 kernel: clocksource: Switched to clocksource kvm-clock Dec 16 16:53:49.004696 kernel: VFS: Disk quotas dquot_6.6.0 Dec 16 16:53:49.004707 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 16 16:53:49.004725 kernel: pnp: PnP ACPI init Dec 16 16:53:49.004926 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Dec 16 16:53:49.004962 kernel: pnp: PnP ACPI: found 5 devices Dec 16 16:53:49.004975 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 16 16:53:49.004988 kernel: NET: Registered PF_INET protocol family Dec 16 16:53:49.005002 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 16 16:53:49.005015 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Dec 16 16:53:49.005028 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 16 16:53:49.005048 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 16 16:53:49.005061 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Dec 16 16:53:49.005074 kernel: TCP: Hash tables configured (established 16384 bind 16384) Dec 16 16:53:49.005087 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 16:53:49.005100 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Dec 16 16:53:49.005113 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 16 16:53:49.005126 kernel: NET: Registered PF_XDP protocol family Dec 16 16:53:49.005297 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Dec 16 16:53:49.005477 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Dec 16 16:53:49.005663 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Dec 16 16:53:49.007829 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Dec 16 16:53:49.008016 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Dec 16 16:53:49.008178 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Dec 16 16:53:49.008355 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Dec 16 16:53:49.008527 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Dec 16 16:53:49.008695 kernel: pci 0000:00:02.0: bridge window [io 
0x1000-0x1fff]: assigned Dec 16 16:53:49.008909 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned Dec 16 16:53:49.009072 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned Dec 16 16:53:49.009247 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned Dec 16 16:53:49.009408 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned Dec 16 16:53:49.009568 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned Dec 16 16:53:49.009728 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned Dec 16 16:53:49.012671 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned Dec 16 16:53:49.012915 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Dec 16 16:53:49.013129 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Dec 16 16:53:49.013328 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Dec 16 16:53:49.013491 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Dec 16 16:53:49.013669 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Dec 16 16:53:49.013864 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:53:49.014027 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Dec 16 16:53:49.014226 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Dec 16 16:53:49.014390 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Dec 16 16:53:49.014552 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:53:49.014733 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Dec 16 16:53:49.017940 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Dec 16 16:53:49.018108 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Dec 16 16:53:49.018293 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:53:49.018457 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Dec 16 16:53:49.018621 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Dec 16 16:53:49.018799 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Dec 16 16:53:49.020005 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:53:49.020182 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Dec 16 16:53:49.020358 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Dec 16 16:53:49.020520 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Dec 16 16:53:49.020690 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:53:49.021954 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Dec 16 16:53:49.022124 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Dec 16 16:53:49.022299 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Dec 16 16:53:49.022460 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:53:49.022621 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Dec 16 16:53:49.022786 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Dec 16 16:53:49.022992 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Dec 16 16:53:49.023154 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:53:49.023398 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Dec 16 16:53:49.023675 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Dec 16 16:53:49.025015 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Dec 16 16:53:49.025206 kernel: pci 
0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:53:49.025364 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Dec 16 16:53:49.025513 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Dec 16 16:53:49.025660 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Dec 16 16:53:49.026847 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Dec 16 16:53:49.027008 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Dec 16 16:53:49.027970 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Dec 16 16:53:49.028135 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Dec 16 16:53:49.028324 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Dec 16 16:53:49.028477 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Dec 16 16:53:49.028639 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Dec 16 16:53:49.028846 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Dec 16 16:53:49.029004 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Dec 16 16:53:49.029171 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Dec 16 16:53:49.029362 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Dec 16 16:53:49.029517 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Dec 16 16:53:49.029674 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Dec 16 16:53:49.029870 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Dec 16 16:53:49.030046 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Dec 16 16:53:49.030253 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Dec 16 16:53:49.030416 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Dec 16 16:53:49.030567 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Dec 16 16:53:49.030717 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Dec 16 16:53:49.030926 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Dec 16 16:53:49.031084 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Dec 16 16:53:49.031253 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Dec 16 16:53:49.031425 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Dec 16 16:53:49.031578 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Dec 16 16:53:49.031728 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Dec 16 16:53:49.031954 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Dec 16 16:53:49.032109 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Dec 16 16:53:49.032274 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Dec 16 16:53:49.032296 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Dec 16 16:53:49.032317 kernel: PCI: CLS 0 bytes, default 64 Dec 16 16:53:49.032331 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 16 16:53:49.032345 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Dec 16 16:53:49.032359 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 16 16:53:49.032373 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 16 16:53:49.032386 kernel: Initialise system trusted keyrings Dec 16 16:53:49.032400 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 16 16:53:49.032414 
kernel: Key type asymmetric registered Dec 16 16:53:49.032427 kernel: Asymmetric key parser 'x509' registered Dec 16 16:53:49.032445 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 16 16:53:49.032458 kernel: io scheduler mq-deadline registered Dec 16 16:53:49.032472 kernel: io scheduler kyber registered Dec 16 16:53:49.032485 kernel: io scheduler bfq registered Dec 16 16:53:49.032644 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 16 16:53:49.032827 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 16 16:53:49.032992 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.033181 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 16 16:53:49.033365 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 16 16:53:49.033526 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.033688 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 16 16:53:49.033884 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 16 16:53:49.034047 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.034222 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 16 16:53:49.034393 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 16 16:53:49.034555 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.034718 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 16 16:53:49.034902 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 16 16:53:49.035064 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.035239 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 16 16:53:49.035409 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 16 16:53:49.035571 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.035732 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 16 16:53:49.035935 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 16 16:53:49.036098 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.036275 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 16 16:53:49.036446 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 16 16:53:49.036607 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 16 16:53:49.036628 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 16 16:53:49.036643 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 16 16:53:49.036657 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 16 16:53:49.036671 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 16 16:53:49.036685 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 16 16:53:49.036705 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 16 
16:53:49.036719 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 16 16:53:49.036733 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 16 16:53:49.036933 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 16 16:53:49.036955 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 16 16:53:49.037134 kernel: rtc_cmos 00:03: registered as rtc0 Dec 16 16:53:49.037323 kernel: rtc_cmos 00:03: setting system clock to 2025-12-16T16:53:48 UTC (1765904028) Dec 16 16:53:49.037476 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 16 16:53:49.037504 kernel: intel_pstate: CPU model not supported Dec 16 16:53:49.037518 kernel: NET: Registered PF_INET6 protocol family Dec 16 16:53:49.037532 kernel: Segment Routing with IPv6 Dec 16 16:53:49.037546 kernel: In-situ OAM (IOAM) with IPv6 Dec 16 16:53:49.037559 kernel: NET: Registered PF_PACKET protocol family Dec 16 16:53:49.037573 kernel: Key type dns_resolver registered Dec 16 16:53:49.037586 kernel: IPI shorthand broadcast: enabled Dec 16 16:53:49.037600 kernel: sched_clock: Marking stable (3563004397, 226748940)->(3919340890, -129587553) Dec 16 16:53:49.037613 kernel: registered taskstats version 1 Dec 16 16:53:49.037631 kernel: Loading compiled-in X.509 certificates Dec 16 16:53:49.037649 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.61-flatcar: 0d0c78e6590cb40d27f1cef749ef9f2f3425f38d' Dec 16 16:53:49.037663 kernel: Demotion targets for Node 0: null Dec 16 16:53:49.037676 kernel: Key type .fscrypt registered Dec 16 16:53:49.037689 kernel: Key type fscrypt-provisioning registered Dec 16 16:53:49.037703 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 16 16:53:49.037716 kernel: ima: Allocated hash algorithm: sha1 Dec 16 16:53:49.037730 kernel: ima: No architecture policies found Dec 16 16:53:49.037743 kernel: clk: Disabling unused clocks Dec 16 16:53:49.037760 kernel: Warning: unable to open an initial console. Dec 16 16:53:49.037775 kernel: Freeing unused kernel image (initmem) memory: 46188K Dec 16 16:53:49.037788 kernel: Write protecting the kernel read-only data: 40960k Dec 16 16:53:49.037802 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Dec 16 16:53:49.037832 kernel: Run /init as init process Dec 16 16:53:49.037846 kernel: with arguments: Dec 16 16:53:49.037859 kernel: /init Dec 16 16:53:49.037872 kernel: with environment: Dec 16 16:53:49.037886 kernel: HOME=/ Dec 16 16:53:49.037904 kernel: TERM=linux Dec 16 16:53:49.037927 systemd[1]: Successfully made /usr/ read-only. Dec 16 16:53:49.037946 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 16:53:49.037962 systemd[1]: Detected virtualization kvm. Dec 16 16:53:49.037976 systemd[1]: Detected architecture x86-64. Dec 16 16:53:49.037989 systemd[1]: Running in initrd. Dec 16 16:53:49.038003 systemd[1]: No hostname configured, using default hostname. Dec 16 16:53:49.038023 systemd[1]: Hostname set to . Dec 16 16:53:49.038037 systemd[1]: Initializing machine ID from VM UUID. Dec 16 16:53:49.038052 systemd[1]: Queued start job for default target initrd.target. Dec 16 16:53:49.038066 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Dec 16 16:53:49.038080 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:53:49.038095 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 16 16:53:49.038122 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 16:53:49.038136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 16 16:53:49.038155 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 16 16:53:49.038170 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 16 16:53:49.038207 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 16 16:53:49.038223 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:53:49.038237 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:53:49.038251 systemd[1]: Reached target paths.target - Path Units. Dec 16 16:53:49.038266 systemd[1]: Reached target slices.target - Slice Units. Dec 16 16:53:49.038285 systemd[1]: Reached target swap.target - Swaps. Dec 16 16:53:49.038299 systemd[1]: Reached target timers.target - Timer Units. Dec 16 16:53:49.038314 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 16:53:49.038328 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 16:53:49.038343 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 16 16:53:49.038357 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Dec 16 16:53:49.038371 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:53:49.038385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 16:53:49.038400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:53:49.038419 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 16:53:49.038433 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 16 16:53:49.038448 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 16:53:49.038462 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 16 16:53:49.038477 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Dec 16 16:53:49.038491 systemd[1]: Starting systemd-fsck-usr.service... Dec 16 16:53:49.038505 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 16:53:49.038520 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 16:53:49.038539 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:53:49.038553 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 16 16:53:49.038568 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:53:49.038587 systemd[1]: Finished systemd-fsck-usr.service. Dec 16 16:53:49.038655 systemd-journald[211]: Collecting audit messages is disabled. Dec 16 16:53:49.038696 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Dec 16 16:53:49.038712 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 16:53:49.038726 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 16 16:53:49.038764 kernel: Bridge firewalling registered Dec 16 16:53:49.038785 systemd-journald[211]: Journal started Dec 16 16:53:49.038857 systemd-journald[211]: Runtime Journal (/run/log/journal/6522ab67f60d4d39a71cfe792b489154) is 4.7M, max 37.8M, 33.1M free. Dec 16 16:53:48.972341 systemd-modules-load[212]: Inserted module 'overlay' Dec 16 16:53:49.093671 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 16:53:49.027998 systemd-modules-load[212]: Inserted module 'br_netfilter' Dec 16 16:53:49.094779 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 16:53:49.096185 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:53:49.099953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 16 16:53:49.102972 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:53:49.107077 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 16:53:49.108952 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 16:53:49.130012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:53:49.137180 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:53:49.142078 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Dec 16 16:53:49.149958 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:53:49.155041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 16:53:49.156181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 16:53:49.161985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 16 16:53:49.192961 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=a214a2d85e162c493e8b13db2df50a43e1005a0e4854a1ae089a14f442a30022 Dec 16 16:53:49.218178 systemd-resolved[248]: Positive Trust Anchors: Dec 16 16:53:49.219144 systemd-resolved[248]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 16:53:49.219202 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 16:53:49.227518 systemd-resolved[248]: Defaulting to hostname 'linux'. Dec 16 16:53:49.230806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 16:53:49.232556 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:53:49.313850 kernel: SCSI subsystem initialized Dec 16 16:53:49.325832 kernel: Loading iSCSI transport class v2.0-870. Dec 16 16:53:49.339962 kernel: iscsi: registered transport (tcp) Dec 16 16:53:49.366981 kernel: iscsi: registered transport (qla4xxx) Dec 16 16:53:49.367073 kernel: QLogic iSCSI HBA Driver Dec 16 16:53:49.393327 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 16:53:49.415636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:53:49.417343 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 16:53:49.482616 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 16 16:53:49.486868 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 16 16:53:49.549860 kernel: raid6: sse2x4 gen() 7479 MB/s Dec 16 16:53:49.567852 kernel: raid6: sse2x2 gen() 5296 MB/s Dec 16 16:53:49.586487 kernel: raid6: sse2x1 gen() 5340 MB/s Dec 16 16:53:49.586534 kernel: raid6: using algorithm sse2x4 gen() 7479 MB/s Dec 16 16:53:49.605612 kernel: raid6: .... xor() 4882 MB/s, rmw enabled Dec 16 16:53:49.605669 kernel: raid6: using ssse3x2 recovery algorithm Dec 16 16:53:49.632844 kernel: xor: automatically using best checksumming function avx Dec 16 16:53:49.831843 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 16 16:53:49.841327 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 16 16:53:49.845144 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:53:49.882105 systemd-udevd[459]: Using default interface naming scheme 'v255'. Dec 16 16:53:49.892194 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 16:53:49.896950 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 16 16:53:49.926222 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Dec 16 16:53:49.961329 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 16:53:49.965118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 16:53:50.095292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:53:50.098032 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Dec 16 16:53:50.234102 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Dec 16 16:53:50.254850 kernel: cryptd: max_cpu_qlen set to 1000 Dec 16 16:53:50.254910 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Dec 16 16:53:50.283441 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 16 16:53:50.283513 kernel: GPT:17805311 != 125829119 Dec 16 16:53:50.283533 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 16 16:53:50.283561 kernel: AES CTR mode by8 optimization enabled Dec 16 16:53:50.287350 kernel: GPT:17805311 != 125829119 Dec 16 16:53:50.287384 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 16 16:53:50.289183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:53:50.301059 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 16:53:50.308117 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Dec 16 16:53:50.301271 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:53:50.308960 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:53:50.327560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:53:50.328856 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:53:50.348858 kernel: ACPI: bus type USB registered Dec 16 16:53:50.356824 kernel: usbcore: registered new interface driver usbfs Dec 16 16:53:50.366824 kernel: usbcore: registered new interface driver hub Dec 16 16:53:50.374837 kernel: usbcore: registered new device driver usb Dec 16 16:53:50.389875 kernel: libata version 3.00 loaded. Dec 16 16:53:50.419679 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Dec 16 16:53:50.537277 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 16:53:50.537689 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Dec 16 16:53:50.537935 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Dec 16 16:53:50.538139 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Dec 16 16:53:50.538420 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Dec 16 16:53:50.538628 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Dec 16 16:53:50.538874 kernel: hub 1-0:1.0: USB hub found Dec 16 16:53:50.539179 kernel: hub 1-0:1.0: 4 ports detected Dec 16 16:53:50.539395 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Dec 16 16:53:50.539714 kernel: hub 2-0:1.0: USB hub found Dec 16 16:53:50.539978 kernel: hub 2-0:1.0: 4 ports detected Dec 16 16:53:50.540204 kernel: ahci 0000:00:1f.2: version 3.0 Dec 16 16:53:50.540422 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Dec 16 16:53:50.540446 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Dec 16 16:53:50.540648 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Dec 16 16:53:50.540906 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Dec 16 16:53:50.541097 kernel: scsi host0: ahci Dec 16 16:53:50.541323 kernel: scsi host1: ahci Dec 16 16:53:50.541519 kernel: scsi host2: ahci Dec 16 16:53:50.541765 kernel: scsi host3: ahci Dec 16 16:53:50.541976 kernel: scsi host4: ahci Dec 16 16:53:50.542183 kernel: scsi host5: ahci Dec 16 16:53:50.542374 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 lpm-pol 1 Dec 16 16:53:50.542396 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 lpm-pol 1 Dec 16 16:53:50.542415 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 lpm-pol 1 Dec 16 16:53:50.542433 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 lpm-pol 1 Dec 16 16:53:50.542505 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 lpm-pol 1 Dec 16 16:53:50.542525 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 lpm-pol 1 Dec 16 16:53:50.539031 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:53:50.562989 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Dec 16 16:53:50.585101 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 16:53:50.596061 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Dec 16 16:53:50.596995 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Dec 16 16:53:50.600582 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 16 16:53:50.625387 disk-uuid[611]: Primary Header is updated. Dec 16 16:53:50.625387 disk-uuid[611]: Secondary Entries is updated. Dec 16 16:53:50.625387 disk-uuid[611]: Secondary Header is updated. 
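
The ahci probe above reports "6/6 ports implemented (port mask 0x3f)"; the port count is simply the number of set bits in that mask. A one-line check, assuming nothing beyond the value printed in the log:

    port_mask = 0x3F                           # "port mask 0x3f" from the ahci line above
    implemented = bin(port_mask).count("1")    # population count of the mask
    print(f"{implemented} ports implemented")  # -> 6, matching "6/6 ports implemented"
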
Dec 16 16:53:50.632924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:53:50.639826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:53:50.667193 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Dec 16 16:53:50.763939 kernel: ata1: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.764022 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.764042 kernel: ata2: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.764908 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.766261 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.769060 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 16 16:53:50.815834 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 16 16:53:50.822047 kernel: usbcore: registered new interface driver usbhid Dec 16 16:53:50.822093 kernel: usbhid: USB HID core driver Dec 16 16:53:50.829825 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3 Dec 16 16:53:50.833828 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Dec 16 16:53:50.853426 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 16 16:53:50.855294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 16:53:50.856100 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:53:50.857765 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 16:53:50.860554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 16 16:53:50.883066 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 16 16:53:51.643206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 16 16:53:51.644823 disk-uuid[612]: The operation has completed successfully. Dec 16 16:53:51.704040 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 16 16:53:51.704263 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 16 16:53:51.763009 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 16 16:53:51.794394 sh[638]: Success Dec 16 16:53:51.819284 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 16 16:53:51.819393 kernel: device-mapper: uevent: version 1.0.3 Dec 16 16:53:51.821271 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Dec 16 16:53:51.835843 kernel: device-mapper: verity: sha256 using shash "sha256-avx" Dec 16 16:53:51.890154 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 16 16:53:51.894931 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 16 16:53:51.910433 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 16 16:53:51.921988 kernel: BTRFS: device fsid a6ae7f96-a076-4d3c-81ed-46dd341492f8 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (650) Dec 16 16:53:51.924827 kernel: BTRFS info (device dm-0): first mount of filesystem a6ae7f96-a076-4d3c-81ed-46dd341492f8 Dec 16 16:53:51.924863 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:53:51.937063 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 16 16:53:51.937141 kernel: BTRFS info (device dm-0): enabling free space tree Dec 16 16:53:51.940749 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 16 16:53:51.942996 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Dec 16 16:53:51.944815 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 16 16:53:51.946063 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 16 16:53:51.949942 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 16 16:53:51.992878 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (683) Dec 16 16:53:51.996152 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:53:51.998830 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:53:52.006441 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:53:52.006499 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:53:52.014837 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:53:52.017151 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 16 16:53:52.020526 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 16 16:53:52.103712 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 16:53:52.107876 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 16:53:52.183505 systemd-networkd[820]: lo: Link UP Dec 16 16:53:52.183521 systemd-networkd[820]: lo: Gained carrier Dec 16 16:53:52.188290 systemd-networkd[820]: Enumeration completed Dec 16 16:53:52.189571 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:53:52.189577 systemd-networkd[820]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 16:53:52.190117 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 16:53:52.192189 systemd[1]: Reached target network.target - Network. Dec 16 16:53:52.193442 systemd-networkd[820]: eth0: Link UP Dec 16 16:53:52.193684 systemd-networkd[820]: eth0: Gained carrier Dec 16 16:53:52.193699 systemd-networkd[820]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 16 16:53:52.216060 systemd-networkd[820]: eth0: DHCPv4 address 10.230.10.122/30, gateway 10.230.10.121 acquired from 10.230.10.121 Dec 16 16:53:52.259954 ignition[740]: Ignition 2.22.0 Dec 16 16:53:52.261388 ignition[740]: Stage: fetch-offline Dec 16 16:53:52.261555 ignition[740]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:52.261575 ignition[740]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:52.265328 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 16:53:52.261843 ignition[740]: parsed url from cmdline: "" Dec 16 16:53:52.261851 ignition[740]: no config URL provided Dec 16 16:53:52.261868 ignition[740]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 16:53:52.261885 ignition[740]: no config at "/usr/lib/ignition/user.ign" Dec 16 16:53:52.261902 ignition[740]: failed to fetch config: resource requires networking Dec 16 16:53:52.262215 ignition[740]: Ignition finished successfully Dec 16 16:53:52.270029 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 16 16:53:52.315167 ignition[831]: Ignition 2.22.0 Dec 16 16:53:52.315923 ignition[831]: Stage: fetch Dec 16 16:53:52.316213 ignition[831]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:52.316234 ignition[831]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:52.316364 ignition[831]: parsed url from cmdline: "" Dec 16 16:53:52.316371 ignition[831]: no config URL provided Dec 16 16:53:52.316381 ignition[831]: reading system config file "/usr/lib/ignition/user.ign" Dec 16 16:53:52.316398 ignition[831]: no config at "/usr/lib/ignition/user.ign" Dec 16 16:53:52.316626 ignition[831]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Dec 16 16:53:52.318142 ignition[831]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Dec 16 16:53:52.318193 ignition[831]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Dec 16 16:53:52.337200 ignition[831]: GET result: OK Dec 16 16:53:52.337501 ignition[831]: parsing config with SHA512: 2148920e181083807ffaf17667424bec4887d424f40ee14d16f9e60654dcc9da54c714530569df5c5ec0b086312d382de74f92a1ba9b85d60ef8f910f80fd31a Dec 16 16:53:52.344616 unknown[831]: fetched base config from "system" Dec 16 16:53:52.344636 unknown[831]: fetched base config from "system" Dec 16 16:53:52.345011 ignition[831]: fetch: fetch complete Dec 16 16:53:52.344645 unknown[831]: fetched user config from "openstack" Dec 16 16:53:52.345019 ignition[831]: fetch: fetch passed Dec 16 16:53:52.348328 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 16 16:53:52.345115 ignition[831]: Ignition finished successfully Dec 16 16:53:52.350886 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 16 16:53:52.391248 ignition[837]: Ignition 2.22.0 Dec 16 16:53:52.391276 ignition[837]: Stage: kargs Dec 16 16:53:52.391527 ignition[837]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:52.391546 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:52.394898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 16 16:53:52.392840 ignition[837]: kargs: kargs passed Dec 16 16:53:52.392919 ignition[837]: Ignition finished successfully Dec 16 16:53:52.398901 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
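
In the fetch stage above, Ignition gives up waiting for a config drive, GETs http://169.254.169.254/openstack/latest/user_data, and logs the SHA512 of the config it received. Here is a rough stand-alone sketch of the same fetch-and-digest step using only the standard library; it works only from inside an instance that can reach the metadata service and is not Ignition's actual code path.

    import hashlib
    import urllib.request

    # Metadata URL exactly as logged by Ignition's fetch stage; reachable
    # only from inside the OpenStack instance itself.
    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

    with urllib.request.urlopen(USER_DATA_URL, timeout=5) as resp:
        user_data = resp.read()

    # Ignition logs the SHA512 of the raw config before parsing it.
    print("parsing config with SHA512:", hashlib.sha512(user_data).hexdigest())
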
Dec 16 16:53:52.443013 ignition[843]: Ignition 2.22.0 Dec 16 16:53:52.443882 ignition[843]: Stage: disks Dec 16 16:53:52.444151 ignition[843]: no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:52.444171 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:52.445967 ignition[843]: disks: disks passed Dec 16 16:53:52.446040 ignition[843]: Ignition finished successfully Dec 16 16:53:52.447777 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 16 16:53:52.449861 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 16 16:53:52.450678 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 16 16:53:52.452324 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 16:53:52.453883 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 16:53:52.455425 systemd[1]: Reached target basic.target - Basic System. Dec 16 16:53:52.458443 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 16 16:53:52.490700 systemd-fsck[851]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Dec 16 16:53:52.495497 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 16 16:53:52.498474 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 16 16:53:52.639820 kernel: EXT4-fs (vda9): mounted filesystem e48ca59c-1206-4abd-b121-5e9b35e49852 r/w with ordered data mode. Quota mode: none. Dec 16 16:53:52.641385 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 16 16:53:52.642701 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 16 16:53:52.645243 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 16 16:53:52.647174 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 16 16:53:52.649648 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Dec 16 16:53:52.652045 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Dec 16 16:53:52.652853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 16 16:53:52.652911 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 16:53:52.671560 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 16 16:53:52.685569 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (859) Dec 16 16:53:52.684825 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 16 16:53:52.690166 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:53:52.692832 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:53:52.702317 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:53:52.702365 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:53:52.707397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
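
The fsck summary above ("ROOT: clean, 15/1628000 files, 120826/1617920 blocks") corresponds to very low utilisation on the freshly provisioned root filesystem; the quick arithmetic below uses only those four numbers.

    # "ROOT: clean, 15/1628000 files, 120826/1617920 blocks"
    inodes_used, inodes_total = 15, 1_628_000
    blocks_used, blocks_total = 120_826, 1_617_920

    print(f"inodes in use: {inodes_used / inodes_total:.4%}")   # ~0.0009%
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")   # ~7.47%
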
Dec 16 16:53:52.758831 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:53:52.781728 initrd-setup-root[888]: cut: /sysroot/etc/passwd: No such file or directory Dec 16 16:53:52.788265 initrd-setup-root[895]: cut: /sysroot/etc/group: No such file or directory Dec 16 16:53:52.794653 initrd-setup-root[902]: cut: /sysroot/etc/shadow: No such file or directory Dec 16 16:53:52.800396 initrd-setup-root[909]: cut: /sysroot/etc/gshadow: No such file or directory Dec 16 16:53:52.913377 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 16 16:53:52.915913 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 16 16:53:52.918994 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 16 16:53:52.938011 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 16 16:53:52.941632 kernel: BTRFS info (device vda6): last unmount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:53:52.967607 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 16 16:53:52.990973 ignition[977]: INFO : Ignition 2.22.0 Dec 16 16:53:52.992298 ignition[977]: INFO : Stage: mount Dec 16 16:53:52.992298 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:52.992298 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:52.994925 ignition[977]: INFO : mount: mount passed Dec 16 16:53:52.994925 ignition[977]: INFO : Ignition finished successfully Dec 16 16:53:52.995830 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 16 16:53:53.786858 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:53:53.824179 systemd-networkd[820]: eth0: Gained IPv6LL Dec 16 16:53:55.330881 systemd-networkd[820]: eth0: Ignoring DHCPv6 address 2a02:1348:179:829e:24:19ff:fee6:a7a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:829e:24:19ff:fee6:a7a/64 assigned by NDisc. Dec 16 16:53:55.330896 systemd-networkd[820]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 16:53:55.799945 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:53:59.808831 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:53:59.816918 coreos-metadata[861]: Dec 16 16:53:59.816 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:53:59.841465 coreos-metadata[861]: Dec 16 16:53:59.841 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 16:53:59.856396 coreos-metadata[861]: Dec 16 16:53:59.856 INFO Fetch successful Dec 16 16:53:59.858403 coreos-metadata[861]: Dec 16 16:53:59.858 INFO wrote hostname srv-jrcza.gb1.brightbox.com to /sysroot/etc/hostname Dec 16 16:53:59.860330 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Dec 16 16:53:59.860550 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Dec 16 16:53:59.865093 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 16 16:53:59.887395 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
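
The coreos-metadata lines above show the fallback that the repeated "config-2: Can't lookup blockdev" messages lead up to: no config drive ever appears, so the agent queries the metadata endpoint for the hostname and writes it to /sysroot/etc/hostname. A hedged sketch of that fallback using only the path and URL from the log; the helper below is illustrative, not the agent's real implementation.

    import os
    import urllib.request

    CONFIG_DRIVE = "/dev/disk/by-label/config-2"                      # probed first, per the log
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # fallback the agent fetches

    def fetch_hostname():
        # Prefer the config drive; fall back to the metadata API when the
        # device never shows up, which is what happened on this boot.
        if os.path.exists(CONFIG_DRIVE):
            raise NotImplementedError("config-drive parsing not sketched here")
        with urllib.request.urlopen(HOSTNAME_URL, timeout=5) as resp:
            return resp.read().decode().strip()

    # The agent then writes the result, e.g.:
    # open("/sysroot/etc/hostname", "w").write(fetch_hostname() + "\n")
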
Dec 16 16:53:59.917845 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (993) Dec 16 16:53:59.924884 kernel: BTRFS info (device vda6): first mount of filesystem 7e9ead35-f0ec-40e8-bc31-5061934f865a Dec 16 16:53:59.924939 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 16 16:53:59.931082 kernel: BTRFS info (device vda6): turning on async discard Dec 16 16:53:59.931125 kernel: BTRFS info (device vda6): enabling free space tree Dec 16 16:53:59.934938 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Dec 16 16:53:59.974967 ignition[1011]: INFO : Ignition 2.22.0 Dec 16 16:53:59.974967 ignition[1011]: INFO : Stage: files Dec 16 16:53:59.977039 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:53:59.977039 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:53:59.977039 ignition[1011]: DEBUG : files: compiled without relabeling support, skipping Dec 16 16:53:59.979943 ignition[1011]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 16 16:53:59.979943 ignition[1011]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 16 16:53:59.987827 ignition[1011]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 16 16:53:59.987827 ignition[1011]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 16 16:53:59.987827 ignition[1011]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 16 16:53:59.987827 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 16:53:59.987827 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Dec 16 16:53:59.984291 unknown[1011]: wrote ssh authorized keys file for user: core Dec 16 16:54:00.170034 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 16 16:54:00.482902 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 16 16:54:00.489937 ignition[1011]: INFO : 
files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 16:54:00.489937 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Dec 16 16:54:00.857239 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 16 16:54:03.017932 ignition[1011]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Dec 16 16:54:03.020445 ignition[1011]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 16 16:54:03.069129 ignition[1011]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 16 16:54:03.075854 ignition[1011]: INFO : files: files passed Dec 16 16:54:03.075854 ignition[1011]: INFO : Ignition finished successfully Dec 16 16:54:03.077899 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 16 16:54:03.083035 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 16 16:54:03.086278 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 16 16:54:03.108761 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 16 16:54:03.108994 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
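
The files stage above writes the helm tarball and several manifests, creates the /etc/extensions/kubernetes.raw symlink plus the kubernetes sysext image, and installs and preset-enables prepare-helm.service. For orientation, here is a rough sketch of the kind of Ignition v3 config that would drive those operations; the field names follow the public Ignition spec as I understand it and the values come from the paths in the log, but this is illustrative, not the actual user config this host booted with.

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/opt/helm-v3.17.0-linux-amd64.tar.gz",
                    "contents": {"source": "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"},
                },
                {
                    "path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                    "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw"},
                },
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
                }
            ],
        },
        "systemd": {
            "units": [
                # Unit body omitted; the log only shows it being written and enabled.
                {"name": "prepare-helm.service", "enabled": True},
            ]
        },
    }

    print(json.dumps(config, indent=2))
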
Dec 16 16:54:03.116628 initrd-setup-root-after-ignition[1041]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:54:03.118532 initrd-setup-root-after-ignition[1041]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:54:03.119761 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 16 16:54:03.120461 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 16:54:03.122313 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 16 16:54:03.124599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 16 16:54:03.179525 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 16 16:54:03.179725 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 16 16:54:03.182034 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 16 16:54:03.182913 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 16 16:54:03.184496 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 16 16:54:03.185656 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 16 16:54:03.229054 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 16:54:03.233047 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 16 16:54:03.257743 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:54:03.259563 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:54:03.260482 systemd[1]: Stopped target timers.target - Timer Units. Dec 16 16:54:03.262077 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 16 16:54:03.262281 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 16 16:54:03.264157 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 16 16:54:03.265104 systemd[1]: Stopped target basic.target - Basic System. Dec 16 16:54:03.266670 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 16 16:54:03.268149 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 16 16:54:03.269531 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 16 16:54:03.271082 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Dec 16 16:54:03.272758 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 16 16:54:03.274358 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 16 16:54:03.276093 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 16 16:54:03.277565 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 16 16:54:03.279112 systemd[1]: Stopped target swap.target - Swaps. Dec 16 16:54:03.280507 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 16 16:54:03.280763 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 16 16:54:03.282396 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:54:03.283365 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:54:03.284772 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 16 16:54:03.285010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 16 16:54:03.286418 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 16 16:54:03.286649 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 16 16:54:03.288666 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 16 16:54:03.288877 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 16 16:54:03.290598 systemd[1]: ignition-files.service: Deactivated successfully. Dec 16 16:54:03.290858 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 16 16:54:03.299941 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 16 16:54:03.303089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 16 16:54:03.306187 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 16 16:54:03.306462 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:54:03.308044 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 16 16:54:03.309011 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 16 16:54:03.316481 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 16 16:54:03.318967 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 16 16:54:03.342856 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 16 16:54:03.354976 ignition[1065]: INFO : Ignition 2.22.0 Dec 16 16:54:03.354976 ignition[1065]: INFO : Stage: umount Dec 16 16:54:03.354976 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 16 16:54:03.354976 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Dec 16 16:54:03.354301 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 16 16:54:03.361618 ignition[1065]: INFO : umount: umount passed Dec 16 16:54:03.361618 ignition[1065]: INFO : Ignition finished successfully Dec 16 16:54:03.354502 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 16 16:54:03.357390 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 16 16:54:03.357546 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 16 16:54:03.360911 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 16 16:54:03.361023 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 16 16:54:03.362402 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 16 16:54:03.362474 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 16 16:54:03.363654 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 16 16:54:03.363724 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 16 16:54:03.365066 systemd[1]: Stopped target network.target - Network. Dec 16 16:54:03.366302 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 16 16:54:03.366423 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 16 16:54:03.367768 systemd[1]: Stopped target paths.target - Path Units. Dec 16 16:54:03.369150 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 16 16:54:03.372892 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:54:03.373782 systemd[1]: Stopped target slices.target - Slice Units. 
Dec 16 16:54:03.375227 systemd[1]: Stopped target sockets.target - Socket Units. Dec 16 16:54:03.376710 systemd[1]: iscsid.socket: Deactivated successfully. Dec 16 16:54:03.376825 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 16 16:54:03.378178 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 16 16:54:03.378243 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 16 16:54:03.379784 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 16 16:54:03.379927 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 16 16:54:03.381170 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 16 16:54:03.381248 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 16 16:54:03.382724 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 16 16:54:03.382832 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 16 16:54:03.384820 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 16 16:54:03.387095 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 16 16:54:03.390005 systemd-networkd[820]: eth0: DHCPv6 lease lost Dec 16 16:54:03.398399 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 16 16:54:03.398620 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 16 16:54:03.403837 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Dec 16 16:54:03.404279 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 16 16:54:03.404478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 16 16:54:03.407182 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Dec 16 16:54:03.408084 systemd[1]: Stopped target network-pre.target - Preparation for Network. Dec 16 16:54:03.409190 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 16 16:54:03.409266 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:54:03.412920 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 16 16:54:03.413635 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 16 16:54:03.413715 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 16 16:54:03.416239 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 16 16:54:03.416312 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:54:03.419267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 16 16:54:03.419353 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 16 16:54:03.421014 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 16 16:54:03.421085 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:54:03.423058 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:54:03.427884 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 16 16:54:03.427993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:54:03.436413 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 16 16:54:03.436699 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Dec 16 16:54:03.439237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 16 16:54:03.439407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 16 16:54:03.442197 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 16 16:54:03.442299 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:54:03.444142 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 16 16:54:03.444222 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 16 16:54:03.447052 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 16 16:54:03.447129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 16 16:54:03.447901 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 16 16:54:03.447978 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 16 16:54:03.450987 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 16 16:54:03.452541 systemd[1]: systemd-network-generator.service: Deactivated successfully. Dec 16 16:54:03.452619 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:54:03.456406 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 16 16:54:03.456483 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:54:03.459169 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 16 16:54:03.459246 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 16:54:03.460976 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 16 16:54:03.461058 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:54:03.461968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 16 16:54:03.462050 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:54:03.464563 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Dec 16 16:54:03.464647 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Dec 16 16:54:03.464716 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 16 16:54:03.464789 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Dec 16 16:54:03.465493 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 16 16:54:03.465649 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 16 16:54:03.476232 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 16 16:54:03.476380 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 16 16:54:03.480204 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 16 16:54:03.482428 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 16 16:54:03.503710 systemd[1]: Switching root. Dec 16 16:54:03.551643 systemd-journald[211]: Journal stopped Dec 16 16:54:05.131852 systemd-journald[211]: Received SIGTERM from PID 1 (systemd). 
Dec 16 16:54:05.132065 kernel: SELinux: policy capability network_peer_controls=1 Dec 16 16:54:05.132126 kernel: SELinux: policy capability open_perms=1 Dec 16 16:54:05.132155 kernel: SELinux: policy capability extended_socket_class=1 Dec 16 16:54:05.132183 kernel: SELinux: policy capability always_check_network=0 Dec 16 16:54:05.132210 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 16 16:54:05.132247 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 16 16:54:05.132267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 16 16:54:05.132291 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 16 16:54:05.132315 kernel: SELinux: policy capability userspace_initial_context=0 Dec 16 16:54:05.132347 kernel: audit: type=1403 audit(1765904043.841:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 16 16:54:05.132378 systemd[1]: Successfully loaded SELinux policy in 76.237ms. Dec 16 16:54:05.132435 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.312ms. Dec 16 16:54:05.132467 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Dec 16 16:54:05.132502 systemd[1]: Detected virtualization kvm. Dec 16 16:54:05.132524 systemd[1]: Detected architecture x86-64. Dec 16 16:54:05.132554 systemd[1]: Detected first boot. Dec 16 16:54:05.132575 systemd[1]: Hostname set to . Dec 16 16:54:05.132600 systemd[1]: Initializing machine ID from VM UUID. Dec 16 16:54:05.132646 zram_generator::config[1110]: No configuration found. Dec 16 16:54:05.132672 kernel: Guest personality initialized and is inactive Dec 16 16:54:05.132706 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Dec 16 16:54:05.132731 kernel: Initialized host personality Dec 16 16:54:05.132771 kernel: NET: Registered PF_VSOCK protocol family Dec 16 16:54:05.133910 systemd[1]: Populated /etc with preset unit settings. Dec 16 16:54:05.133996 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Dec 16 16:54:05.134023 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 16 16:54:05.134044 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 16 16:54:05.134082 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 16 16:54:05.134115 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 16 16:54:05.134145 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 16 16:54:05.134186 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 16 16:54:05.134216 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 16 16:54:05.134246 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 16 16:54:05.134289 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 16 16:54:05.134327 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 16 16:54:05.134348 systemd[1]: Created slice user.slice - User and Session Slice. Dec 16 16:54:05.134381 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
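
The long +/- string in the "systemd 256.8 running in system mode" line above is a compile-time feature list: a leading "+" means the feature was built in, "-" means it was not. A small parser over the first part of that string, copied verbatim from the log:

    # Leading portion of the feature string from the systemd line above.
    features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP "
                "-GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL")

    enabled  = [f[1:] for f in features.split() if f.startswith("+")]
    disabled = [f[1:] for f in features.split() if f.startswith("-")]

    print("built with:   ", ", ".join(enabled))   # PAM, AUDIT, SELINUX, ...
    print("built without:", ", ".join(disabled))  # APPARMOR, GCRYPT, GNUTLS, ACL
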
Dec 16 16:54:05.134404 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 16 16:54:05.134424 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 16 16:54:05.134445 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 16 16:54:05.134466 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 16 16:54:05.134487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 16 16:54:05.134520 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 16 16:54:05.134542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 16 16:54:05.134570 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 16 16:54:05.134592 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 16 16:54:05.134620 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 16 16:54:05.134647 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 16 16:54:05.134675 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 16 16:54:05.138837 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 16 16:54:05.139944 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 16 16:54:05.139990 systemd[1]: Reached target slices.target - Slice Units. Dec 16 16:54:05.140013 systemd[1]: Reached target swap.target - Swaps. Dec 16 16:54:05.140040 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 16 16:54:05.140062 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 16 16:54:05.140122 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Dec 16 16:54:05.140157 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 16 16:54:05.140185 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 16 16:54:05.140212 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 16 16:54:05.140233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 16 16:54:05.140253 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 16 16:54:05.140292 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 16 16:54:05.140314 systemd[1]: Mounting media.mount - External Media Directory... Dec 16 16:54:05.140336 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:05.140356 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 16 16:54:05.140378 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 16 16:54:05.140406 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 16 16:54:05.140428 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 16 16:54:05.140448 systemd[1]: Reached target machines.target - Containers. Dec 16 16:54:05.140482 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Dec 16 16:54:05.140504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:54:05.140525 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 16 16:54:05.140552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 16 16:54:05.140573 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:54:05.140599 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 16:54:05.140619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:54:05.140640 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 16 16:54:05.140677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 16:54:05.140700 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 16 16:54:05.140721 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 16 16:54:05.140741 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 16 16:54:05.140774 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 16 16:54:05.147857 systemd[1]: Stopped systemd-fsck-usr.service. Dec 16 16:54:05.147919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:54:05.147955 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 16 16:54:05.147995 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 16 16:54:05.148018 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 16 16:54:05.148048 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 16 16:54:05.148070 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Dec 16 16:54:05.148108 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 16 16:54:05.148146 systemd[1]: verity-setup.service: Deactivated successfully. Dec 16 16:54:05.148169 systemd[1]: Stopped verity-setup.service. Dec 16 16:54:05.148190 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:05.148217 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 16 16:54:05.148302 systemd-journald[1200]: Collecting audit messages is disabled. Dec 16 16:54:05.148392 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 16 16:54:05.148418 systemd[1]: Mounted media.mount - External Media Directory. Dec 16 16:54:05.148441 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 16 16:54:05.148469 kernel: fuse: init (API version 7.41) Dec 16 16:54:05.148498 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 16 16:54:05.148537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 16 16:54:05.148560 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 16 16:54:05.148581 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Dec 16 16:54:05.148614 kernel: loop: module loaded Dec 16 16:54:05.148655 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 16 16:54:05.148677 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:54:05.148708 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 16:54:05.148736 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:54:05.148771 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:54:05.148813 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 16 16:54:05.148847 systemd-journald[1200]: Journal started Dec 16 16:54:05.148895 systemd-journald[1200]: Runtime Journal (/run/log/journal/6522ab67f60d4d39a71cfe792b489154) is 4.7M, max 37.8M, 33.1M free. Dec 16 16:54:04.718335 systemd[1]: Queued start job for default target multi-user.target. Dec 16 16:54:04.728394 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 16 16:54:05.166951 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 16 16:54:05.167028 systemd[1]: Started systemd-journald.service - Journal Service. Dec 16 16:54:04.729251 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 16 16:54:05.155296 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 16:54:05.155585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 16:54:05.156710 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 16 16:54:05.164904 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 16 16:54:05.168926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 16 16:54:05.169701 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 16 16:54:05.169748 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 16 16:54:05.173561 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Dec 16 16:54:05.176598 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 16 16:54:05.177533 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:54:05.186488 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 16 16:54:05.190026 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 16 16:54:05.190852 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 16:54:05.197968 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 16 16:54:05.198853 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 16 16:54:05.221949 systemd-journald[1200]: Time spent on flushing to /var/log/journal/6522ab67f60d4d39a71cfe792b489154 is 172.760ms for 1153 entries. Dec 16 16:54:05.221949 systemd-journald[1200]: System Journal (/var/log/journal/6522ab67f60d4d39a71cfe792b489154) is 8M, max 584.8M, 576.8M free. Dec 16 16:54:05.446173 systemd-journald[1200]: Received client request to flush runtime journal. 
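
The journald self-reports above ("Time spent on flushing ... is 172.760ms for 1153 entries" and a runtime journal that "is 4.7M, max 37.8M, 33.1M free") allow two quick sanity checks, shown below with only the numbers from the log:

    flush_ms, entries = 172.760, 1153
    print(f"{flush_ms / entries:.3f} ms per flushed entry")   # ~0.150 ms

    runtime_max_mib, runtime_free_mib = 37.8, 33.1
    print(f"runtime journal in use: {runtime_max_mib - runtime_free_mib:.1f} MiB")  # 4.7 MiB, as reported
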
Dec 16 16:54:05.446253 kernel: ACPI: bus type drm_connector registered Dec 16 16:54:05.446304 kernel: loop0: detected capacity change from 0 to 128560 Dec 16 16:54:05.446340 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 16 16:54:05.446374 kernel: loop1: detected capacity change from 0 to 8 Dec 16 16:54:05.230081 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 16 16:54:05.235160 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 16 16:54:05.241959 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 16 16:54:05.243201 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 16 16:54:05.245399 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Dec 16 16:54:05.246447 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 16 16:54:05.247657 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 16 16:54:05.256941 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 16 16:54:05.258968 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 16 16:54:05.281637 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 16 16:54:05.283156 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 16 16:54:05.288049 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Dec 16 16:54:05.297117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 16 16:54:05.325169 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 16:54:05.325530 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 16:54:05.403687 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Dec 16 16:54:05.416979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 16 16:54:05.418515 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 16 16:54:05.418536 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Dec 16 16:54:05.436499 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 16 16:54:05.441974 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 16 16:54:05.453483 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 16 16:54:05.493078 kernel: loop2: detected capacity change from 0 to 110984 Dec 16 16:54:05.544887 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 16 16:54:05.566696 kernel: loop3: detected capacity change from 0 to 224512 Dec 16 16:54:05.573410 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 16 16:54:05.578136 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 16 16:54:05.611493 kernel: loop4: detected capacity change from 0 to 128560 Dec 16 16:54:05.646705 kernel: loop5: detected capacity change from 0 to 8 Dec 16 16:54:05.653234 kernel: loop6: detected capacity change from 0 to 110984 Dec 16 16:54:05.665203 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. Dec 16 16:54:05.665231 systemd-tmpfiles[1271]: ACLs are not supported, ignoring. 
Dec 16 16:54:05.680304 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 16 16:54:05.698324 kernel: loop7: detected capacity change from 0 to 224512 Dec 16 16:54:05.733131 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 16 16:54:05.739144 (sd-merge)[1273]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 16 16:54:05.741156 (sd-merge)[1273]: Merged extensions into '/usr'. Dec 16 16:54:05.756639 systemd[1]: Reload requested from client PID 1227 ('systemd-sysext') (unit systemd-sysext.service)... Dec 16 16:54:05.756670 systemd[1]: Reloading... Dec 16 16:54:05.989881 zram_generator::config[1301]: No configuration found. Dec 16 16:54:06.107404 ldconfig[1223]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 16 16:54:06.327188 systemd[1]: Reloading finished in 569 ms. Dec 16 16:54:06.371221 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 16 16:54:06.372763 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 16 16:54:06.374050 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 16 16:54:06.385858 systemd[1]: Starting ensure-sysext.service... Dec 16 16:54:06.389092 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 16 16:54:06.397310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 16 16:54:06.417605 systemd[1]: Reload requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)... Dec 16 16:54:06.417631 systemd[1]: Reloading... Dec 16 16:54:06.441351 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Dec 16 16:54:06.441683 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Dec 16 16:54:06.442285 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 16 16:54:06.442739 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 16 16:54:06.445182 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 16 16:54:06.445592 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Dec 16 16:54:06.445752 systemd-tmpfiles[1359]: ACLs are not supported, ignoring. Dec 16 16:54:06.456048 systemd-udevd[1360]: Using default interface naming scheme 'v255'. Dec 16 16:54:06.460460 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 16:54:06.460483 systemd-tmpfiles[1359]: Skipping /boot Dec 16 16:54:06.492419 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot. Dec 16 16:54:06.492441 systemd-tmpfiles[1359]: Skipping /boot Dec 16 16:54:06.562872 zram_generator::config[1382]: No configuration found. Dec 16 16:54:06.973835 kernel: mousedev: PS/2 mouse device common for all mice Dec 16 16:54:07.014653 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 16 16:54:07.014962 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Dec 16 16:54:07.015989 systemd[1]: Reloading finished in 597 ms. 
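
The (sd-merge) lines above show systemd-sysext discovering four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') and overlaying them onto /usr, after which systemd reloads so the merged units become visible. Below is a loose sketch of the discovery half, scanning only /etc/extensions, which is the directory this log actually populates (systemd-sysext also searches other hierarchy directories that are omitted here); purely illustrative.

    from pathlib import Path

    EXTENSIONS_DIR = Path("/etc/extensions")   # where Ignition placed kubernetes.raw earlier

    def candidate_extensions():
        # systemd-sysext treats *.raw images (and directories) under its search
        # paths as extension candidates; this sketch only looks at /etc/extensions.
        if not EXTENSIONS_DIR.is_dir():
            return []
        names = set()
        for entry in EXTENSIONS_DIR.iterdir():
            if entry.suffix == ".raw" or entry.is_dir():
                names.add(entry.name.removesuffix(".raw"))
        return sorted(names)

    print(candidate_extensions())   # e.g. ['kubernetes'] on a host provisioned like this one
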
Dec 16 16:54:07.028619 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 16 16:54:07.034821 kernel: ACPI: button: Power Button [PWRF] Dec 16 16:54:07.033848 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 16 16:54:07.094739 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 16 16:54:07.097490 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.101067 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 16 16:54:07.105130 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 16 16:54:07.106176 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:54:07.109338 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:54:07.115446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:54:07.124179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 16 16:54:07.126050 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:54:07.127791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 16 16:54:07.132833 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 16 16:54:07.129879 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:54:07.131882 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 16 16:54:07.136817 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 16 16:54:07.163530 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 16 16:54:07.175820 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 16 16:54:07.183607 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 16 16:54:07.184748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.190019 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:54:07.191241 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:54:07.200725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:54:07.203048 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 16:54:07.206116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.208181 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:54:07.217085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 16 16:54:07.218028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Dec 16 16:54:07.218829 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:54:07.219009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.225980 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.226353 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 16 16:54:07.247347 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 16 16:54:07.252957 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 16 16:54:07.253961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 16 16:54:07.254133 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Dec 16 16:54:07.254341 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 16 16:54:07.255988 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 16 16:54:07.256340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 16 16:54:07.257769 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 16 16:54:07.268323 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 16 16:54:07.270509 systemd[1]: Finished ensure-sysext.service. Dec 16 16:54:07.285236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 16 16:54:07.291889 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 16 16:54:07.319267 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 16 16:54:07.328298 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 16 16:54:07.346921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 16 16:54:07.348104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 16 16:54:07.354379 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 16 16:54:07.361148 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 16 16:54:07.365057 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 16 16:54:07.367442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 16 16:54:07.367889 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 16 16:54:07.370291 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Dec 16 16:54:07.370908 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 16 16:54:07.371912 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 16 16:54:07.391053 augenrules[1532]: No rules Dec 16 16:54:07.422175 systemd[1]: audit-rules.service: Deactivated successfully. Dec 16 16:54:07.423919 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 16 16:54:07.432847 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 16 16:54:07.464217 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 16 16:54:07.490923 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 16 16:54:07.749758 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 16 16:54:07.786898 systemd-networkd[1482]: lo: Link UP Dec 16 16:54:07.786913 systemd-networkd[1482]: lo: Gained carrier Dec 16 16:54:07.797410 systemd-networkd[1482]: Enumeration completed Dec 16 16:54:07.801109 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:54:07.801128 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 16 16:54:07.806931 systemd-networkd[1482]: eth0: Link UP Dec 16 16:54:07.808494 systemd-networkd[1482]: eth0: Gained carrier Dec 16 16:54:07.808884 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 16 16:54:07.836891 systemd-networkd[1482]: eth0: DHCPv4 address 10.230.10.122/30, gateway 10.230.10.121 acquired from 10.230.10.121 Dec 16 16:54:07.840511 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Dec 16 16:54:07.847434 systemd-resolved[1490]: Positive Trust Anchors: Dec 16 16:54:07.848850 systemd-resolved[1490]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 16 16:54:07.848896 systemd-resolved[1490]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 16 16:54:07.865360 systemd-resolved[1490]: Using system hostname 'srv-jrcza.gb1.brightbox.com'. Dec 16 16:54:07.884695 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 16 16:54:07.885627 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 16 16:54:07.886956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 16 16:54:07.888714 systemd[1]: Reached target network.target - Network. Dec 16 16:54:07.889485 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 16 16:54:07.890715 systemd[1]: Reached target sysinit.target - System Initialization. Dec 16 16:54:07.891646 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 16 16:54:07.892725 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Dec 16 16:54:07.893517 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Dec 16 16:54:07.894281 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 16 16:54:07.895061 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 16 16:54:07.895113 systemd[1]: Reached target paths.target - Path Units. Dec 16 16:54:07.895746 systemd[1]: Reached target time-set.target - System Time Set. Dec 16 16:54:07.896738 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 16 16:54:07.897645 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 16 16:54:07.898429 systemd[1]: Reached target timers.target - Timer Units. Dec 16 16:54:07.900989 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 16 16:54:07.904095 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 16 16:54:07.908401 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Dec 16 16:54:07.909538 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Dec 16 16:54:07.910349 systemd[1]: Reached target ssh-access.target - SSH Access Available. Dec 16 16:54:07.919736 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 16 16:54:07.921825 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Dec 16 16:54:07.924973 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Dec 16 16:54:07.931002 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 16 16:54:07.933976 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 16 16:54:07.938036 systemd[1]: Reached target sockets.target - Socket Units. Dec 16 16:54:07.938878 systemd[1]: Reached target basic.target - Basic System. Dec 16 16:54:07.939641 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 16 16:54:07.939750 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 16 16:54:07.946635 systemd[1]: Starting containerd.service - containerd container runtime... Dec 16 16:54:07.953024 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 16 16:54:07.958053 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 16 16:54:07.961100 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 16 16:54:07.967294 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 16 16:54:07.972318 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 16 16:54:07.973140 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 16 16:54:07.983057 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Dec 16 16:54:07.990039 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 16 16:54:08.001039 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Dec 16 16:54:08.009842 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:08.007178 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 16 16:54:08.011086 jq[1564]: false Dec 16 16:54:08.016293 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 16 16:54:08.019830 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 16:54:08.017768 oslogin_cache_refresh[1566]: Refreshing passwd entry cache Dec 16 16:54:08.028859 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 16 16:54:08.032356 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 16 16:54:08.033340 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 16 16:54:08.037636 systemd[1]: Starting update-engine.service - Update Engine... Dec 16 16:54:08.044028 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 16 16:54:08.046210 extend-filesystems[1565]: Found /dev/vda6 Dec 16 16:54:08.048470 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 16:54:08.048470 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 16:54:08.048470 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 16:54:08.047561 oslogin_cache_refresh[1566]: Failure getting users, quitting Dec 16 16:54:08.047602 oslogin_cache_refresh[1566]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Dec 16 16:54:08.047713 oslogin_cache_refresh[1566]: Refreshing group entry cache Dec 16 16:54:08.050848 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 16:54:08.050848 google_oslogin_nss_cache[1566]: oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 16:54:08.049583 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Dec 16 16:54:08.049420 oslogin_cache_refresh[1566]: Failure getting groups, quitting Dec 16 16:54:08.049436 oslogin_cache_refresh[1566]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Dec 16 16:54:08.055881 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 16 16:54:08.057201 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 16 16:54:08.059936 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 16 16:54:08.060521 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Dec 16 16:54:08.061002 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Dec 16 16:54:08.065370 extend-filesystems[1565]: Found /dev/vda9 Dec 16 16:54:08.078294 extend-filesystems[1565]: Checking size of /dev/vda9 Dec 16 16:54:08.083613 update_engine[1576]: I20251216 16:54:08.083462 1576 main.cc:92] Flatcar Update Engine starting Dec 16 16:54:08.130530 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 16 16:54:08.130992 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Dec 16 16:54:08.139133 jq[1577]: true Dec 16 16:54:08.158832 extend-filesystems[1565]: Resized partition /dev/vda9 Dec 16 16:54:08.164365 extend-filesystems[1607]: resize2fs 1.47.3 (8-Jul-2025) Dec 16 16:54:08.173734 dbus-daemon[1562]: [system] SELinux support is enabled Dec 16 16:54:08.172400 systemd[1]: motdgen.service: Deactivated successfully. Dec 16 16:54:08.172877 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 16 16:54:08.179049 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 16 16:54:08.187776 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 16 16:54:08.188218 (ntainerd)[1604]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 16 16:54:08.189089 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 16 16:54:08.202466 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 16 16:54:08.202522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 16 16:54:08.205617 tar[1589]: linux-amd64/LICENSE Dec 16 16:54:08.206243 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 16 16:54:08.207206 tar[1589]: linux-amd64/helm Dec 16 16:54:08.231785 jq[1606]: true Dec 16 16:54:08.231073 dbus-daemon[1562]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1482 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 16 16:54:08.244202 update_engine[1576]: I20251216 16:54:08.243021 1576 update_check_scheduler.cc:74] Next update check in 9m18s Dec 16 16:54:08.245754 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 16 16:54:08.254308 systemd[1]: Started update-engine.service - Update Engine. Dec 16 16:54:08.280555 systemd-logind[1574]: Watching system buttons on /dev/input/event3 (Power Button) Dec 16 16:54:08.280593 systemd-logind[1574]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 16 16:54:08.282683 systemd-logind[1574]: New seat seat0. Dec 16 16:54:08.286882 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 16 16:54:08.293968 systemd[1]: Started systemd-logind.service - User Login Management. Dec 16 16:54:08.559635 bash[1628]: Updated "/home/core/.ssh/authorized_keys" Dec 16 16:54:08.563773 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 16 16:54:08.575123 systemd[1]: Starting sshkeys.service... Dec 16 16:54:08.578931 sshd_keygen[1594]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 16 16:54:08.657440 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 16 16:54:08.662172 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 16 16:54:08.696845 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 16 16:54:08.701206 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:08.707397 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Dec 16 16:54:08.715492 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 16 16:54:08.725577 extend-filesystems[1607]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 16 16:54:08.725577 extend-filesystems[1607]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 16 16:54:08.725577 extend-filesystems[1607]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 16 16:54:08.724858 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 16 16:54:08.730747 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 16 16:54:08.744445 extend-filesystems[1565]: Resized filesystem in /dev/vda9 Dec 16 16:54:08.734458 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 16 16:54:08.732928 dbus-daemon[1562]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1611 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 16 16:54:08.736057 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 16 16:54:08.756033 systemd[1]: Starting polkit.service - Authorization Manager... Dec 16 16:54:08.775489 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 16 16:54:08.798367 systemd[1]: issuegen.service: Deactivated successfully. Dec 16 16:54:08.799948 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 16 16:54:08.810983 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 16 16:54:08.841257 containerd[1604]: time="2025-12-16T16:54:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Dec 16 16:54:08.842840 containerd[1604]: time="2025-12-16T16:54:08.842061021Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.862450062Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="28.433µs" Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.862518082Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.862560045Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.862916577Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.862955530Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863017445Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863128339Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863156862Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs 
type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863461490Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863484024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863502848Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Dec 16 16:54:08.864463 containerd[1604]: time="2025-12-16T16:54:08.863518091Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.863668756Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.864145481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.864201528Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.864225392Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.864300437Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.864926249Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Dec 16 16:54:08.865138 containerd[1604]: time="2025-12-16T16:54:08.865033799Z" level=info msg="metadata content store policy set" policy=shared Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869272951Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869393163Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869424410Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869532387Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869563812Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869582878Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869606088Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service 
type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869637191Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869671661Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869693060Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869712924Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Dec 16 16:54:08.869848 containerd[1604]: time="2025-12-16T16:54:08.869735189Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.869952526Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.869997821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870037147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870062273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870082918Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870104004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870123484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870148958Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870171709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870190444Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Dec 16 16:54:08.870317 containerd[1604]: time="2025-12-16T16:54:08.870209121Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Dec 16 16:54:08.870751 containerd[1604]: time="2025-12-16T16:54:08.870338083Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Dec 16 16:54:08.870751 containerd[1604]: time="2025-12-16T16:54:08.870371189Z" level=info msg="Start snapshots syncer" Dec 16 16:54:08.870751 containerd[1604]: time="2025-12-16T16:54:08.870455804Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Dec 16 16:54:08.874832 containerd[1604]: time="2025-12-16T16:54:08.871048827Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Dec 16 16:54:08.874832 containerd[1604]: time="2025-12-16T16:54:08.871136594Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871245404Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871473080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871505908Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871524712Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871542876Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871565809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871584840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871603700Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871665642Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: 
time="2025-12-16T16:54:08.871690849Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871710044Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871783334Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871916872Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Dec 16 16:54:08.875175 containerd[1604]: time="2025-12-16T16:54:08.871935982Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.871954703Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.871968985Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.871986042Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872020670Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872062234Z" level=info msg="runtime interface created" Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872074852Z" level=info msg="created NRI interface" Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872094265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872113825Z" level=info msg="Connect containerd service" Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.872152599Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 16 16:54:08.875647 containerd[1604]: time="2025-12-16T16:54:08.873735438Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 16 16:54:08.893073 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 16 16:54:08.901020 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 16 16:54:08.907462 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 16 16:54:08.910292 systemd[1]: Reached target getty.target - Login Prompts. 
Dec 16 16:54:09.045955 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:09.051880 polkitd[1655]: Started polkitd version 126 Dec 16 16:54:09.071060 polkitd[1655]: Loading rules from directory /etc/polkit-1/rules.d Dec 16 16:54:09.071579 polkitd[1655]: Loading rules from directory /run/polkit-1/rules.d Dec 16 16:54:09.071695 polkitd[1655]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 16:54:09.072094 polkitd[1655]: Loading rules from directory /usr/local/share/polkit-1/rules.d Dec 16 16:54:09.072144 polkitd[1655]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Dec 16 16:54:09.072203 polkitd[1655]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 16 16:54:09.074083 polkitd[1655]: Finished loading, compiling and executing 2 rules Dec 16 16:54:09.075042 systemd[1]: Started polkit.service - Authorization Manager. Dec 16 16:54:09.076273 dbus-daemon[1562]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 16 16:54:09.077150 polkitd[1655]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 16 16:54:09.088253 containerd[1604]: time="2025-12-16T16:54:09.088169149Z" level=info msg="Start subscribing containerd event" Dec 16 16:54:09.088424 containerd[1604]: time="2025-12-16T16:54:09.088272293Z" level=info msg="Start recovering state" Dec 16 16:54:09.088508 containerd[1604]: time="2025-12-16T16:54:09.088478594Z" level=info msg="Start event monitor" Dec 16 16:54:09.088585 containerd[1604]: time="2025-12-16T16:54:09.088507618Z" level=info msg="Start cni network conf syncer for default" Dec 16 16:54:09.088585 containerd[1604]: time="2025-12-16T16:54:09.088527541Z" level=info msg="Start streaming server" Dec 16 16:54:09.088585 containerd[1604]: time="2025-12-16T16:54:09.088561071Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Dec 16 16:54:09.088585 containerd[1604]: time="2025-12-16T16:54:09.088578540Z" level=info msg="runtime interface starting up..." Dec 16 16:54:09.088816 containerd[1604]: time="2025-12-16T16:54:09.088594253Z" level=info msg="starting plugins..." Dec 16 16:54:09.088816 containerd[1604]: time="2025-12-16T16:54:09.088634481Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Dec 16 16:54:09.089274 containerd[1604]: time="2025-12-16T16:54:09.089233058Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 16 16:54:09.089741 containerd[1604]: time="2025-12-16T16:54:09.089565195Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 16 16:54:09.090032 containerd[1604]: time="2025-12-16T16:54:09.089910933Z" level=info msg="containerd successfully booted in 0.249375s" Dec 16 16:54:09.090249 systemd[1]: Started containerd.service - containerd container runtime. Dec 16 16:54:09.112059 systemd-hostnamed[1611]: Hostname set to (static) Dec 16 16:54:09.268406 tar[1589]: linux-amd64/README.md Dec 16 16:54:09.298375 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 16 16:54:09.302050 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 16 16:54:09.304761 systemd[1]: Started sshd@0-10.230.10.122:22-139.178.68.195:43422.service - OpenSSH per-connection server daemon (139.178.68.195:43422). 
Dec 16 16:54:09.730852 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:09.760080 systemd-networkd[1482]: eth0: Gained IPv6LL Dec 16 16:54:09.761963 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Dec 16 16:54:09.764612 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 16 16:54:09.767075 systemd[1]: Reached target network-online.target - Network is Online. Dec 16 16:54:09.771192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:09.775982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 16 16:54:09.815735 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 16 16:54:10.260043 sshd[1690]: Accepted publickey for core from 139.178.68.195 port 43422 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:10.263474 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:10.279084 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 16 16:54:10.283420 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 16 16:54:10.318185 systemd-logind[1574]: New session 1 of user core. Dec 16 16:54:10.327536 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 16 16:54:10.337120 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 16 16:54:10.356825 (systemd)[1708]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 16 16:54:10.364597 systemd-logind[1574]: New session c1 of user core. Dec 16 16:54:10.565754 systemd[1708]: Queued start job for default target default.target. Dec 16 16:54:10.572585 systemd[1708]: Created slice app.slice - User Application Slice. Dec 16 16:54:10.572847 systemd[1708]: Reached target paths.target - Paths. Dec 16 16:54:10.573117 systemd[1708]: Reached target timers.target - Timers. Dec 16 16:54:10.577931 systemd[1708]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 16 16:54:10.598130 systemd[1708]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 16 16:54:10.599766 systemd[1708]: Reached target sockets.target - Sockets. Dec 16 16:54:10.600059 systemd[1708]: Reached target basic.target - Basic System. Dec 16 16:54:10.600327 systemd[1708]: Reached target default.target - Main User Target. Dec 16 16:54:10.600368 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 16 16:54:10.600637 systemd[1708]: Startup finished in 220ms. Dec 16 16:54:10.609196 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 16 16:54:10.800667 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Dec 16 16:54:10.802982 systemd-networkd[1482]: eth0: Ignoring DHCPv6 address 2a02:1348:179:829e:24:19ff:fee6:a7a/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:829e:24:19ff:fee6:a7a/64 assigned by NDisc. Dec 16 16:54:10.802996 systemd-networkd[1482]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 16 16:54:10.972998 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 16:54:10.988571 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:54:11.075842 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:11.262260 systemd[1]: Started sshd@1-10.230.10.122:22-139.178.68.195:59890.service - OpenSSH per-connection server daemon (139.178.68.195:59890). Dec 16 16:54:11.746832 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:11.752810 kubelet[1724]: E1216 16:54:11.752630 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:54:11.755467 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:54:11.755757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:54:11.756810 systemd[1]: kubelet.service: Consumed 1.151s CPU time, 265.1M memory peak. Dec 16 16:54:12.129115 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Dec 16 16:54:12.196908 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 59890 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:12.199193 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:12.208876 systemd-logind[1574]: New session 2 of user core. Dec 16 16:54:12.217152 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 16 16:54:12.828238 sshd[1738]: Connection closed by 139.178.68.195 port 59890 Dec 16 16:54:12.829302 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Dec 16 16:54:12.836332 systemd[1]: sshd@1-10.230.10.122:22-139.178.68.195:59890.service: Deactivated successfully. Dec 16 16:54:12.839365 systemd[1]: session-2.scope: Deactivated successfully. Dec 16 16:54:12.840657 systemd-logind[1574]: Session 2 logged out. Waiting for processes to exit. Dec 16 16:54:12.843273 systemd-logind[1574]: Removed session 2. Dec 16 16:54:12.989881 systemd[1]: Started sshd@2-10.230.10.122:22-139.178.68.195:59902.service - OpenSSH per-connection server daemon (139.178.68.195:59902). Dec 16 16:54:13.930309 sshd[1744]: Accepted publickey for core from 139.178.68.195 port 59902 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:13.932555 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:13.940659 systemd-logind[1574]: New session 3 of user core. Dec 16 16:54:13.951178 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 16 16:54:14.030315 login[1667]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 16:54:14.038861 systemd-logind[1574]: New session 4 of user core. Dec 16 16:54:14.045601 login[1666]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 16 16:54:14.050066 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 16 16:54:14.062862 systemd-logind[1574]: New session 5 of user core. Dec 16 16:54:14.069197 systemd[1]: Started session-5.scope - Session 5 of User core. 
Dec 16 16:54:14.559614 sshd[1747]: Connection closed by 139.178.68.195 port 59902 Dec 16 16:54:14.560623 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Dec 16 16:54:14.567080 systemd[1]: sshd@2-10.230.10.122:22-139.178.68.195:59902.service: Deactivated successfully. Dec 16 16:54:14.570551 systemd[1]: session-3.scope: Deactivated successfully. Dec 16 16:54:14.572887 systemd-logind[1574]: Session 3 logged out. Waiting for processes to exit. Dec 16 16:54:14.574611 systemd-logind[1574]: Removed session 3. Dec 16 16:54:15.087860 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:15.104630 coreos-metadata[1561]: Dec 16 16:54:15.104 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:54:15.130225 coreos-metadata[1561]: Dec 16 16:54:15.130 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 16 16:54:15.137852 coreos-metadata[1561]: Dec 16 16:54:15.137 INFO Fetch failed with 404: resource not found Dec 16 16:54:15.137852 coreos-metadata[1561]: Dec 16 16:54:15.137 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 16 16:54:15.138657 coreos-metadata[1561]: Dec 16 16:54:15.138 INFO Fetch successful Dec 16 16:54:15.138868 coreos-metadata[1561]: Dec 16 16:54:15.138 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 16 16:54:15.150998 coreos-metadata[1561]: Dec 16 16:54:15.150 INFO Fetch successful Dec 16 16:54:15.150998 coreos-metadata[1561]: Dec 16 16:54:15.150 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 16 16:54:15.169117 coreos-metadata[1561]: Dec 16 16:54:15.168 INFO Fetch successful Dec 16 16:54:15.169308 coreos-metadata[1561]: Dec 16 16:54:15.169 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 16 16:54:15.183823 coreos-metadata[1561]: Dec 16 16:54:15.183 INFO Fetch successful Dec 16 16:54:15.183970 coreos-metadata[1561]: Dec 16 16:54:15.183 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 16 16:54:15.211729 coreos-metadata[1561]: Dec 16 16:54:15.211 INFO Fetch successful Dec 16 16:54:15.248397 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 16 16:54:15.250247 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 16 16:54:15.763839 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Dec 16 16:54:15.775231 coreos-metadata[1645]: Dec 16 16:54:15.775 WARN failed to locate config-drive, using the metadata service API instead Dec 16 16:54:15.799293 coreos-metadata[1645]: Dec 16 16:54:15.799 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 16 16:54:15.824711 coreos-metadata[1645]: Dec 16 16:54:15.824 INFO Fetch successful Dec 16 16:54:15.824711 coreos-metadata[1645]: Dec 16 16:54:15.824 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 16 16:54:15.858990 coreos-metadata[1645]: Dec 16 16:54:15.858 INFO Fetch successful Dec 16 16:54:15.861479 unknown[1645]: wrote ssh authorized keys file for user: core Dec 16 16:54:15.890295 update-ssh-keys[1786]: Updated "/home/core/.ssh/authorized_keys" Dec 16 16:54:15.891437 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 16 16:54:15.894596 systemd[1]: Finished sshkeys.service. 
Dec 16 16:54:15.897901 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 16 16:54:15.898119 systemd[1]: Startup finished in 3.648s (kernel) + 15.166s (initrd) + 12.130s (userspace) = 30.945s. Dec 16 16:54:21.794713 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 16 16:54:21.797633 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:22.002604 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:22.013640 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:54:22.118608 kubelet[1798]: E1216 16:54:22.118388 1798 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:54:22.123344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:54:22.123609 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:54:22.124227 systemd[1]: kubelet.service: Consumed 242ms CPU time, 110.7M memory peak. Dec 16 16:54:24.724472 systemd[1]: Started sshd@3-10.230.10.122:22-139.178.68.195:42646.service - OpenSSH per-connection server daemon (139.178.68.195:42646). Dec 16 16:54:25.665215 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 42646 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:25.667310 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:25.675027 systemd-logind[1574]: New session 6 of user core. Dec 16 16:54:25.690133 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 16 16:54:26.296869 sshd[1809]: Connection closed by 139.178.68.195 port 42646 Dec 16 16:54:26.298046 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Dec 16 16:54:26.304257 systemd[1]: sshd@3-10.230.10.122:22-139.178.68.195:42646.service: Deactivated successfully. Dec 16 16:54:26.306652 systemd[1]: session-6.scope: Deactivated successfully. Dec 16 16:54:26.307976 systemd-logind[1574]: Session 6 logged out. Waiting for processes to exit. Dec 16 16:54:26.309963 systemd-logind[1574]: Removed session 6. Dec 16 16:54:26.454660 systemd[1]: Started sshd@4-10.230.10.122:22-139.178.68.195:42656.service - OpenSSH per-connection server daemon (139.178.68.195:42656). Dec 16 16:54:27.386885 sshd[1815]: Accepted publickey for core from 139.178.68.195 port 42656 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:27.389054 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:27.397746 systemd-logind[1574]: New session 7 of user core. Dec 16 16:54:27.400021 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 16 16:54:28.019827 sshd[1818]: Connection closed by 139.178.68.195 port 42656 Dec 16 16:54:28.020672 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Dec 16 16:54:28.025621 systemd-logind[1574]: Session 7 logged out. Waiting for processes to exit. Dec 16 16:54:28.026781 systemd[1]: sshd@4-10.230.10.122:22-139.178.68.195:42656.service: Deactivated successfully. Dec 16 16:54:28.029031 systemd[1]: session-7.scope: Deactivated successfully. 
Dec 16 16:54:28.031638 systemd-logind[1574]: Removed session 7. Dec 16 16:54:28.179891 systemd[1]: Started sshd@5-10.230.10.122:22-139.178.68.195:42658.service - OpenSSH per-connection server daemon (139.178.68.195:42658). Dec 16 16:54:29.093698 sshd[1824]: Accepted publickey for core from 139.178.68.195 port 42658 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:29.095471 sshd-session[1824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:29.103397 systemd-logind[1574]: New session 8 of user core. Dec 16 16:54:29.116017 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 16 16:54:29.720826 sshd[1827]: Connection closed by 139.178.68.195 port 42658 Dec 16 16:54:29.721981 sshd-session[1824]: pam_unix(sshd:session): session closed for user core Dec 16 16:54:29.728985 systemd[1]: sshd@5-10.230.10.122:22-139.178.68.195:42658.service: Deactivated successfully. Dec 16 16:54:29.731669 systemd[1]: session-8.scope: Deactivated successfully. Dec 16 16:54:29.733185 systemd-logind[1574]: Session 8 logged out. Waiting for processes to exit. Dec 16 16:54:29.735043 systemd-logind[1574]: Removed session 8. Dec 16 16:54:29.878281 systemd[1]: Started sshd@6-10.230.10.122:22-139.178.68.195:48080.service - OpenSSH per-connection server daemon (139.178.68.195:48080). Dec 16 16:54:30.797274 sshd[1833]: Accepted publickey for core from 139.178.68.195 port 48080 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM Dec 16 16:54:30.799281 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 16 16:54:30.806752 systemd-logind[1574]: New session 9 of user core. Dec 16 16:54:30.819411 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 16 16:54:31.292729 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 16 16:54:31.293186 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 16 16:54:31.817974 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 16 16:54:31.835374 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 16 16:54:32.214898 dockerd[1855]: time="2025-12-16T16:54:32.214677202Z" level=info msg="Starting up" Dec 16 16:54:32.215946 dockerd[1855]: time="2025-12-16T16:54:32.215912086Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Dec 16 16:54:32.219715 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 16 16:54:32.223143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:32.240834 dockerd[1855]: time="2025-12-16T16:54:32.240259008Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Dec 16 16:54:32.267854 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3819488962-merged.mount: Deactivated successfully. Dec 16 16:54:32.279537 systemd[1]: var-lib-docker-metacopy\x2dcheck2612689799-merged.mount: Deactivated successfully. Dec 16 16:54:32.520562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 16:54:32.531296 (kubelet)[1887]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:54:32.549126 dockerd[1855]: time="2025-12-16T16:54:32.549072654Z" level=info msg="Loading containers: start." Dec 16 16:54:32.566847 kernel: Initializing XFRM netlink socket Dec 16 16:54:32.600198 kubelet[1887]: E1216 16:54:32.600134 1887 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:54:32.602769 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:54:32.603053 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:54:32.603721 systemd[1]: kubelet.service: Consumed 208ms CPU time, 110M memory peak. Dec 16 16:54:32.875401 systemd-timesyncd[1516]: Network configuration changed, trying to establish connection. Dec 16 16:54:32.950675 systemd-networkd[1482]: docker0: Link UP Dec 16 16:54:32.956632 dockerd[1855]: time="2025-12-16T16:54:32.956567338Z" level=info msg="Loading containers: done." Dec 16 16:54:32.985156 dockerd[1855]: time="2025-12-16T16:54:32.985102657Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 16 16:54:32.985375 dockerd[1855]: time="2025-12-16T16:54:32.985216064Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Dec 16 16:54:32.985375 dockerd[1855]: time="2025-12-16T16:54:32.985357917Z" level=info msg="Initializing buildkit" Dec 16 16:54:33.013681 dockerd[1855]: time="2025-12-16T16:54:33.013631536Z" level=info msg="Completed buildkit initialization" Dec 16 16:54:33.023229 dockerd[1855]: time="2025-12-16T16:54:33.023181355Z" level=info msg="Daemon has completed initialization" Dec 16 16:54:33.023853 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 16 16:54:33.025230 dockerd[1855]: time="2025-12-16T16:54:33.023513845Z" level=info msg="API listen on /run/docker.sock" Dec 16 16:54:33.939771 systemd-resolved[1490]: Clock change detected. Flushing caches. Dec 16 16:54:33.940424 systemd-timesyncd[1516]: Contacted time server [2a01:7e00::f03c:91ff:fe69:38e7]:123 (2.flatcar.pool.ntp.org). Dec 16 16:54:33.940503 systemd-timesyncd[1516]: Initial clock synchronization to Tue 2025-12-16 16:54:33.939236 UTC. Dec 16 16:54:34.744444 containerd[1604]: time="2025-12-16T16:54:34.744258994Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\"" Dec 16 16:54:35.753096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4034931724.mount: Deactivated successfully. 
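The PullImage lines that follow are the CRI (containerd) fetching the control-plane images ahead of use. A small sketch of how those pulls can be inspected from the host; the CRI socket path is an assumption, and crictl may instead read it from /etc/crictl.yaml:

  # images known to the CRI runtime
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
  # or ask containerd directly, in the namespace the CRI uses
  ctr -n k8s.io images ls | grep registry.k8s.io/kube-apiserver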
Dec 16 16:54:38.356291 containerd[1604]: time="2025-12-16T16:54:38.356191277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:38.357630 containerd[1604]: time="2025-12-16T16:54:38.357593169Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.10: active requests=0, bytes read=29072191" Dec 16 16:54:38.358727 containerd[1604]: time="2025-12-16T16:54:38.358687246Z" level=info msg="ImageCreate event name:\"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:38.363427 containerd[1604]: time="2025-12-16T16:54:38.363374104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:38.365572 containerd[1604]: time="2025-12-16T16:54:38.365534339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.10\" with image id \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:af4ee57c047e31a7f58422b94a9ec4c62221d3deebb16755bdeff720df796189\", size \"29068782\" in 3.621107034s" Dec 16 16:54:38.365706 containerd[1604]: time="2025-12-16T16:54:38.365679826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.10\" returns image reference \"sha256:77f8b0de97da9ee43e174b170c363c893ab69a20b03878e1bf6b54b10d44ef6f\"" Dec 16 16:54:38.366784 containerd[1604]: time="2025-12-16T16:54:38.366736662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\"" Dec 16 16:54:40.500149 containerd[1604]: time="2025-12-16T16:54:40.498377430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:40.500149 containerd[1604]: time="2025-12-16T16:54:40.499982754Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.10: active requests=0, bytes read=24992018" Dec 16 16:54:40.501900 containerd[1604]: time="2025-12-16T16:54:40.500934550Z" level=info msg="ImageCreate event name:\"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:40.503950 containerd[1604]: time="2025-12-16T16:54:40.503880591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:40.505523 containerd[1604]: time="2025-12-16T16:54:40.505483815Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.10\" with image id \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:efbd9d1dfcd2940e1c73a1476c880c3c2cdf04cc60722d329b21cd48745c8660\", size \"26649046\" in 2.138699841s" Dec 16 16:54:40.505681 containerd[1604]: time="2025-12-16T16:54:40.505647251Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.10\" returns image reference \"sha256:34e0beef266f1ca24c0093506853b1cc0ed91e873aeef655f39721813f10f924\"" Dec 16 
16:54:40.506800 containerd[1604]: time="2025-12-16T16:54:40.506667700Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\"" Dec 16 16:54:41.421553 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 16 16:54:42.213154 containerd[1604]: time="2025-12-16T16:54:42.211757738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:42.213154 containerd[1604]: time="2025-12-16T16:54:42.213090859Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.10: active requests=0, bytes read=19404256" Dec 16 16:54:42.213918 containerd[1604]: time="2025-12-16T16:54:42.213884833Z" level=info msg="ImageCreate event name:\"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:42.216760 containerd[1604]: time="2025-12-16T16:54:42.216725643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:42.218076 containerd[1604]: time="2025-12-16T16:54:42.218040982Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.10\" with image id \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9c58e1adcad5af66d1d9ca5cf9a4c266e4054b8f19f91a8fff1993549e657b10\", size \"21061302\" in 1.711312974s" Dec 16 16:54:42.218248 containerd[1604]: time="2025-12-16T16:54:42.218221050Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.10\" returns image reference \"sha256:fd6f6aae834c2ec73b534bc30902f1602089a8f4d1bbd8c521fe2b39968efe4a\"" Dec 16 16:54:42.218841 containerd[1604]: time="2025-12-16T16:54:42.218796926Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\"" Dec 16 16:54:43.365511 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 16 16:54:43.369494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:43.638191 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:43.651153 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:54:43.734501 kubelet[2162]: E1216 16:54:43.734436 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:54:43.739055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:54:43.739352 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:54:43.741652 systemd[1]: kubelet.service: Consumed 242ms CPU time, 108M memory peak. Dec 16 16:54:44.292506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176665809.mount: Deactivated successfully. 
Dec 16 16:54:45.069560 containerd[1604]: time="2025-12-16T16:54:45.068477236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:45.069560 containerd[1604]: time="2025-12-16T16:54:45.069498457Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.10: active requests=0, bytes read=31161431" Dec 16 16:54:45.070543 containerd[1604]: time="2025-12-16T16:54:45.070505210Z" level=info msg="ImageCreate event name:\"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:45.072978 containerd[1604]: time="2025-12-16T16:54:45.072945659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:45.073779 containerd[1604]: time="2025-12-16T16:54:45.073730748Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.10\" with image id \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\", repo tag \"registry.k8s.io/kube-proxy:v1.32.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e3dda1c7b384f9eb5b2fa1c27493b23b80e6204b9fa2ee8791b2de078f468cbf\", size \"31160442\" in 2.85464074s" Dec 16 16:54:45.073866 containerd[1604]: time="2025-12-16T16:54:45.073785767Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.10\" returns image reference \"sha256:db4bcdca85a39c02add2db5eed4fc6ab21eb20616fbf8cd2cf824e59e384a956\"" Dec 16 16:54:45.074573 containerd[1604]: time="2025-12-16T16:54:45.074537943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Dec 16 16:54:45.783999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278535131.mount: Deactivated successfully. 
Dec 16 16:54:47.905736 containerd[1604]: time="2025-12-16T16:54:47.905677307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:47.920643 containerd[1604]: time="2025-12-16T16:54:47.920566020Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Dec 16 16:54:47.923417 containerd[1604]: time="2025-12-16T16:54:47.923353164Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:47.927063 containerd[1604]: time="2025-12-16T16:54:47.927003475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:47.928704 containerd[1604]: time="2025-12-16T16:54:47.928509002Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.853928349s" Dec 16 16:54:47.928704 containerd[1604]: time="2025-12-16T16:54:47.928577495Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Dec 16 16:54:47.930015 containerd[1604]: time="2025-12-16T16:54:47.929957017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Dec 16 16:54:48.574722 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2718483221.mount: Deactivated successfully. 
Dec 16 16:54:48.579933 containerd[1604]: time="2025-12-16T16:54:48.579882697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:54:48.581006 containerd[1604]: time="2025-12-16T16:54:48.580819587Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Dec 16 16:54:48.582011 containerd[1604]: time="2025-12-16T16:54:48.581968631Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:54:48.584876 containerd[1604]: time="2025-12-16T16:54:48.584813541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 16 16:54:48.586236 containerd[1604]: time="2025-12-16T16:54:48.585760817Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 655.743637ms" Dec 16 16:54:48.586236 containerd[1604]: time="2025-12-16T16:54:48.585801000Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Dec 16 16:54:48.586669 containerd[1604]: time="2025-12-16T16:54:48.586617843Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Dec 16 16:54:49.303854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2565335779.mount: Deactivated successfully. 
Dec 16 16:54:52.039155 containerd[1604]: time="2025-12-16T16:54:52.039065262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:52.041743 containerd[1604]: time="2025-12-16T16:54:52.041665119Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Dec 16 16:54:52.042446 containerd[1604]: time="2025-12-16T16:54:52.042397195Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:52.046918 containerd[1604]: time="2025-12-16T16:54:52.046866993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:54:52.049826 containerd[1604]: time="2025-12-16T16:54:52.049703289Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.463038077s" Dec 16 16:54:52.049826 containerd[1604]: time="2025-12-16T16:54:52.049770966Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Dec 16 16:54:53.864935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 16 16:54:53.869264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:54.167354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:54.177828 (kubelet)[2315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 16 16:54:54.260411 kubelet[2315]: E1216 16:54:54.260324 2315 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 16 16:54:54.264759 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 16 16:54:54.265253 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 16 16:54:54.266303 systemd[1]: kubelet.service: Consumed 241ms CPU time, 111.1M memory peak. Dec 16 16:54:54.536675 update_engine[1576]: I20251216 16:54:54.536461 1576 update_attempter.cc:509] Updating boot flags... Dec 16 16:54:55.857611 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:55.858052 systemd[1]: kubelet.service: Consumed 241ms CPU time, 111.1M memory peak. Dec 16 16:54:55.861080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:55.898441 systemd[1]: Reload requested from client PID 2343 ('systemctl') (unit session-9.scope)... Dec 16 16:54:55.898657 systemd[1]: Reloading... Dec 16 16:54:56.103346 zram_generator::config[2388]: No configuration found. Dec 16 16:54:56.406324 systemd[1]: Reloading finished in 506 ms. Dec 16 16:54:56.484538 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
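By this point the unit has gone through four scheduled restarts; the counter in the "Scheduled restart job" messages is systemd's NRestarts property, driven by the Restart=/RestartSec= settings of kubelet.service. A quick sketch for reading those values (the property names are standard systemd ones; their values on this host are not shown in the log):

  # restart policy and how many times systemd has restarted the unit so far
  systemctl show kubelet.service -p Restart -p RestartSec -p NRestarts
  # the "Reload requested from client PID ... ('systemctl')" entries correspond
  # to a 'systemctl daemon-reload' issued from the interactive ssh session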
Dec 16 16:54:56.486482 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 16:54:56.487037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:56.487258 systemd[1]: kubelet.service: Consumed 150ms CPU time, 98.2M memory peak. Dec 16 16:54:56.490227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:54:56.660906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:54:56.680829 (kubelet)[2456]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 16:54:56.746186 kubelet[2456]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 16:54:56.746186 kubelet[2456]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 16:54:56.746186 kubelet[2456]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 16:54:56.746812 kubelet[2456]: I1216 16:54:56.746263 2456 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 16:54:57.425793 kubelet[2456]: I1216 16:54:57.425748 2456 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 16:54:57.426016 kubelet[2456]: I1216 16:54:57.425995 2456 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 16:54:57.426735 kubelet[2456]: I1216 16:54:57.426704 2456 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 16:54:57.470979 kubelet[2456]: E1216 16:54:57.470329 2456 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.10.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:57.473136 kubelet[2456]: I1216 16:54:57.472871 2456 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 16:54:57.483997 kubelet[2456]: I1216 16:54:57.483969 2456 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 16:54:57.493271 kubelet[2456]: I1216 16:54:57.493237 2456 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 16:54:57.495521 kubelet[2456]: I1216 16:54:57.495457 2456 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 16:54:57.495895 kubelet[2456]: I1216 16:54:57.495618 2456 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jrcza.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 16:54:57.498067 kubelet[2456]: I1216 16:54:57.498040 2456 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 16:54:57.498310 kubelet[2456]: I1216 16:54:57.498238 2456 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 16:54:57.499548 kubelet[2456]: I1216 16:54:57.499513 2456 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:54:57.505194 kubelet[2456]: I1216 16:54:57.504988 2456 kubelet.go:446] "Attempting to sync node with API server" Dec 16 16:54:57.505194 kubelet[2456]: I1216 16:54:57.505045 2456 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 16:54:57.506583 kubelet[2456]: I1216 16:54:57.506444 2456 kubelet.go:352] "Adding apiserver pod source" Dec 16 16:54:57.506583 kubelet[2456]: I1216 16:54:57.506485 2456 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 16:54:57.510180 kubelet[2456]: W1216 16:54:57.509499 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.10.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrcza.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:57.510180 kubelet[2456]: E1216 16:54:57.509583 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.10.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrcza.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 
16:54:57.511967 kubelet[2456]: I1216 16:54:57.511940 2456 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 16:54:57.515265 kubelet[2456]: I1216 16:54:57.515241 2456 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 16:54:57.516595 kubelet[2456]: W1216 16:54:57.516571 2456 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 16 16:54:57.517755 kubelet[2456]: I1216 16:54:57.517732 2456 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 16:54:57.517886 kubelet[2456]: I1216 16:54:57.517868 2456 server.go:1287] "Started kubelet" Dec 16 16:54:57.518205 kubelet[2456]: W1216 16:54:57.518137 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.10.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:57.518389 kubelet[2456]: E1216 16:54:57.518359 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.10.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:57.541432 kubelet[2456]: I1216 16:54:57.541347 2456 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 16:54:57.542078 kubelet[2456]: I1216 16:54:57.542049 2456 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 16:54:57.546009 kubelet[2456]: I1216 16:54:57.545954 2456 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Dec 16 16:54:57.549842 kubelet[2456]: I1216 16:54:57.547438 2456 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 16:54:57.564846 kubelet[2456]: I1216 16:54:57.547633 2456 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 16:54:57.564846 kubelet[2456]: I1216 16:54:57.564460 2456 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 16:54:57.564846 kubelet[2456]: E1216 16:54:57.564797 2456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jrcza.gb1.brightbox.com\" not found" Dec 16 16:54:57.565485 kubelet[2456]: I1216 16:54:57.565445 2456 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 16:54:57.565565 kubelet[2456]: I1216 16:54:57.565539 2456 reconciler.go:26] "Reconciler: start to sync state" Dec 16 16:54:57.566834 kubelet[2456]: I1216 16:54:57.566810 2456 server.go:479] "Adding debug handlers to kubelet server" Dec 16 16:54:57.597186 kubelet[2456]: W1216 16:54:57.596546 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.10.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:57.597186 kubelet[2456]: E1216 16:54:57.596641 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.230.10.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:57.597186 kubelet[2456]: E1216 16:54:57.596772 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrcza.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.122:6443: connect: connection refused" interval="200ms" Dec 16 16:54:57.615557 kubelet[2456]: E1216 16:54:57.615506 2456 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 16 16:54:57.616368 kubelet[2456]: I1216 16:54:57.616342 2456 factory.go:221] Registration of the containerd container factory successfully Dec 16 16:54:57.616498 kubelet[2456]: I1216 16:54:57.616480 2456 factory.go:221] Registration of the systemd container factory successfully Dec 16 16:54:57.616741 kubelet[2456]: I1216 16:54:57.616698 2456 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 16:54:57.618346 kubelet[2456]: E1216 16:54:57.597842 2456 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.10.122:6443/api/v1/namespaces/default/events\": dial tcp 10.230.10.122:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-jrcza.gb1.brightbox.com.1881c06ba7f5181a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-jrcza.gb1.brightbox.com,UID:srv-jrcza.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-jrcza.gb1.brightbox.com,},FirstTimestamp:2025-12-16 16:54:57.517836314 +0000 UTC m=+0.832374344,LastTimestamp:2025-12-16 16:54:57.517836314 +0000 UTC m=+0.832374344,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-jrcza.gb1.brightbox.com,}" Dec 16 16:54:57.643420 kubelet[2456]: I1216 16:54:57.643248 2456 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 16:54:57.643959 kubelet[2456]: I1216 16:54:57.643929 2456 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 16:54:57.643959 kubelet[2456]: I1216 16:54:57.643954 2456 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 16:54:57.644075 kubelet[2456]: I1216 16:54:57.643987 2456 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:54:57.646851 kubelet[2456]: I1216 16:54:57.646359 2456 policy_none.go:49] "None policy: Start" Dec 16 16:54:57.646851 kubelet[2456]: I1216 16:54:57.646394 2456 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 16:54:57.646851 kubelet[2456]: I1216 16:54:57.646424 2456 state_mem.go:35] "Initializing new in-memory state store" Dec 16 16:54:57.647112 kubelet[2456]: I1216 16:54:57.647079 2456 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 16 16:54:57.647313 kubelet[2456]: I1216 16:54:57.647293 2456 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 16:54:57.647548 kubelet[2456]: I1216 16:54:57.647411 2456 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Dec 16 16:54:57.647548 kubelet[2456]: I1216 16:54:57.647427 2456 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 16:54:57.647840 kubelet[2456]: E1216 16:54:57.647803 2456 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 16:54:57.650383 kubelet[2456]: W1216 16:54:57.650353 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.10.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:57.650561 kubelet[2456]: E1216 16:54:57.650521 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.10.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:57.659629 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 16 16:54:57.665394 kubelet[2456]: E1216 16:54:57.665330 2456 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jrcza.gb1.brightbox.com\" not found" Dec 16 16:54:57.671858 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 16 16:54:57.676832 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 16 16:54:57.689479 kubelet[2456]: I1216 16:54:57.689447 2456 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 16:54:57.690225 kubelet[2456]: I1216 16:54:57.689757 2456 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 16:54:57.690225 kubelet[2456]: I1216 16:54:57.689782 2456 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 16:54:57.690225 kubelet[2456]: I1216 16:54:57.690137 2456 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 16:54:57.694844 kubelet[2456]: E1216 16:54:57.694811 2456 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Dec 16 16:54:57.694933 kubelet[2456]: E1216 16:54:57.694885 2456 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-jrcza.gb1.brightbox.com\" not found" Dec 16 16:54:57.768639 systemd[1]: Created slice kubepods-burstable-pod848838c33d4331c4d56a059d5769b2f7.slice - libcontainer container kubepods-burstable-pod848838c33d4331c4d56a059d5769b2f7.slice. 
Dec 16 16:54:57.781420 kubelet[2456]: E1216 16:54:57.781366 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.786736 systemd[1]: Created slice kubepods-burstable-poddf4833444462f8a41e8f49ce49017d0f.slice - libcontainer container kubepods-burstable-poddf4833444462f8a41e8f49ce49017d0f.slice. Dec 16 16:54:57.790229 kubelet[2456]: E1216 16:54:57.790203 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.793127 systemd[1]: Created slice kubepods-burstable-podd4e33ef8e6c243158bcc21999e00ae12.slice - libcontainer container kubepods-burstable-podd4e33ef8e6c243158bcc21999e00ae12.slice. Dec 16 16:54:57.797120 kubelet[2456]: I1216 16:54:57.796687 2456 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.797741 kubelet[2456]: E1216 16:54:57.797697 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.10.122:6443/api/v1/nodes\": dial tcp 10.230.10.122:6443: connect: connection refused" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.798104 kubelet[2456]: E1216 16:54:57.798061 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrcza.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.122:6443: connect: connection refused" interval="400ms" Dec 16 16:54:57.798513 kubelet[2456]: E1216 16:54:57.798490 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.866853 kubelet[2456]: I1216 16:54:57.866691 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-flexvolume-dir\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.866853 kubelet[2456]: I1216 16:54:57.866766 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-k8s-certs\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.866853 kubelet[2456]: I1216 16:54:57.866800 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4e33ef8e6c243158bcc21999e00ae12-kubeconfig\") pod \"kube-scheduler-srv-jrcza.gb1.brightbox.com\" (UID: \"d4e33ef8e6c243158bcc21999e00ae12\") " pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.866853 kubelet[2456]: I1216 16:54:57.866831 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-ca-certs\") pod \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: 
\"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.867146 kubelet[2456]: I1216 16:54:57.866892 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-k8s-certs\") pod \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: \"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.867146 kubelet[2456]: I1216 16:54:57.866954 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-ca-certs\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.867146 kubelet[2456]: I1216 16:54:57.866993 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-kubeconfig\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.867146 kubelet[2456]: I1216 16:54:57.867048 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:57.867146 kubelet[2456]: I1216 16:54:57.867110 2456 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: \"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:54:58.001305 kubelet[2456]: I1216 16:54:58.001177 2456 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:58.001726 kubelet[2456]: E1216 16:54:58.001667 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.10.122:6443/api/v1/nodes\": dial tcp 10.230.10.122:6443: connect: connection refused" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:58.085228 containerd[1604]: time="2025-12-16T16:54:58.085145927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jrcza.gb1.brightbox.com,Uid:848838c33d4331c4d56a059d5769b2f7,Namespace:kube-system,Attempt:0,}" Dec 16 16:54:58.092730 containerd[1604]: time="2025-12-16T16:54:58.092501874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jrcza.gb1.brightbox.com,Uid:df4833444462f8a41e8f49ce49017d0f,Namespace:kube-system,Attempt:0,}" Dec 16 16:54:58.109400 containerd[1604]: time="2025-12-16T16:54:58.109360175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jrcza.gb1.brightbox.com,Uid:d4e33ef8e6c243158bcc21999e00ae12,Namespace:kube-system,Attempt:0,}" Dec 16 16:54:58.200211 kubelet[2456]: 
E1216 16:54:58.199486 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrcza.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.122:6443: connect: connection refused" interval="800ms" Dec 16 16:54:58.253877 containerd[1604]: time="2025-12-16T16:54:58.253715882Z" level=info msg="connecting to shim 31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035" address="unix:///run/containerd/s/e1c51314a341462a459840f71aaa4d8146173e293ed1c6a0255830731e37b53d" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:54:58.254289 containerd[1604]: time="2025-12-16T16:54:58.253723218Z" level=info msg="connecting to shim 2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af" address="unix:///run/containerd/s/72bfb492762cbd7b9bf453d3ea7a9fcb53ca37aeabd468195188626e211751f8" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:54:58.277040 containerd[1604]: time="2025-12-16T16:54:58.276653584Z" level=info msg="connecting to shim 3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06" address="unix:///run/containerd/s/07fa361006df1db933b9397123119d952185c0d0bf44b1b8fbdb8c2204e4c775" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:54:58.389541 systemd[1]: Started cri-containerd-2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af.scope - libcontainer container 2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af. Dec 16 16:54:58.393595 systemd[1]: Started cri-containerd-31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035.scope - libcontainer container 31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035. Dec 16 16:54:58.397434 systemd[1]: Started cri-containerd-3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06.scope - libcontainer container 3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06. 
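Every "dial tcp 10.230.10.122:6443: connect: connection refused" error above is the kubelet talking to an API server that is not running yet; it only starts answering once the static kube-apiserver pod whose sandbox is created here comes up. A small sketch for watching that from the host (-k skips TLS verification; unauthenticated access to /healthz is the usual default but may be restricted):

  # connection refused until the static kube-apiserver container is listening;
  # re-run until it returns a response (typically 'ok')
  curl -sk https://10.230.10.122:6443/healthz; echo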
Dec 16 16:54:58.408907 kubelet[2456]: I1216 16:54:58.407931 2456 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:58.408907 kubelet[2456]: E1216 16:54:58.408547 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.10.122:6443/api/v1/nodes\": dial tcp 10.230.10.122:6443: connect: connection refused" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:58.513258 containerd[1604]: time="2025-12-16T16:54:58.512087672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-jrcza.gb1.brightbox.com,Uid:df4833444462f8a41e8f49ce49017d0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af\"" Dec 16 16:54:58.523185 containerd[1604]: time="2025-12-16T16:54:58.523084006Z" level=info msg="CreateContainer within sandbox \"2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 16 16:54:58.561829 containerd[1604]: time="2025-12-16T16:54:58.561780620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-jrcza.gb1.brightbox.com,Uid:848838c33d4331c4d56a059d5769b2f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06\"" Dec 16 16:54:58.564418 containerd[1604]: time="2025-12-16T16:54:58.564380799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-jrcza.gb1.brightbox.com,Uid:d4e33ef8e6c243158bcc21999e00ae12,Namespace:kube-system,Attempt:0,} returns sandbox id \"31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035\"" Dec 16 16:54:58.567153 containerd[1604]: time="2025-12-16T16:54:58.566634745Z" level=info msg="CreateContainer within sandbox \"3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 16 16:54:58.568584 containerd[1604]: time="2025-12-16T16:54:58.568552338Z" level=info msg="CreateContainer within sandbox \"31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 16 16:54:58.570678 containerd[1604]: time="2025-12-16T16:54:58.570647638Z" level=info msg="Container 68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:54:58.583699 containerd[1604]: time="2025-12-16T16:54:58.583648562Z" level=info msg="CreateContainer within sandbox \"2b47ed39a683913590797ac4542b9cf3fd533d01911a23d6ba14ec879b6b45af\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3\"" Dec 16 16:54:58.584990 containerd[1604]: time="2025-12-16T16:54:58.584955680Z" level=info msg="StartContainer for \"68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3\"" Dec 16 16:54:58.588239 containerd[1604]: time="2025-12-16T16:54:58.587599694Z" level=info msg="Container cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:54:58.588239 containerd[1604]: time="2025-12-16T16:54:58.588095105Z" level=info msg="connecting to shim 68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3" address="unix:///run/containerd/s/72bfb492762cbd7b9bf453d3ea7a9fcb53ca37aeabd468195188626e211751f8" protocol=ttrpc version=3 Dec 16 
16:54:58.589545 containerd[1604]: time="2025-12-16T16:54:58.589516606Z" level=info msg="Container 167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:54:58.597630 containerd[1604]: time="2025-12-16T16:54:58.597596090Z" level=info msg="CreateContainer within sandbox \"31b14915691ad2eb0ba9211f762849781ef6a2f4802e618072e9867bc58b5035\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6\"" Dec 16 16:54:58.598953 containerd[1604]: time="2025-12-16T16:54:58.598923135Z" level=info msg="StartContainer for \"cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6\"" Dec 16 16:54:58.600105 containerd[1604]: time="2025-12-16T16:54:58.600069136Z" level=info msg="CreateContainer within sandbox \"3bc35f4db93ba80f1750f818a1370f2aa713b3ad131b81e46f7f586927daee06\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8\"" Dec 16 16:54:58.600660 containerd[1604]: time="2025-12-16T16:54:58.600620781Z" level=info msg="StartContainer for \"167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8\"" Dec 16 16:54:58.602425 containerd[1604]: time="2025-12-16T16:54:58.602378192Z" level=info msg="connecting to shim 167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8" address="unix:///run/containerd/s/07fa361006df1db933b9397123119d952185c0d0bf44b1b8fbdb8c2204e4c775" protocol=ttrpc version=3 Dec 16 16:54:58.604832 containerd[1604]: time="2025-12-16T16:54:58.604487243Z" level=info msg="connecting to shim cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6" address="unix:///run/containerd/s/e1c51314a341462a459840f71aaa4d8146173e293ed1c6a0255830731e37b53d" protocol=ttrpc version=3 Dec 16 16:54:58.635488 systemd[1]: Started cri-containerd-68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3.scope - libcontainer container 68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3. Dec 16 16:54:58.647387 systemd[1]: Started cri-containerd-167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8.scope - libcontainer container 167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8. Dec 16 16:54:58.658644 systemd[1]: Started cri-containerd-cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6.scope - libcontainer container cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6. 
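Once the StartContainer calls logged here return successfully, the three control-plane containers should be visible through the CRI. A sketch, under the same socket-path assumption as above:

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a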
Dec 16 16:54:58.764719 kubelet[2456]: W1216 16:54:58.762109 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.10.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:58.764719 kubelet[2456]: E1216 16:54:58.762429 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.10.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:58.785991 containerd[1604]: time="2025-12-16T16:54:58.785869677Z" level=info msg="StartContainer for \"cfe06ad8ccfc0a2916ccc72838205cd14bbda8bc2061eafe8f84c33d98485cd6\" returns successfully" Dec 16 16:54:58.823714 kubelet[2456]: W1216 16:54:58.823493 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.10.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrcza.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:58.825432 kubelet[2456]: E1216 16:54:58.824399 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.10.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-jrcza.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:58.832494 containerd[1604]: time="2025-12-16T16:54:58.832445201Z" level=info msg="StartContainer for \"68f639fe62c211765bb90f8dfcb464c9d95c7416b136dc89ef70a7748a3d99f3\" returns successfully" Dec 16 16:54:58.854601 containerd[1604]: time="2025-12-16T16:54:58.854487661Z" level=info msg="StartContainer for \"167d749cedbffb7e1ee7eb8ec70c64fb3f3b97d6ac11c789c664d0114cbd12e8\" returns successfully" Dec 16 16:54:59.001000 kubelet[2456]: E1216 16:54:59.000916 2456 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.10.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-jrcza.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.10.122:6443: connect: connection refused" interval="1.6s" Dec 16 16:54:59.118541 kubelet[2456]: W1216 16:54:59.118399 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.10.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:59.118541 kubelet[2456]: E1216 16:54:59.118496 2456 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.10.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:59.194147 kubelet[2456]: W1216 16:54:59.193739 2456 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.10.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.10.122:6443: connect: connection refused Dec 16 16:54:59.194147 kubelet[2456]: E1216 16:54:59.193829 2456 reflector.go:166] 
"Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.10.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.10.122:6443: connect: connection refused" logger="UnhandledError" Dec 16 16:54:59.214234 kubelet[2456]: I1216 16:54:59.213863 2456 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:59.214496 kubelet[2456]: E1216 16:54:59.214466 2456 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.10.122:6443/api/v1/nodes\": dial tcp 10.230.10.122:6443: connect: connection refused" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:59.695148 kubelet[2456]: E1216 16:54:59.694872 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:59.702884 kubelet[2456]: E1216 16:54:59.702861 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:54:59.706364 kubelet[2456]: E1216 16:54:59.706107 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:00.711369 kubelet[2456]: E1216 16:55:00.711293 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:00.711369 kubelet[2456]: E1216 16:55:00.711311 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:00.712368 kubelet[2456]: E1216 16:55:00.711909 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:00.819533 kubelet[2456]: I1216 16:55:00.818821 2456 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.713872 kubelet[2456]: E1216 16:55:01.713828 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.715021 kubelet[2456]: E1216 16:55:01.714994 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.715388 kubelet[2456]: E1216 16:55:01.715365 2456 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.839607 kubelet[2456]: E1216 16:55:01.839501 2456 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-jrcza.gb1.brightbox.com\" not found" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.957797 kubelet[2456]: I1216 16:55:01.957745 2456 kubelet_node_status.go:78] "Successfully registered node" 
node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:01.957797 kubelet[2456]: E1216 16:55:01.957796 2456 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-jrcza.gb1.brightbox.com\": node \"srv-jrcza.gb1.brightbox.com\" not found" Dec 16 16:55:01.966234 kubelet[2456]: I1216 16:55:01.965311 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.037429 kubelet[2456]: E1216 16:55:02.037376 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.037429 kubelet[2456]: I1216 16:55:02.037426 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.041049 kubelet[2456]: E1216 16:55:02.040997 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.041049 kubelet[2456]: I1216 16:55:02.041035 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.044299 kubelet[2456]: E1216 16:55:02.044248 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jrcza.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.525515 kubelet[2456]: I1216 16:55:02.525432 2456 apiserver.go:52] "Watching apiserver" Dec 16 16:55:02.566352 kubelet[2456]: I1216 16:55:02.566293 2456 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 16:55:02.713327 kubelet[2456]: I1216 16:55:02.712963 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:02.718010 kubelet[2456]: E1216 16:55:02.717959 2456 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jrcza.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:03.846198 systemd[1]: Reload requested from client PID 2730 ('systemctl') (unit session-9.scope)... Dec 16 16:55:03.846245 systemd[1]: Reloading... Dec 16 16:55:03.893343 kubelet[2456]: I1216 16:55:03.892675 2456 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:03.911378 kubelet[2456]: W1216 16:55:03.911313 2456 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:04.015673 zram_generator::config[2775]: No configuration found. Dec 16 16:55:04.428148 systemd[1]: Reloading finished in 581 ms. Dec 16 16:55:04.480416 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:55:04.495811 systemd[1]: kubelet.service: Deactivated successfully. Dec 16 16:55:04.496479 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 16 16:55:04.496573 systemd[1]: kubelet.service: Consumed 1.289s CPU time, 130.2M memory peak. Dec 16 16:55:04.500443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 16 16:55:04.786677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 16 16:55:04.801692 (kubelet)[2839]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 16 16:55:04.900085 kubelet[2839]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 16:55:04.900085 kubelet[2839]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Dec 16 16:55:04.902643 kubelet[2839]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 16 16:55:04.902643 kubelet[2839]: I1216 16:55:04.900477 2839 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 16 16:55:04.912865 kubelet[2839]: I1216 16:55:04.912830 2839 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Dec 16 16:55:04.913009 kubelet[2839]: I1216 16:55:04.912991 2839 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 16 16:55:04.913588 kubelet[2839]: I1216 16:55:04.913565 2839 server.go:954] "Client rotation is on, will bootstrap in background" Dec 16 16:55:04.916062 kubelet[2839]: I1216 16:55:04.916037 2839 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 16 16:55:04.927064 kubelet[2839]: I1216 16:55:04.927025 2839 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 16 16:55:04.937800 kubelet[2839]: I1216 16:55:04.937760 2839 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 16 16:55:04.945775 kubelet[2839]: I1216 16:55:04.945743 2839 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 16 16:55:04.946439 kubelet[2839]: I1216 16:55:04.946363 2839 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 16 16:55:04.946922 kubelet[2839]: I1216 16:55:04.946618 2839 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-jrcza.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 16 16:55:04.947250 kubelet[2839]: I1216 16:55:04.947228 2839 topology_manager.go:138] "Creating topology manager with none policy" Dec 16 16:55:04.947356 kubelet[2839]: I1216 16:55:04.947340 2839 container_manager_linux.go:304] "Creating device plugin manager" Dec 16 16:55:04.947534 kubelet[2839]: I1216 16:55:04.947504 2839 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:55:04.947931 kubelet[2839]: I1216 16:55:04.947913 2839 kubelet.go:446] "Attempting to sync node with API server" Dec 16 16:55:04.948235 kubelet[2839]: I1216 16:55:04.948215 2839 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 16 16:55:04.948419 kubelet[2839]: I1216 16:55:04.948398 2839 kubelet.go:352] "Adding apiserver pod source" Dec 16 16:55:04.952339 kubelet[2839]: I1216 16:55:04.952230 2839 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 16 16:55:04.959556 kubelet[2839]: I1216 16:55:04.959521 2839 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Dec 16 16:55:04.960382 kubelet[2839]: I1216 16:55:04.960359 2839 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 16 16:55:04.961216 kubelet[2839]: I1216 16:55:04.961196 2839 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 16 16:55:04.961339 kubelet[2839]: I1216 16:55:04.961322 2839 server.go:1287] "Started kubelet" Dec 16 16:55:04.973022 kubelet[2839]: I1216 16:55:04.972988 2839 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 16 16:55:04.989873 kubelet[2839]: I1216 16:55:04.989215 2839 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Dec 16 16:55:04.991842 kubelet[2839]: I1216 16:55:04.991353 2839 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 16 16:55:04.992540 kubelet[2839]: I1216 16:55:04.992335 2839 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 16 16:55:04.996537 kubelet[2839]: I1216 16:55:04.996205 2839 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 16 16:55:04.996604 kubelet[2839]: E1216 16:55:04.996497 2839 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-jrcza.gb1.brightbox.com\" not found" Dec 16 16:55:04.998443 kubelet[2839]: I1216 16:55:04.998109 2839 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 16 16:55:04.998813 kubelet[2839]: I1216 16:55:04.998613 2839 reconciler.go:26] "Reconciler: start to sync state" Dec 16 16:55:05.000227 kubelet[2839]: I1216 16:55:04.999697 2839 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 16 16:55:05.015662 kubelet[2839]: I1216 16:55:05.014797 2839 server.go:479] "Adding debug handlers to kubelet server" Dec 16 16:55:05.024238 kubelet[2839]: I1216 16:55:05.023848 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 16 16:55:05.036156 kubelet[2839]: I1216 16:55:05.036110 2839 factory.go:221] Registration of the containerd container factory successfully Dec 16 16:55:05.036156 kubelet[2839]: I1216 16:55:05.036141 2839 factory.go:221] Registration of the systemd container factory successfully Dec 16 16:55:05.036449 kubelet[2839]: I1216 16:55:05.036280 2839 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 16 16:55:05.041821 kubelet[2839]: I1216 16:55:05.041672 2839 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 16 16:55:05.042018 kubelet[2839]: I1216 16:55:05.041996 2839 status_manager.go:227] "Starting to sync pod status with apiserver" Dec 16 16:55:05.043259 kubelet[2839]: I1216 16:55:05.043151 2839 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 16 16:55:05.043623 kubelet[2839]: I1216 16:55:05.043601 2839 kubelet.go:2382] "Starting kubelet main sync loop" Dec 16 16:55:05.043842 kubelet[2839]: E1216 16:55:05.043805 2839 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 16 16:55:05.112851 kubelet[2839]: I1216 16:55:05.112811 2839 cpu_manager.go:221] "Starting CPU manager" policy="none" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113274 2839 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113313 2839 state_mem.go:36] "Initialized new in-memory state store" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113595 2839 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113615 2839 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113648 2839 policy_none.go:49] "None policy: Start" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113663 2839 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113683 2839 state_mem.go:35] "Initializing new in-memory state store" Dec 16 16:55:05.113945 kubelet[2839]: I1216 16:55:05.113847 2839 state_mem.go:75] "Updated machine memory state" Dec 16 16:55:05.126369 kubelet[2839]: I1216 16:55:05.126332 2839 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 16 16:55:05.127221 kubelet[2839]: I1216 16:55:05.127193 2839 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 16 16:55:05.127438 kubelet[2839]: I1216 16:55:05.127353 2839 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 16 16:55:05.128106 kubelet[2839]: I1216 16:55:05.128086 2839 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 16 16:55:05.130949 kubelet[2839]: E1216 16:55:05.130922 2839 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Dec 16 16:55:05.154175 kubelet[2839]: I1216 16:55:05.154066 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.159298 kubelet[2839]: I1216 16:55:05.155993 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.163029 kubelet[2839]: I1216 16:55:05.157463 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.173820 kubelet[2839]: W1216 16:55:05.173672 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:05.187447 kubelet[2839]: W1216 16:55:05.187094 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:05.187447 kubelet[2839]: E1216 16:55:05.187209 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.189602 kubelet[2839]: W1216 16:55:05.189065 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:05.203146 kubelet[2839]: I1216 16:55:05.203091 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-ca-certs\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204067 kubelet[2839]: I1216 16:55:05.203588 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-kubeconfig\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204067 kubelet[2839]: I1216 16:55:05.203633 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204067 kubelet[2839]: I1216 16:55:05.203675 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4e33ef8e6c243158bcc21999e00ae12-kubeconfig\") pod \"kube-scheduler-srv-jrcza.gb1.brightbox.com\" (UID: \"d4e33ef8e6c243158bcc21999e00ae12\") " pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204067 kubelet[2839]: I1216 16:55:05.203708 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-k8s-certs\") pod 
\"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: \"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204067 kubelet[2839]: I1216 16:55:05.203739 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-usr-share-ca-certificates\") pod \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: \"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204999 kubelet[2839]: I1216 16:55:05.203766 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-flexvolume-dir\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204999 kubelet[2839]: I1216 16:55:05.203793 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df4833444462f8a41e8f49ce49017d0f-k8s-certs\") pod \"kube-controller-manager-srv-jrcza.gb1.brightbox.com\" (UID: \"df4833444462f8a41e8f49ce49017d0f\") " pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.204999 kubelet[2839]: I1216 16:55:05.203821 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/848838c33d4331c4d56a059d5769b2f7-ca-certs\") pod \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" (UID: \"848838c33d4331c4d56a059d5769b2f7\") " pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.268200 kubelet[2839]: I1216 16:55:05.267562 2839 kubelet_node_status.go:75] "Attempting to register node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.282270 kubelet[2839]: I1216 16:55:05.282217 2839 kubelet_node_status.go:124] "Node was previously registered" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.282444 kubelet[2839]: I1216 16:55:05.282347 2839 kubelet_node_status.go:78] "Successfully registered node" node="srv-jrcza.gb1.brightbox.com" Dec 16 16:55:05.955189 kubelet[2839]: I1216 16:55:05.953548 2839 apiserver.go:52] "Watching apiserver" Dec 16 16:55:05.999322 kubelet[2839]: I1216 16:55:05.999243 2839 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 16 16:55:06.083874 kubelet[2839]: I1216 16:55:06.083823 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:06.084999 kubelet[2839]: I1216 16:55:06.084963 2839 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:06.092921 kubelet[2839]: W1216 16:55:06.092822 2839 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:06.093079 kubelet[2839]: E1216 16:55:06.092976 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-jrcza.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:06.095671 kubelet[2839]: W1216 16:55:06.095394 2839 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 16 16:55:06.095671 kubelet[2839]: E1216 16:55:06.095444 2839 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-jrcza.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" Dec 16 16:55:06.144202 kubelet[2839]: I1216 16:55:06.143944 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-jrcza.gb1.brightbox.com" podStartSLOduration=1.143907623 podStartE2EDuration="1.143907623s" podCreationTimestamp="2025-12-16 16:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:06.132093119 +0000 UTC m=+1.320136915" watchObservedRunningTime="2025-12-16 16:55:06.143907623 +0000 UTC m=+1.331951421" Dec 16 16:55:06.175456 kubelet[2839]: I1216 16:55:06.175376 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-jrcza.gb1.brightbox.com" podStartSLOduration=1.175350491 podStartE2EDuration="1.175350491s" podCreationTimestamp="2025-12-16 16:55:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:06.144817987 +0000 UTC m=+1.332861777" watchObservedRunningTime="2025-12-16 16:55:06.175350491 +0000 UTC m=+1.363394289" Dec 16 16:55:06.288127 sudo[1837]: pam_unix(sudo:session): session closed for user root Dec 16 16:55:06.437498 sshd[1836]: Connection closed by 139.178.68.195 port 48080 Dec 16 16:55:06.439069 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Dec 16 16:55:06.445259 systemd[1]: sshd@6-10.230.10.122:22-139.178.68.195:48080.service: Deactivated successfully. Dec 16 16:55:06.448134 systemd[1]: session-9.scope: Deactivated successfully. Dec 16 16:55:06.448667 systemd[1]: session-9.scope: Consumed 4.961s CPU time, 157.8M memory peak. Dec 16 16:55:06.451020 systemd-logind[1574]: Session 9 logged out. Waiting for processes to exit. Dec 16 16:55:06.453324 systemd-logind[1574]: Removed session 9. Dec 16 16:55:08.566997 kubelet[2839]: I1216 16:55:08.566915 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-jrcza.gb1.brightbox.com" podStartSLOduration=5.566860409 podStartE2EDuration="5.566860409s" podCreationTimestamp="2025-12-16 16:55:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:06.17633052 +0000 UTC m=+1.364374327" watchObservedRunningTime="2025-12-16 16:55:08.566860409 +0000 UTC m=+3.754904192" Dec 16 16:55:08.881188 kubelet[2839]: I1216 16:55:08.881070 2839 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 16 16:55:08.882340 containerd[1604]: time="2025-12-16T16:55:08.882291983Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 16 16:55:08.883974 kubelet[2839]: I1216 16:55:08.882763 2839 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 16 16:55:09.737709 systemd[1]: Created slice kubepods-burstable-pod5653fb98_8f6f_470b_a61f_5dd1aab8fe7d.slice - libcontainer container kubepods-burstable-pod5653fb98_8f6f_470b_a61f_5dd1aab8fe7d.slice. Dec 16 16:55:09.744103 kubelet[2839]: I1216 16:55:09.741152 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-run\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.744103 kubelet[2839]: I1216 16:55:09.741844 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-flannel-cfg\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.744103 kubelet[2839]: I1216 16:55:09.742006 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74lmm\" (UniqueName: \"kubernetes.io/projected/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-kube-api-access-74lmm\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.744103 kubelet[2839]: I1216 16:55:09.742718 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-cni\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.744103 kubelet[2839]: I1216 16:55:09.742821 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-cni-plugin\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.745777 kubelet[2839]: I1216 16:55:09.742895 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5653fb98-8f6f-470b-a61f-5dd1aab8fe7d-xtables-lock\") pod \"kube-flannel-ds-v4jh8\" (UID: \"5653fb98-8f6f-470b-a61f-5dd1aab8fe7d\") " pod="kube-flannel/kube-flannel-ds-v4jh8" Dec 16 16:55:09.755352 systemd[1]: Created slice kubepods-besteffort-pod0f293304_f558_450f_bf63_cb38e63ed5b1.slice - libcontainer container kubepods-besteffort-pod0f293304_f558_450f_bf63_cb38e63ed5b1.slice. 
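The kube-flannel-ds pod created above mounts a flannel-cfg ConfigMap alongside /run, the CNI directories, and xtables-lock. That ConfigMap conventionally carries a net-conf.json; its exact contents are not reproduced in this log, but a sketch consistent with the 192.168.0.0/24 pod CIDR pushed to the runtime above, and with the flannel.1 (VXLAN) interface and 192.168.0.0/17 route that appear later in the log, would be roughly:

    {
      "Network": "192.168.0.0/17",
      "Backend": {
        "Type": "vxlan"
      }
    }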
Dec 16 16:55:09.844365 kubelet[2839]: I1216 16:55:09.843433 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptcl9\" (UniqueName: \"kubernetes.io/projected/0f293304-f558-450f-bf63-cb38e63ed5b1-kube-api-access-ptcl9\") pod \"kube-proxy-kkrj8\" (UID: \"0f293304-f558-450f-bf63-cb38e63ed5b1\") " pod="kube-system/kube-proxy-kkrj8" Dec 16 16:55:09.844365 kubelet[2839]: I1216 16:55:09.843526 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0f293304-f558-450f-bf63-cb38e63ed5b1-kube-proxy\") pod \"kube-proxy-kkrj8\" (UID: \"0f293304-f558-450f-bf63-cb38e63ed5b1\") " pod="kube-system/kube-proxy-kkrj8" Dec 16 16:55:09.844365 kubelet[2839]: I1216 16:55:09.843612 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f293304-f558-450f-bf63-cb38e63ed5b1-lib-modules\") pod \"kube-proxy-kkrj8\" (UID: \"0f293304-f558-450f-bf63-cb38e63ed5b1\") " pod="kube-system/kube-proxy-kkrj8" Dec 16 16:55:09.844365 kubelet[2839]: I1216 16:55:09.843689 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f293304-f558-450f-bf63-cb38e63ed5b1-xtables-lock\") pod \"kube-proxy-kkrj8\" (UID: \"0f293304-f558-450f-bf63-cb38e63ed5b1\") " pod="kube-system/kube-proxy-kkrj8" Dec 16 16:55:10.049889 containerd[1604]: time="2025-12-16T16:55:10.049264719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-v4jh8,Uid:5653fb98-8f6f-470b-a61f-5dd1aab8fe7d,Namespace:kube-flannel,Attempt:0,}" Dec 16 16:55:10.066875 containerd[1604]: time="2025-12-16T16:55:10.066617274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkrj8,Uid:0f293304-f558-450f-bf63-cb38e63ed5b1,Namespace:kube-system,Attempt:0,}" Dec 16 16:55:10.093399 containerd[1604]: time="2025-12-16T16:55:10.093350586Z" level=info msg="connecting to shim 9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569" address="unix:///run/containerd/s/c8e69ac5bb83ec73b8bd35398f2b4bc1461d111759fc84a85fb99439eab24171" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:55:10.103453 containerd[1604]: time="2025-12-16T16:55:10.103398028Z" level=info msg="connecting to shim 4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58" address="unix:///run/containerd/s/551a20ca18ea43a53fb885367ab7ed796255f32592a307f362d6c07fc5d98de3" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:55:10.152470 systemd[1]: Started cri-containerd-9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569.scope - libcontainer container 9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569. Dec 16 16:55:10.163094 systemd[1]: Started cri-containerd-4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58.scope - libcontainer container 4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58. 
Dec 16 16:55:10.239590 containerd[1604]: time="2025-12-16T16:55:10.239512931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kkrj8,Uid:0f293304-f558-450f-bf63-cb38e63ed5b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58\"" Dec 16 16:55:10.248436 containerd[1604]: time="2025-12-16T16:55:10.248388609Z" level=info msg="CreateContainer within sandbox \"4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 16 16:55:10.259105 containerd[1604]: time="2025-12-16T16:55:10.258998178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-v4jh8,Uid:5653fb98-8f6f-470b-a61f-5dd1aab8fe7d,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\"" Dec 16 16:55:10.262005 containerd[1604]: time="2025-12-16T16:55:10.261956586Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Dec 16 16:55:10.273395 containerd[1604]: time="2025-12-16T16:55:10.273337047Z" level=info msg="Container 4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:55:10.282888 containerd[1604]: time="2025-12-16T16:55:10.282741888Z" level=info msg="CreateContainer within sandbox \"4db0a6f6d97334bb58c512081f3028a0bc2c9610858161627f837269cc7e9b58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04\"" Dec 16 16:55:10.284179 containerd[1604]: time="2025-12-16T16:55:10.283805765Z" level=info msg="StartContainer for \"4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04\"" Dec 16 16:55:10.287014 containerd[1604]: time="2025-12-16T16:55:10.286223725Z" level=info msg="connecting to shim 4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04" address="unix:///run/containerd/s/551a20ca18ea43a53fb885367ab7ed796255f32592a307f362d6c07fc5d98de3" protocol=ttrpc version=3 Dec 16 16:55:10.316429 systemd[1]: Started cri-containerd-4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04.scope - libcontainer container 4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04. Dec 16 16:55:10.406735 containerd[1604]: time="2025-12-16T16:55:10.406686590Z" level=info msg="StartContainer for \"4026caccffc1f05b014d665bf481cf9f91f301291dcb68145597cadc1fe4fe04\" returns successfully" Dec 16 16:55:12.756503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263222038.mount: Deactivated successfully. 
Dec 16 16:55:12.832774 containerd[1604]: time="2025-12-16T16:55:12.832667454Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:12.834260 containerd[1604]: time="2025-12-16T16:55:12.834222070Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852936" Dec 16 16:55:12.835896 containerd[1604]: time="2025-12-16T16:55:12.835849142Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:12.839389 containerd[1604]: time="2025-12-16T16:55:12.839305387Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:12.840357 containerd[1604]: time="2025-12-16T16:55:12.840309165Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.578008012s" Dec 16 16:55:12.840796 containerd[1604]: time="2025-12-16T16:55:12.840355090Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Dec 16 16:55:12.845419 containerd[1604]: time="2025-12-16T16:55:12.845077220Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Dec 16 16:55:12.857191 containerd[1604]: time="2025-12-16T16:55:12.855089849Z" level=info msg="Container cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:55:12.864423 containerd[1604]: time="2025-12-16T16:55:12.864362402Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3\"" Dec 16 16:55:12.866768 containerd[1604]: time="2025-12-16T16:55:12.866466267Z" level=info msg="StartContainer for \"cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3\"" Dec 16 16:55:12.868096 containerd[1604]: time="2025-12-16T16:55:12.867880480Z" level=info msg="connecting to shim cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3" address="unix:///run/containerd/s/c8e69ac5bb83ec73b8bd35398f2b4bc1461d111759fc84a85fb99439eab24171" protocol=ttrpc version=3 Dec 16 16:55:12.909515 systemd[1]: Started cri-containerd-cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3.scope - libcontainer container cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3. Dec 16 16:55:12.979963 systemd[1]: cri-containerd-cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3.scope: Deactivated successfully. 
Dec 16 16:55:12.987100 containerd[1604]: time="2025-12-16T16:55:12.987021193Z" level=info msg="received container exit event container_id:\"cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3\" id:\"cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3\" pid:3171 exited_at:{seconds:1765904112 nanos:985128750}" Dec 16 16:55:12.987465 kubelet[2839]: I1216 16:55:12.984149 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kkrj8" podStartSLOduration=3.984124765 podStartE2EDuration="3.984124765s" podCreationTimestamp="2025-12-16 16:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:11.114965984 +0000 UTC m=+6.303009794" watchObservedRunningTime="2025-12-16 16:55:12.984124765 +0000 UTC m=+8.172168563" Dec 16 16:55:12.991195 containerd[1604]: time="2025-12-16T16:55:12.990856983Z" level=info msg="StartContainer for \"cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3\" returns successfully" Dec 16 16:55:13.035744 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfb052dbfabbadf809ba25d3bff366a1cedf9f0c9f7f792dc844c77612bcebc3-rootfs.mount: Deactivated successfully. Dec 16 16:55:13.116485 containerd[1604]: time="2025-12-16T16:55:13.115592963Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Dec 16 16:55:15.497863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount682486292.mount: Deactivated successfully. Dec 16 16:55:18.117473 containerd[1604]: time="2025-12-16T16:55:18.117392176Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:18.118920 containerd[1604]: time="2025-12-16T16:55:18.118678822Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Dec 16 16:55:18.119706 containerd[1604]: time="2025-12-16T16:55:18.119657605Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:18.123006 containerd[1604]: time="2025-12-16T16:55:18.122968400Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 16 16:55:18.126210 containerd[1604]: time="2025-12-16T16:55:18.125853787Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.009373339s" Dec 16 16:55:18.126210 containerd[1604]: time="2025-12-16T16:55:18.125911957Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Dec 16 16:55:18.129280 containerd[1604]: time="2025-12-16T16:55:18.129240824Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 16 16:55:18.142530 containerd[1604]: time="2025-12-16T16:55:18.140206948Z" level=info msg="Container 
fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:55:18.145608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3653757268.mount: Deactivated successfully. Dec 16 16:55:18.156298 containerd[1604]: time="2025-12-16T16:55:18.156238837Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee\"" Dec 16 16:55:18.157108 containerd[1604]: time="2025-12-16T16:55:18.157060604Z" level=info msg="StartContainer for \"fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee\"" Dec 16 16:55:18.160902 containerd[1604]: time="2025-12-16T16:55:18.159869749Z" level=info msg="connecting to shim fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee" address="unix:///run/containerd/s/c8e69ac5bb83ec73b8bd35398f2b4bc1461d111759fc84a85fb99439eab24171" protocol=ttrpc version=3 Dec 16 16:55:18.195449 systemd[1]: Started cri-containerd-fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee.scope - libcontainer container fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee. Dec 16 16:55:18.260859 systemd[1]: cri-containerd-fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee.scope: Deactivated successfully. Dec 16 16:55:18.263426 containerd[1604]: time="2025-12-16T16:55:18.263257512Z" level=info msg="received container exit event container_id:\"fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee\" id:\"fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee\" pid:3243 exited_at:{seconds:1765904118 nanos:262415744}" Dec 16 16:55:18.265967 containerd[1604]: time="2025-12-16T16:55:18.265937060Z" level=info msg="StartContainer for \"fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee\" returns successfully" Dec 16 16:55:18.298852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc8ad46399512e62a5baa226d4356ede1a304d19b08a66d9f7de6724447d82ee-rootfs.mount: Deactivated successfully. Dec 16 16:55:18.366205 kubelet[2839]: I1216 16:55:18.365638 2839 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Dec 16 16:55:18.402803 kubelet[2839]: I1216 16:55:18.402224 2839 status_manager.go:890] "Failed to get status for pod" podUID="47e2d8f9-02bc-4abf-befc-30879fceddd2" pod="kube-system/coredns-668d6bf9bc-k8qsm" err="pods \"coredns-668d6bf9bc-k8qsm\" is forbidden: User \"system:node:srv-jrcza.gb1.brightbox.com\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'srv-jrcza.gb1.brightbox.com' and this object" Dec 16 16:55:18.418682 systemd[1]: Created slice kubepods-burstable-pod47e2d8f9_02bc_4abf_befc_30879fceddd2.slice - libcontainer container kubepods-burstable-pod47e2d8f9_02bc_4abf_befc_30879fceddd2.slice. Dec 16 16:55:18.432507 systemd[1]: Created slice kubepods-burstable-podecc8b387_66dd_4be7_b859_c61b99e0e459.slice - libcontainer container kubepods-burstable-podecc8b387_66dd_4be7_b859_c61b99e0e459.slice. 
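The install-cni-plugin and install-cni steps above copy the flannel CNI binary into /opt/cni/bin and drop a CNI config into /etc/cni/net.d, which is what containerd was waiting for earlier ("No cni config template is specified, wait for other system components to drop the config."). The dropped file itself is not shown in this log; a typical 10-flannel.conflist, consistent with the cbr0 / cniVersion 0.3.1 / hairpinMode / isDefaultGateway values that show up in the delegate config logged further down, would look roughly like:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }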
Dec 16 16:55:18.507367 kubelet[2839]: I1216 16:55:18.507299 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ecc8b387-66dd-4be7-b859-c61b99e0e459-config-volume\") pod \"coredns-668d6bf9bc-67bld\" (UID: \"ecc8b387-66dd-4be7-b859-c61b99e0e459\") " pod="kube-system/coredns-668d6bf9bc-67bld" Dec 16 16:55:18.507648 kubelet[2839]: I1216 16:55:18.507622 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47e2d8f9-02bc-4abf-befc-30879fceddd2-config-volume\") pod \"coredns-668d6bf9bc-k8qsm\" (UID: \"47e2d8f9-02bc-4abf-befc-30879fceddd2\") " pod="kube-system/coredns-668d6bf9bc-k8qsm" Dec 16 16:55:18.507821 kubelet[2839]: I1216 16:55:18.507792 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vf4b4\" (UniqueName: \"kubernetes.io/projected/47e2d8f9-02bc-4abf-befc-30879fceddd2-kube-api-access-vf4b4\") pod \"coredns-668d6bf9bc-k8qsm\" (UID: \"47e2d8f9-02bc-4abf-befc-30879fceddd2\") " pod="kube-system/coredns-668d6bf9bc-k8qsm" Dec 16 16:55:18.507959 kubelet[2839]: I1216 16:55:18.507935 2839 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vbx\" (UniqueName: \"kubernetes.io/projected/ecc8b387-66dd-4be7-b859-c61b99e0e459-kube-api-access-r8vbx\") pod \"coredns-668d6bf9bc-67bld\" (UID: \"ecc8b387-66dd-4be7-b859-c61b99e0e459\") " pod="kube-system/coredns-668d6bf9bc-67bld" Dec 16 16:55:18.726303 containerd[1604]: time="2025-12-16T16:55:18.726131328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qsm,Uid:47e2d8f9-02bc-4abf-befc-30879fceddd2,Namespace:kube-system,Attempt:0,}" Dec 16 16:55:18.759251 containerd[1604]: time="2025-12-16T16:55:18.759133362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qsm,Uid:47e2d8f9-02bc-4abf-befc-30879fceddd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7745410e1d832d2a3e324a9b8edfb1af8b0338a72fb62c7f502f11ce6f9c41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 16:55:18.761100 kubelet[2839]: E1216 16:55:18.760438 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7745410e1d832d2a3e324a9b8edfb1af8b0338a72fb62c7f502f11ce6f9c41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 16:55:18.761100 kubelet[2839]: E1216 16:55:18.760577 2839 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7745410e1d832d2a3e324a9b8edfb1af8b0338a72fb62c7f502f11ce6f9c41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-k8qsm" Dec 16 16:55:18.761100 kubelet[2839]: E1216 16:55:18.760617 2839 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee7745410e1d832d2a3e324a9b8edfb1af8b0338a72fb62c7f502f11ce6f9c41\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-k8qsm" Dec 16 16:55:18.761100 kubelet[2839]: E1216 16:55:18.760711 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-k8qsm_kube-system(47e2d8f9-02bc-4abf-befc-30879fceddd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-k8qsm_kube-system(47e2d8f9-02bc-4abf-befc-30879fceddd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee7745410e1d832d2a3e324a9b8edfb1af8b0338a72fb62c7f502f11ce6f9c41\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-k8qsm" podUID="47e2d8f9-02bc-4abf-befc-30879fceddd2" Dec 16 16:55:18.780425 containerd[1604]: time="2025-12-16T16:55:18.780362606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67bld,Uid:ecc8b387-66dd-4be7-b859-c61b99e0e459,Namespace:kube-system,Attempt:0,}" Dec 16 16:55:18.804505 containerd[1604]: time="2025-12-16T16:55:18.804443207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67bld,Uid:ecc8b387-66dd-4be7-b859-c61b99e0e459,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7a58b9be7e44fe0c3fb935ef26cfb778dd2fa3e7321b89dafa83064416735c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 16:55:18.805430 kubelet[2839]: E1216 16:55:18.804718 2839 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7a58b9be7e44fe0c3fb935ef26cfb778dd2fa3e7321b89dafa83064416735c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Dec 16 16:55:18.805430 kubelet[2839]: E1216 16:55:18.804791 2839 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7a58b9be7e44fe0c3fb935ef26cfb778dd2fa3e7321b89dafa83064416735c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-67bld" Dec 16 16:55:18.805430 kubelet[2839]: E1216 16:55:18.804824 2839 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7a58b9be7e44fe0c3fb935ef26cfb778dd2fa3e7321b89dafa83064416735c2\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-67bld" Dec 16 16:55:18.805430 kubelet[2839]: E1216 16:55:18.804877 2839 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-67bld_kube-system(ecc8b387-66dd-4be7-b859-c61b99e0e459)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-67bld_kube-system(ecc8b387-66dd-4be7-b859-c61b99e0e459)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7a58b9be7e44fe0c3fb935ef26cfb778dd2fa3e7321b89dafa83064416735c2\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" 
pod="kube-system/coredns-668d6bf9bc-67bld" podUID="ecc8b387-66dd-4be7-b859-c61b99e0e459" Dec 16 16:55:19.135824 containerd[1604]: time="2025-12-16T16:55:19.135715563Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Dec 16 16:55:19.161533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2877899009.mount: Deactivated successfully. Dec 16 16:55:19.163277 containerd[1604]: time="2025-12-16T16:55:19.163101478Z" level=info msg="Container e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:55:19.172950 containerd[1604]: time="2025-12-16T16:55:19.172859139Z" level=info msg="CreateContainer within sandbox \"9a93a0c272b95c43630715ea4d91cd9d7996a07f6a08252ac2ff35a542064569\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31\"" Dec 16 16:55:19.173824 containerd[1604]: time="2025-12-16T16:55:19.173791660Z" level=info msg="StartContainer for \"e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31\"" Dec 16 16:55:19.175517 containerd[1604]: time="2025-12-16T16:55:19.175482377Z" level=info msg="connecting to shim e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31" address="unix:///run/containerd/s/c8e69ac5bb83ec73b8bd35398f2b4bc1461d111759fc84a85fb99439eab24171" protocol=ttrpc version=3 Dec 16 16:55:19.212431 systemd[1]: Started cri-containerd-e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31.scope - libcontainer container e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31. Dec 16 16:55:19.263989 containerd[1604]: time="2025-12-16T16:55:19.263936390Z" level=info msg="StartContainer for \"e9a2fe03f3b68a0bf9ab336d61e6309bb15040bba7d57c92f3f8f57fc368de31\" returns successfully" Dec 16 16:55:20.172242 kubelet[2839]: I1216 16:55:20.172118 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-v4jh8" podStartSLOduration=3.305283647 podStartE2EDuration="11.172087215s" podCreationTimestamp="2025-12-16 16:55:09 +0000 UTC" firstStartedPulling="2025-12-16 16:55:10.260494749 +0000 UTC m=+5.448538538" lastFinishedPulling="2025-12-16 16:55:18.127298324 +0000 UTC m=+13.315342106" observedRunningTime="2025-12-16 16:55:20.17109505 +0000 UTC m=+15.359138855" watchObservedRunningTime="2025-12-16 16:55:20.172087215 +0000 UTC m=+15.360131034" Dec 16 16:55:20.352207 systemd-networkd[1482]: flannel.1: Link UP Dec 16 16:55:20.352223 systemd-networkd[1482]: flannel.1: Gained carrier Dec 16 16:55:22.074461 systemd-networkd[1482]: flannel.1: Gained IPv6LL Dec 16 16:55:30.046232 containerd[1604]: time="2025-12-16T16:55:30.046106994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qsm,Uid:47e2d8f9-02bc-4abf-befc-30879fceddd2,Namespace:kube-system,Attempt:0,}" Dec 16 16:55:30.086232 systemd-networkd[1482]: cni0: Link UP Dec 16 16:55:30.086250 systemd-networkd[1482]: cni0: Gained carrier Dec 16 16:55:30.096371 systemd-networkd[1482]: cni0: Lost carrier Dec 16 16:55:30.104532 systemd-networkd[1482]: veth9f4b4f38: Link UP Dec 16 16:55:30.105835 kernel: cni0: port 1(veth9f4b4f38) entered blocking state Dec 16 16:55:30.105949 kernel: cni0: port 1(veth9f4b4f38) entered disabled state Dec 16 16:55:30.109265 kernel: veth9f4b4f38: entered allmulticast mode Dec 16 16:55:30.109553 kernel: veth9f4b4f38: entered 
promiscuous mode Dec 16 16:55:30.122841 kernel: cni0: port 1(veth9f4b4f38) entered blocking state Dec 16 16:55:30.123011 kernel: cni0: port 1(veth9f4b4f38) entered forwarding state Dec 16 16:55:30.121884 systemd-networkd[1482]: veth9f4b4f38: Gained carrier Dec 16 16:55:30.124220 systemd-networkd[1482]: cni0: Gained carrier Dec 16 16:55:30.127492 containerd[1604]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} Dec 16 16:55:30.127492 containerd[1604]: delegateAdd: netconf sent to delegate plugin: Dec 16 16:55:30.180852 containerd[1604]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T16:55:30.180794131Z" level=info msg="connecting to shim 5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9" address="unix:///run/containerd/s/0ea5a5675b97cd5f149a46e17cc0862897efbfcf85beb22f2444342b118ef959" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:55:30.226399 systemd[1]: Started cri-containerd-5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9.scope - libcontainer container 5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9. Dec 16 16:55:30.319338 containerd[1604]: time="2025-12-16T16:55:30.318912161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-k8qsm,Uid:47e2d8f9-02bc-4abf-befc-30879fceddd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9\"" Dec 16 16:55:30.323217 containerd[1604]: time="2025-12-16T16:55:30.323154965Z" level=info msg="CreateContainer within sandbox \"5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 16 16:55:30.339450 containerd[1604]: time="2025-12-16T16:55:30.339403626Z" level=info msg="Container 4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f: CDI devices from CRI Config.CDIDevices: []" Dec 16 16:55:30.345928 containerd[1604]: time="2025-12-16T16:55:30.345863582Z" level=info msg="CreateContainer within sandbox \"5c146042c3de3320edea9df21014ef67b8e05c2ec64649872568a1bc496677d9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f\"" Dec 16 16:55:30.347430 containerd[1604]: time="2025-12-16T16:55:30.346655163Z" level=info msg="StartContainer for \"4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f\"" Dec 16 16:55:30.349422 containerd[1604]: time="2025-12-16T16:55:30.349375893Z" level=info msg="connecting to shim 4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f" address="unix:///run/containerd/s/0ea5a5675b97cd5f149a46e17cc0862897efbfcf85beb22f2444342b118ef959" protocol=ttrpc version=3 Dec 16 16:55:30.374429 systemd[1]: Started cri-containerd-4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f.scope - libcontainer container 
4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f. Dec 16 16:55:30.421261 containerd[1604]: time="2025-12-16T16:55:30.421127846Z" level=info msg="StartContainer for \"4f73b0c21d818da2003f78249e697dcd84618091addd5dbe5c3af44eeee5f92f\" returns successfully" Dec 16 16:55:31.208207 kubelet[2839]: I1216 16:55:31.208107 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-k8qsm" podStartSLOduration=22.208084864 podStartE2EDuration="22.208084864s" podCreationTimestamp="2025-12-16 16:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:31.195249554 +0000 UTC m=+26.383293359" watchObservedRunningTime="2025-12-16 16:55:31.208084864 +0000 UTC m=+26.396128662" Dec 16 16:55:31.418498 systemd-networkd[1482]: cni0: Gained IPv6LL Dec 16 16:55:31.674488 systemd-networkd[1482]: veth9f4b4f38: Gained IPv6LL Dec 16 16:55:32.045335 containerd[1604]: time="2025-12-16T16:55:32.045150371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67bld,Uid:ecc8b387-66dd-4be7-b859-c61b99e0e459,Namespace:kube-system,Attempt:0,}" Dec 16 16:55:32.067497 systemd-networkd[1482]: vethfe2e82a5: Link UP Dec 16 16:55:32.074370 kernel: cni0: port 2(vethfe2e82a5) entered blocking state Dec 16 16:55:32.074675 kernel: cni0: port 2(vethfe2e82a5) entered disabled state Dec 16 16:55:32.083429 kernel: vethfe2e82a5: entered allmulticast mode Dec 16 16:55:32.092203 kernel: vethfe2e82a5: entered promiscuous mode Dec 16 16:55:32.109675 kernel: cni0: port 2(vethfe2e82a5) entered blocking state Dec 16 16:55:32.109790 kernel: cni0: port 2(vethfe2e82a5) entered forwarding state Dec 16 16:55:32.110843 systemd-networkd[1482]: vethfe2e82a5: Gained carrier Dec 16 16:55:32.113838 containerd[1604]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a48e8), "name":"cbr0", "type":"bridge"} Dec 16 16:55:32.113838 containerd[1604]: delegateAdd: netconf sent to delegate plugin: Dec 16 16:55:32.152910 containerd[1604]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-12-16T16:55:32.152335913Z" level=info msg="connecting to shim 131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b" address="unix:///run/containerd/s/cc16d0470cdbb9440741b56f17eae75b531568252d267d8abf770c7b69e9c0fd" namespace=k8s.io protocol=ttrpc version=3 Dec 16 16:55:32.201371 systemd[1]: Started cri-containerd-131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b.scope - libcontainer container 131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b. 
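[Editor's note] The two "delegateAdd: netconf sent to delegate plugin" entries above show the flannel CNI plugin handing a generated bridge/host-local configuration to its delegate: this node's pods come from 192.168.0.0/24 on the cbr0 bridge, the /17 route covers the wider flannel network, and the MTU of 1450 presumably leaves room for VXLAN overhead on flannel.1. A minimal sketch of reading that config (illustrative only; the type names below are my own, and only the JSON string is taken verbatim from the log):

    // netconf_sketch.go - decode the delegate netconf logged by delegateAdd above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    // ipamRange and netConf are illustrative struct names, not flannel's own types.
    type ipamRange struct {
        Subnet string `json:"subnet"`
    }

    type netConf struct {
        Name string `json:"name"`
        Type string `json:"type"`
        MTU  int    `json:"mtu"`
        IPAM struct {
            Type   string        `json:"type"`
            Ranges [][]ipamRange `json:"ranges"`
            Routes []struct {
                Dst string `json:"dst"`
            } `json:"routes"`
        } `json:"ipam"`
    }

    func main() {
        // JSON copied verbatim from the log entry above.
        logged := `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

        var conf netConf
        if err := json.Unmarshal([]byte(logged), &conf); err != nil {
            panic(err)
        }
        // Prints: bridge cbr0 mtu=1450 node subnet=192.168.0.0/24 cluster route=192.168.0.0/17
        fmt.Printf("%s %s mtu=%d node subnet=%s cluster route=%s\n",
            conf.Type, conf.Name, conf.MTU,
            conf.IPAM.Ranges[0][0].Subnet, conf.IPAM.Routes[0].Dst)
    }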
Dec 16 16:55:32.290001 containerd[1604]: time="2025-12-16T16:55:32.289890275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-67bld,Uid:ecc8b387-66dd-4be7-b859-c61b99e0e459,Namespace:kube-system,Attempt:0,} returns sandbox id \"131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b\""
Dec 16 16:55:32.300415 containerd[1604]: time="2025-12-16T16:55:32.298378609Z" level=info msg="CreateContainer within sandbox \"131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 16 16:55:32.311194 containerd[1604]: time="2025-12-16T16:55:32.310936850Z" level=info msg="Container 5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2: CDI devices from CRI Config.CDIDevices: []"
Dec 16 16:55:32.321792 containerd[1604]: time="2025-12-16T16:55:32.321699476Z" level=info msg="CreateContainer within sandbox \"131211e2a4da0910c9c48ab1b6d3ab0a4b151788b1181517ed4fa51f2a49208b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2\""
Dec 16 16:55:32.323799 containerd[1604]: time="2025-12-16T16:55:32.322437420Z" level=info msg="StartContainer for \"5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2\""
Dec 16 16:55:32.324512 containerd[1604]: time="2025-12-16T16:55:32.324446640Z" level=info msg="connecting to shim 5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2" address="unix:///run/containerd/s/cc16d0470cdbb9440741b56f17eae75b531568252d267d8abf770c7b69e9c0fd" protocol=ttrpc version=3
Dec 16 16:55:32.352359 systemd[1]: Started cri-containerd-5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2.scope - libcontainer container 5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2.
Dec 16 16:55:32.398865 containerd[1604]: time="2025-12-16T16:55:32.398809643Z" level=info msg="StartContainer for \"5b169eaf14aa2edfa8574ae5a4db4555d9e1df9c68b9c9c70138401dfec641d2\" returns successfully"
Dec 16 16:55:33.204926 kubelet[2839]: I1216 16:55:33.204673 2839 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-67bld" podStartSLOduration=24.204654769 podStartE2EDuration="24.204654769s" podCreationTimestamp="2025-12-16 16:55:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-16 16:55:33.204438951 +0000 UTC m=+28.392482749" watchObservedRunningTime="2025-12-16 16:55:33.204654769 +0000 UTC m=+28.392698567"
Dec 16 16:55:33.274473 systemd-networkd[1482]: vethfe2e82a5: Gained IPv6LL
Dec 16 16:56:16.095824 systemd[1]: Started sshd@7-10.230.10.122:22-139.178.68.195:39424.service - OpenSSH per-connection server daemon (139.178.68.195:39424).
Dec 16 16:56:17.045368 sshd[3929]: Accepted publickey for core from 139.178.68.195 port 39424 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:17.049228 sshd-session[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:17.058783 systemd-logind[1574]: New session 10 of user core.
Dec 16 16:56:17.063352 systemd[1]: Started session-10.scope - Session 10 of User core.
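[Editor's note] The kubelet's pod_startup_latency_tracker entries above report two numbers: podStartE2EDuration (observed running time minus podCreationTimestamp) and podStartSLOduration, which appears to subtract the image-pull window. That is consistent with the logged values: kube-flannel-ds-v4jh8 shows 11.17 s end-to-end, about 7.87 s pulling and 3.31 s SLO, while the two coredns pods report zero pull timestamps and identical SLO and E2E durations (22.21 s and 24.20 s). A rough check of that arithmetic, using values copied from the kube-flannel entry (the file name and helper are mine, not kubelet code):

    // slo_check.go - recompute the flannel pod's startup durations from the log above.
    // Assumption (hedged): SLO duration = E2E duration minus image-pull time; the
    // recomputed value agrees with the logged 3.305283647 to within nanoseconds.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }

        // Values taken from the kube-flannel-ds-v4jh8 entry.
        created := parse("2025-12-16 16:55:09 +0000 UTC")
        firstPull := parse("2025-12-16 16:55:10.260494749 +0000 UTC")
        lastPull := parse("2025-12-16 16:55:18.127298324 +0000 UTC")
        running := parse("2025-12-16 16:55:20.172087215 +0000 UTC") // watchObservedRunningTime

        e2e := running.Sub(created)     // ~11.172s, matches podStartE2EDuration
        pull := lastPull.Sub(firstPull) // ~7.867s spent pulling the image
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull ~3.305s, matches podStartSLOduration
    }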
Dec 16 16:56:17.813103 sshd[3932]: Connection closed by 139.178.68.195 port 39424
Dec 16 16:56:17.813644 sshd-session[3929]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:17.819944 systemd[1]: sshd@7-10.230.10.122:22-139.178.68.195:39424.service: Deactivated successfully.
Dec 16 16:56:17.823369 systemd[1]: session-10.scope: Deactivated successfully.
Dec 16 16:56:17.824994 systemd-logind[1574]: Session 10 logged out. Waiting for processes to exit.
Dec 16 16:56:17.826720 systemd-logind[1574]: Removed session 10.
Dec 16 16:56:22.971354 systemd[1]: Started sshd@8-10.230.10.122:22-139.178.68.195:40286.service - OpenSSH per-connection server daemon (139.178.68.195:40286).
Dec 16 16:56:23.913056 sshd[3967]: Accepted publickey for core from 139.178.68.195 port 40286 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:23.915352 sshd-session[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:23.923838 systemd-logind[1574]: New session 11 of user core.
Dec 16 16:56:23.937446 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 16 16:56:24.648966 sshd[3970]: Connection closed by 139.178.68.195 port 40286
Dec 16 16:56:24.650401 sshd-session[3967]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:24.657965 systemd[1]: sshd@8-10.230.10.122:22-139.178.68.195:40286.service: Deactivated successfully.
Dec 16 16:56:24.660967 systemd[1]: session-11.scope: Deactivated successfully.
Dec 16 16:56:24.662341 systemd-logind[1574]: Session 11 logged out. Waiting for processes to exit.
Dec 16 16:56:24.664572 systemd-logind[1574]: Removed session 11.
Dec 16 16:56:29.812293 systemd[1]: Started sshd@9-10.230.10.122:22-139.178.68.195:40290.service - OpenSSH per-connection server daemon (139.178.68.195:40290).
Dec 16 16:56:30.743939 sshd[4004]: Accepted publickey for core from 139.178.68.195 port 40290 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:30.746098 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:30.755138 systemd-logind[1574]: New session 12 of user core.
Dec 16 16:56:30.759568 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 16 16:56:31.463092 sshd[4013]: Connection closed by 139.178.68.195 port 40290
Dec 16 16:56:31.464029 sshd-session[4004]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:31.471053 systemd[1]: sshd@9-10.230.10.122:22-139.178.68.195:40290.service: Deactivated successfully.
Dec 16 16:56:31.474254 systemd[1]: session-12.scope: Deactivated successfully.
Dec 16 16:56:31.475823 systemd-logind[1574]: Session 12 logged out. Waiting for processes to exit.
Dec 16 16:56:31.478679 systemd-logind[1574]: Removed session 12.
Dec 16 16:56:31.623876 systemd[1]: Started sshd@10-10.230.10.122:22-139.178.68.195:37858.service - OpenSSH per-connection server daemon (139.178.68.195:37858).
Dec 16 16:56:32.547718 sshd[4041]: Accepted publickey for core from 139.178.68.195 port 37858 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:32.549740 sshd-session[4041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:32.558399 systemd-logind[1574]: New session 13 of user core.
Dec 16 16:56:32.564410 systemd[1]: Started session-13.scope - Session 13 of User core.
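[Editor's note] Each login above, and those that follow, runs through the same cycle: systemd starts a per-connection sshd@<seq>-<local addr:port>-<remote addr:port>.service instance for the incoming TCP connection, sshd authenticates the core user by public key, pam_unix opens the session, systemd-logind places it in a session-N.scope, and on disconnect the same units are deactivated in reverse. As an illustrative aside (not part of the log; the naming layout is inferred from the unit names above), the instance name can be unpacked like this:

    // sshd_instance_sketch.go - unpack the per-connection unit names seen above,
    // e.g. "sshd@7-10.230.10.122:22-139.178.68.195:39424.service".
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        unit := "sshd@7-10.230.10.122:22-139.178.68.195:39424.service"

        // Strip the template prefix and unit suffix, then split the remaining
        // instance string into sequence number, local endpoint, remote endpoint.
        instance := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
        parts := strings.SplitN(instance, "-", 3)
        if len(parts) != 3 {
            panic("unexpected instance name")
        }
        // Output: connection #7: local 10.230.10.122:22 <- remote 139.178.68.195:39424
        fmt.Printf("connection #%s: local %s <- remote %s\n", parts[0], parts[1], parts[2])
    }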
Dec 16 16:56:33.333951 sshd[4044]: Connection closed by 139.178.68.195 port 37858
Dec 16 16:56:33.335434 sshd-session[4041]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:33.342538 systemd[1]: sshd@10-10.230.10.122:22-139.178.68.195:37858.service: Deactivated successfully.
Dec 16 16:56:33.346617 systemd[1]: session-13.scope: Deactivated successfully.
Dec 16 16:56:33.348305 systemd-logind[1574]: Session 13 logged out. Waiting for processes to exit.
Dec 16 16:56:33.350720 systemd-logind[1574]: Removed session 13.
Dec 16 16:56:33.494507 systemd[1]: Started sshd@11-10.230.10.122:22-139.178.68.195:37862.service - OpenSSH per-connection server daemon (139.178.68.195:37862).
Dec 16 16:56:34.421684 sshd[4054]: Accepted publickey for core from 139.178.68.195 port 37862 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:34.423782 sshd-session[4054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:34.431239 systemd-logind[1574]: New session 14 of user core.
Dec 16 16:56:34.443387 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 16 16:56:35.142468 sshd[4057]: Connection closed by 139.178.68.195 port 37862
Dec 16 16:56:35.143629 sshd-session[4054]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:35.153022 systemd[1]: sshd@11-10.230.10.122:22-139.178.68.195:37862.service: Deactivated successfully.
Dec 16 16:56:35.156494 systemd[1]: session-14.scope: Deactivated successfully.
Dec 16 16:56:35.163136 systemd-logind[1574]: Session 14 logged out. Waiting for processes to exit.
Dec 16 16:56:35.166867 systemd-logind[1574]: Removed session 14.
Dec 16 16:56:40.302136 systemd[1]: Started sshd@12-10.230.10.122:22-139.178.68.195:37866.service - OpenSSH per-connection server daemon (139.178.68.195:37866).
Dec 16 16:56:41.246570 sshd[4091]: Accepted publickey for core from 139.178.68.195 port 37866 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:41.248561 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:41.257461 systemd-logind[1574]: New session 15 of user core.
Dec 16 16:56:41.262428 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 16 16:56:41.967688 sshd[4117]: Connection closed by 139.178.68.195 port 37866
Dec 16 16:56:41.968940 sshd-session[4091]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:41.974958 systemd[1]: sshd@12-10.230.10.122:22-139.178.68.195:37866.service: Deactivated successfully.
Dec 16 16:56:41.977995 systemd[1]: session-15.scope: Deactivated successfully.
Dec 16 16:56:41.979669 systemd-logind[1574]: Session 15 logged out. Waiting for processes to exit.
Dec 16 16:56:41.982284 systemd-logind[1574]: Removed session 15.
Dec 16 16:56:42.127336 systemd[1]: Started sshd@13-10.230.10.122:22-139.178.68.195:56744.service - OpenSSH per-connection server daemon (139.178.68.195:56744).
Dec 16 16:56:43.058047 sshd[4129]: Accepted publickey for core from 139.178.68.195 port 56744 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:43.060078 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:43.068291 systemd-logind[1574]: New session 16 of user core.
Dec 16 16:56:43.072382 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 16 16:56:44.110678 sshd[4132]: Connection closed by 139.178.68.195 port 56744
Dec 16 16:56:44.111959 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:44.118599 systemd[1]: sshd@13-10.230.10.122:22-139.178.68.195:56744.service: Deactivated successfully.
Dec 16 16:56:44.121961 systemd[1]: session-16.scope: Deactivated successfully.
Dec 16 16:56:44.123524 systemd-logind[1574]: Session 16 logged out. Waiting for processes to exit.
Dec 16 16:56:44.125881 systemd-logind[1574]: Removed session 16.
Dec 16 16:56:44.273651 systemd[1]: Started sshd@14-10.230.10.122:22-139.178.68.195:56760.service - OpenSSH per-connection server daemon (139.178.68.195:56760).
Dec 16 16:56:45.200147 sshd[4142]: Accepted publickey for core from 139.178.68.195 port 56760 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:45.202595 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:45.210988 systemd-logind[1574]: New session 17 of user core.
Dec 16 16:56:45.218392 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 16 16:56:46.662415 sshd[4145]: Connection closed by 139.178.68.195 port 56760
Dec 16 16:56:46.663686 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:46.670818 systemd[1]: sshd@14-10.230.10.122:22-139.178.68.195:56760.service: Deactivated successfully.
Dec 16 16:56:46.674458 systemd[1]: session-17.scope: Deactivated successfully.
Dec 16 16:56:46.676004 systemd-logind[1574]: Session 17 logged out. Waiting for processes to exit.
Dec 16 16:56:46.678826 systemd-logind[1574]: Removed session 17.
Dec 16 16:56:46.824082 systemd[1]: Started sshd@15-10.230.10.122:22-139.178.68.195:56762.service - OpenSSH per-connection server daemon (139.178.68.195:56762).
Dec 16 16:56:47.747193 sshd[4183]: Accepted publickey for core from 139.178.68.195 port 56762 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:47.749181 sshd-session[4183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:47.757498 systemd-logind[1574]: New session 18 of user core.
Dec 16 16:56:47.767441 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 16 16:56:48.671917 sshd[4186]: Connection closed by 139.178.68.195 port 56762
Dec 16 16:56:48.670903 sshd-session[4183]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:48.676211 systemd[1]: sshd@15-10.230.10.122:22-139.178.68.195:56762.service: Deactivated successfully.
Dec 16 16:56:48.679688 systemd[1]: session-18.scope: Deactivated successfully.
Dec 16 16:56:48.682316 systemd-logind[1574]: Session 18 logged out. Waiting for processes to exit.
Dec 16 16:56:48.685066 systemd-logind[1574]: Removed session 18.
Dec 16 16:56:48.826383 systemd[1]: Started sshd@16-10.230.10.122:22-139.178.68.195:56766.service - OpenSSH per-connection server daemon (139.178.68.195:56766).
Dec 16 16:56:49.733957 sshd[4197]: Accepted publickey for core from 139.178.68.195 port 56766 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:49.735916 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:49.744065 systemd-logind[1574]: New session 19 of user core.
Dec 16 16:56:49.755454 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 16 16:56:50.443230 sshd[4200]: Connection closed by 139.178.68.195 port 56766
Dec 16 16:56:50.444052 sshd-session[4197]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:50.449752 systemd[1]: sshd@16-10.230.10.122:22-139.178.68.195:56766.service: Deactivated successfully.
Dec 16 16:56:50.453703 systemd[1]: session-19.scope: Deactivated successfully.
Dec 16 16:56:50.455678 systemd-logind[1574]: Session 19 logged out. Waiting for processes to exit.
Dec 16 16:56:50.457510 systemd-logind[1574]: Removed session 19.
Dec 16 16:56:55.600452 systemd[1]: Started sshd@17-10.230.10.122:22-139.178.68.195:40976.service - OpenSSH per-connection server daemon (139.178.68.195:40976).
Dec 16 16:56:56.517469 sshd[4235]: Accepted publickey for core from 139.178.68.195 port 40976 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:56:56.519243 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:56:56.526944 systemd-logind[1574]: New session 20 of user core.
Dec 16 16:56:56.535418 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 16 16:56:57.244599 sshd[4259]: Connection closed by 139.178.68.195 port 40976
Dec 16 16:56:57.245781 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Dec 16 16:56:57.252315 systemd[1]: sshd@17-10.230.10.122:22-139.178.68.195:40976.service: Deactivated successfully.
Dec 16 16:56:57.255022 systemd[1]: session-20.scope: Deactivated successfully.
Dec 16 16:56:57.256754 systemd-logind[1574]: Session 20 logged out. Waiting for processes to exit.
Dec 16 16:56:57.258785 systemd-logind[1574]: Removed session 20.
Dec 16 16:57:02.407560 systemd[1]: Started sshd@18-10.230.10.122:22-139.178.68.195:38596.service - OpenSSH per-connection server daemon (139.178.68.195:38596).
Dec 16 16:57:03.326731 sshd[4293]: Accepted publickey for core from 139.178.68.195 port 38596 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:57:03.328782 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:57:03.338225 systemd-logind[1574]: New session 21 of user core.
Dec 16 16:57:03.347464 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 16 16:57:04.043187 sshd[4296]: Connection closed by 139.178.68.195 port 38596
Dec 16 16:57:04.043028 sshd-session[4293]: pam_unix(sshd:session): session closed for user core
Dec 16 16:57:04.048719 systemd[1]: sshd@18-10.230.10.122:22-139.178.68.195:38596.service: Deactivated successfully.
Dec 16 16:57:04.051937 systemd[1]: session-21.scope: Deactivated successfully.
Dec 16 16:57:04.054388 systemd-logind[1574]: Session 21 logged out. Waiting for processes to exit.
Dec 16 16:57:04.057354 systemd-logind[1574]: Removed session 21.
Dec 16 16:57:09.202753 systemd[1]: Started sshd@19-10.230.10.122:22-139.178.68.195:38608.service - OpenSSH per-connection server daemon (139.178.68.195:38608).
Dec 16 16:57:10.128688 sshd[4331]: Accepted publickey for core from 139.178.68.195 port 38608 ssh2: RSA SHA256:aWRHM7yqDy00ChHK+O7mKYt3bRdoTZshpl3R3naUTkM
Dec 16 16:57:10.130833 sshd-session[4331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 16 16:57:10.138826 systemd-logind[1574]: New session 22 of user core.
Dec 16 16:57:10.145394 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 16 16:57:10.856607 sshd[4334]: Connection closed by 139.178.68.195 port 38608
Dec 16 16:57:10.857646 sshd-session[4331]: pam_unix(sshd:session): session closed for user core
Dec 16 16:57:10.863379 systemd[1]: sshd@19-10.230.10.122:22-139.178.68.195:38608.service: Deactivated successfully.
Dec 16 16:57:10.867527 systemd[1]: session-22.scope: Deactivated successfully.
Dec 16 16:57:10.869270 systemd-logind[1574]: Session 22 logged out. Waiting for processes to exit.
Dec 16 16:57:10.871727 systemd-logind[1574]: Removed session 22.