Feb 13 22:23:00.051271 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:44:05 -00 2025 Feb 13 22:23:00.051323 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 22:23:00.051338 kernel: BIOS-provided physical RAM map: Feb 13 22:23:00.051432 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 22:23:00.051449 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 22:23:00.051460 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 22:23:00.051472 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Feb 13 22:23:00.051482 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Feb 13 22:23:00.051493 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 22:23:00.051503 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 22:23:00.051513 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 22:23:00.051524 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 22:23:00.051541 kernel: NX (Execute Disable) protection: active Feb 13 22:23:00.051552 kernel: APIC: Static calls initialized Feb 13 22:23:00.051564 kernel: SMBIOS 2.8 present. Feb 13 22:23:00.051576 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Feb 13 22:23:00.051588 kernel: Hypervisor detected: KVM Feb 13 22:23:00.051603 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 22:23:00.051615 kernel: kvm-clock: using sched offset of 4448933637 cycles Feb 13 22:23:00.051627 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 22:23:00.051639 kernel: tsc: Detected 2499.998 MHz processor Feb 13 22:23:00.051650 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 22:23:00.051662 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 22:23:00.051673 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Feb 13 22:23:00.051697 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 22:23:00.051709 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 22:23:00.051724 kernel: Using GB pages for direct mapping Feb 13 22:23:00.051735 kernel: ACPI: Early table checksum verification disabled Feb 13 22:23:00.051759 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Feb 13 22:23:00.051769 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051780 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051791 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051801 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Feb 13 22:23:00.051812 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051822 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Feb 13 22:23:00.051837 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051848 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 22:23:00.051859 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Feb 13 22:23:00.051869 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Feb 13 22:23:00.051880 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Feb 13 22:23:00.051897 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Feb 13 22:23:00.051908 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Feb 13 22:23:00.051923 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Feb 13 22:23:00.051934 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Feb 13 22:23:00.051945 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 13 22:23:00.051957 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 13 22:23:00.051968 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Feb 13 22:23:00.051979 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Feb 13 22:23:00.052001 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Feb 13 22:23:00.052013 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Feb 13 22:23:00.052029 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Feb 13 22:23:00.052040 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Feb 13 22:23:00.052064 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Feb 13 22:23:00.052076 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Feb 13 22:23:00.052088 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Feb 13 22:23:00.052099 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Feb 13 22:23:00.052123 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Feb 13 22:23:00.052134 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Feb 13 22:23:00.052146 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Feb 13 22:23:00.052161 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Feb 13 22:23:00.052173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Feb 13 22:23:00.052185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Feb 13 22:23:00.052196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Feb 13 22:23:00.052208 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Feb 13 22:23:00.052232 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Feb 13 22:23:00.052268 kernel: Zone ranges: Feb 13 22:23:00.052280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 22:23:00.052292 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Feb 13 22:23:00.052310 kernel: Normal empty Feb 13 22:23:00.052322 kernel: Movable zone start for each node Feb 13 22:23:00.052334 kernel: Early memory node ranges Feb 13 22:23:00.052346 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 22:23:00.053391 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Feb 13 22:23:00.053410 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Feb 13 22:23:00.053423 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 22:23:00.053435 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 22:23:00.053447 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Feb 13 22:23:00.053459 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 22:23:00.053478 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 22:23:00.053490 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Feb 13 22:23:00.053502 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 22:23:00.053514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 22:23:00.053526 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 22:23:00.053538 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 22:23:00.053549 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 22:23:00.053561 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 22:23:00.053573 kernel: TSC deadline timer available Feb 13 22:23:00.053590 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Feb 13 22:23:00.053602 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 22:23:00.053613 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 22:23:00.053625 kernel: Booting paravirtualized kernel on KVM Feb 13 22:23:00.053637 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 22:23:00.053650 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Feb 13 22:23:00.053662 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Feb 13 22:23:00.053674 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Feb 13 22:23:00.053686 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Feb 13 22:23:00.053702 kernel: kvm-guest: PV spinlocks enabled Feb 13 22:23:00.053714 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 22:23:00.053728 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 22:23:00.053741 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 22:23:00.053753 kernel: random: crng init done Feb 13 22:23:00.053764 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 22:23:00.053776 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 13 22:23:00.053788 kernel: Fallback order for Node 0: 0 Feb 13 22:23:00.053805 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Feb 13 22:23:00.053817 kernel: Policy zone: DMA32 Feb 13 22:23:00.053829 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 22:23:00.053841 kernel: software IO TLB: area num 16. Feb 13 22:23:00.053853 kernel: Memory: 1901524K/2096616K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42976K init, 2216K bss, 194832K reserved, 0K cma-reserved) Feb 13 22:23:00.053865 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Feb 13 22:23:00.053877 kernel: Kernel/User page tables isolation: enabled Feb 13 22:23:00.053889 kernel: ftrace: allocating 37923 entries in 149 pages Feb 13 22:23:00.053901 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 22:23:00.053918 kernel: Dynamic Preempt: voluntary Feb 13 22:23:00.053930 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 22:23:00.053955 kernel: rcu: RCU event tracing is enabled. 
Feb 13 22:23:00.053967 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Feb 13 22:23:00.053979 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 22:23:00.054014 kernel: Rude variant of Tasks RCU enabled. Feb 13 22:23:00.054029 kernel: Tracing variant of Tasks RCU enabled. Feb 13 22:23:00.054041 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 22:23:00.054053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Feb 13 22:23:00.054078 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Feb 13 22:23:00.054090 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 22:23:00.054102 kernel: Console: colour VGA+ 80x25 Feb 13 22:23:00.054118 kernel: printk: console [tty0] enabled Feb 13 22:23:00.054143 kernel: printk: console [ttyS0] enabled Feb 13 22:23:00.054155 kernel: ACPI: Core revision 20230628 Feb 13 22:23:00.054168 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 22:23:00.054180 kernel: x2apic enabled Feb 13 22:23:00.054197 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 22:23:00.054210 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 22:23:00.054223 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Feb 13 22:23:00.054246 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 22:23:00.054260 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Feb 13 22:23:00.054272 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Feb 13 22:23:00.054285 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 22:23:00.054297 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 22:23:00.054309 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 22:23:00.054327 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 22:23:00.054340 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Feb 13 22:23:00.054352 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 22:23:00.054384 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 22:23:00.054397 kernel: MDS: Mitigation: Clear CPU buffers Feb 13 22:23:00.054409 kernel: MMIO Stale Data: Unknown: No mitigations Feb 13 22:23:00.054421 kernel: SRBDS: Unknown: Dependent on hypervisor status Feb 13 22:23:00.054434 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 22:23:00.054447 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 22:23:00.054459 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 22:23:00.054472 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 22:23:00.054490 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Feb 13 22:23:00.054503 kernel: Freeing SMP alternatives memory: 32K Feb 13 22:23:00.054516 kernel: pid_max: default: 32768 minimum: 301 Feb 13 22:23:00.054528 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 22:23:00.054541 kernel: landlock: Up and running. Feb 13 22:23:00.054553 kernel: SELinux: Initializing. 
Feb 13 22:23:00.054565 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 22:23:00.054578 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Feb 13 22:23:00.054591 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Feb 13 22:23:00.054603 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 22:23:00.054616 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 22:23:00.054634 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Feb 13 22:23:00.054647 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Feb 13 22:23:00.054659 kernel: signal: max sigframe size: 1776 Feb 13 22:23:00.054672 kernel: rcu: Hierarchical SRCU implementation. Feb 13 22:23:00.054685 kernel: rcu: Max phase no-delay instances is 400. Feb 13 22:23:00.054698 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 22:23:00.054710 kernel: smp: Bringing up secondary CPUs ... Feb 13 22:23:00.054723 kernel: smpboot: x86: Booting SMP configuration: Feb 13 22:23:00.054735 kernel: .... node #0, CPUs: #1 Feb 13 22:23:00.054764 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 22:23:00.054777 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 22:23:00.054789 kernel: smpboot: Max logical packages: 16 Feb 13 22:23:00.054801 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Feb 13 22:23:00.054826 kernel: devtmpfs: initialized Feb 13 22:23:00.054838 kernel: x86/mm: Memory block size: 128MB Feb 13 22:23:00.054850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 22:23:00.054861 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Feb 13 22:23:00.054873 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 22:23:00.054889 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 22:23:00.054901 kernel: audit: initializing netlink subsys (disabled) Feb 13 22:23:00.054925 kernel: audit: type=2000 audit(1739485378.419:1): state=initialized audit_enabled=0 res=1 Feb 13 22:23:00.054937 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 22:23:00.054949 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 22:23:00.054961 kernel: cpuidle: using governor menu Feb 13 22:23:00.054973 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 22:23:00.054985 kernel: dca service started, version 1.12.1 Feb 13 22:23:00.054998 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 22:23:00.055014 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 22:23:00.055026 kernel: PCI: Using configuration type 1 for base access Feb 13 22:23:00.055039 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 22:23:00.055051 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 22:23:00.055063 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 22:23:00.055076 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 22:23:00.055088 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 22:23:00.055100 kernel: ACPI: Added _OSI(Module Device) Feb 13 22:23:00.055124 kernel: ACPI: Added _OSI(Processor Device) Feb 13 22:23:00.055140 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 22:23:00.055152 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 22:23:00.055164 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 22:23:00.055176 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 22:23:00.055187 kernel: ACPI: Interpreter enabled Feb 13 22:23:00.055199 kernel: ACPI: PM: (supports S0 S5) Feb 13 22:23:00.055211 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 22:23:00.055245 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 22:23:00.055260 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 22:23:00.055277 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 22:23:00.055290 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 22:23:00.055997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 22:23:00.056169 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Feb 13 22:23:00.056382 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Feb 13 22:23:00.056403 kernel: PCI host bridge to bus 0000:00 Feb 13 22:23:00.056589 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 22:23:00.056749 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 22:23:00.056924 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 22:23:00.057074 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Feb 13 22:23:00.057223 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 22:23:00.057403 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Feb 13 22:23:00.057555 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 22:23:00.057749 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 22:23:00.057974 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Feb 13 22:23:00.060520 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Feb 13 22:23:00.060703 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Feb 13 22:23:00.060897 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Feb 13 22:23:00.061082 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 22:23:00.061287 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.061558 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Feb 13 22:23:00.061758 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.061924 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Feb 13 22:23:00.062118 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.062311 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Feb 13 22:23:00.062504 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Feb 13 
22:23:00.062710 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Feb 13 22:23:00.062886 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.064537 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Feb 13 22:23:00.064725 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.064894 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Feb 13 22:23:00.065073 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.065264 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Feb 13 22:23:00.065468 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Feb 13 22:23:00.065637 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Feb 13 22:23:00.065816 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Feb 13 22:23:00.065981 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 22:23:00.066146 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Feb 13 22:23:00.066325 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Feb 13 22:23:00.069541 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Feb 13 22:23:00.069740 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Feb 13 22:23:00.069910 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 22:23:00.070075 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Feb 13 22:23:00.070252 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Feb 13 22:23:00.070454 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 22:23:00.070622 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 22:23:00.070805 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 22:23:00.070975 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Feb 13 22:23:00.071137 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Feb 13 22:23:00.071324 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 22:23:00.072672 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 22:23:00.072860 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Feb 13 22:23:00.073041 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Feb 13 22:23:00.073208 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 22:23:00.074452 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 22:23:00.074623 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 22:23:00.074802 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 22:23:00.074989 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Feb 13 22:23:00.075176 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Feb 13 22:23:00.075938 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 22:23:00.076116 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 22:23:00.076314 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Feb 13 22:23:00.076532 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Feb 13 22:23:00.076702 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 22:23:00.076866 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 22:23:00.077038 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 22:23:00.077230 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Feb 13 
22:23:00.077434 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Feb 13 22:23:00.077614 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 22:23:00.077792 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 22:23:00.077956 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 22:23:00.078122 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 22:23:00.078301 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 22:23:00.081688 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 22:23:00.081866 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 22:23:00.082052 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 22:23:00.082216 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 22:23:00.082455 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 22:23:00.082624 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 22:23:00.082791 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 22:23:00.083001 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 22:23:00.083196 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 22:23:00.085441 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 22:23:00.085650 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 22:23:00.085819 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 22:23:00.085976 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 22:23:00.085995 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 22:23:00.086009 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 22:23:00.086021 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 22:23:00.086054 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 22:23:00.086066 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 22:23:00.086078 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 22:23:00.086090 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 22:23:00.086102 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 22:23:00.086126 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 22:23:00.086138 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 22:23:00.086150 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 22:23:00.086163 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 22:23:00.086193 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 22:23:00.086206 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 22:23:00.086218 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 22:23:00.086231 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 22:23:00.086255 kernel: iommu: Default domain type: Translated Feb 13 22:23:00.086268 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 22:23:00.086281 kernel: PCI: Using ACPI for IRQ routing Feb 13 22:23:00.086293 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 22:23:00.086306 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 22:23:00.086324 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Feb 13 22:23:00.086564 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Feb 13 22:23:00.086719 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 22:23:00.086884 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 22:23:00.086904 kernel: vgaarb: loaded Feb 13 22:23:00.086917 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 22:23:00.086929 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 22:23:00.086942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 22:23:00.086954 kernel: pnp: PnP ACPI init Feb 13 22:23:00.087179 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 22:23:00.087201 kernel: pnp: PnP ACPI: found 5 devices Feb 13 22:23:00.087214 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 22:23:00.087227 kernel: NET: Registered PF_INET protocol family Feb 13 22:23:00.087251 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 22:23:00.087264 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 22:23:00.087278 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 22:23:00.087291 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 22:23:00.087311 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 22:23:00.087324 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 22:23:00.087336 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 22:23:00.087349 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 22:23:00.089393 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 22:23:00.089408 kernel: NET: Registered PF_XDP protocol family Feb 13 22:23:00.089599 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Feb 13 22:23:00.089775 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Feb 13 22:23:00.089984 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Feb 13 22:23:00.090150 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Feb 13 22:23:00.090331 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 22:23:00.090552 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 22:23:00.090716 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 22:23:00.090878 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 22:23:00.091049 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Feb 13 22:23:00.091211 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Feb 13 22:23:00.092423 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Feb 13 22:23:00.092591 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Feb 13 22:23:00.092756 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Feb 13 22:23:00.092919 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Feb 13 22:23:00.093083 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Feb 13 22:23:00.093271 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Feb 13 22:23:00.093515 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Feb 13 22:23:00.093704 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Feb 13 
22:23:00.093885 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Feb 13 22:23:00.094048 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Feb 13 22:23:00.094254 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Feb 13 22:23:00.096465 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Feb 13 22:23:00.096633 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Feb 13 22:23:00.096797 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Feb 13 22:23:00.096968 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Feb 13 22:23:00.097130 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 22:23:00.097329 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Feb 13 22:23:00.098525 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Feb 13 22:23:00.098690 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Feb 13 22:23:00.098883 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 22:23:00.099037 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Feb 13 22:23:00.099184 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Feb 13 22:23:00.101404 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Feb 13 22:23:00.101575 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 22:23:00.101739 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Feb 13 22:23:00.101911 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Feb 13 22:23:00.102091 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Feb 13 22:23:00.102278 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 22:23:00.102481 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Feb 13 22:23:00.102653 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Feb 13 22:23:00.102814 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Feb 13 22:23:00.102978 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 22:23:00.103151 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Feb 13 22:23:00.103338 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Feb 13 22:23:00.105587 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Feb 13 22:23:00.105752 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 22:23:00.105915 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Feb 13 22:23:00.106077 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Feb 13 22:23:00.106249 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Feb 13 22:23:00.106458 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 22:23:00.106615 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 22:23:00.106763 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 22:23:00.106910 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 22:23:00.107067 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Feb 13 22:23:00.107224 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 22:23:00.109411 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Feb 13 22:23:00.109605 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Feb 13 22:23:00.109789 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Feb 13 22:23:00.109945 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Feb 13 22:23:00.110127 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Feb 13 22:23:00.110340 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Feb 13 22:23:00.110542 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Feb 13 22:23:00.110695 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Feb 13 22:23:00.110888 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Feb 13 22:23:00.111047 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Feb 13 22:23:00.111204 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Feb 13 22:23:00.113432 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Feb 13 22:23:00.113653 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Feb 13 22:23:00.113927 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Feb 13 22:23:00.114127 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Feb 13 22:23:00.114311 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Feb 13 22:23:00.114502 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Feb 13 22:23:00.114695 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Feb 13 22:23:00.114918 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Feb 13 22:23:00.115087 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Feb 13 22:23:00.115286 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Feb 13 22:23:00.117513 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Feb 13 22:23:00.117674 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Feb 13 22:23:00.117839 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Feb 13 22:23:00.117996 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Feb 13 22:23:00.118161 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Feb 13 22:23:00.118182 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 22:23:00.118197 kernel: PCI: CLS 0 bytes, default 64 Feb 13 22:23:00.118211 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 13 22:23:00.118225 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Feb 13 22:23:00.118250 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 22:23:00.118265 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 22:23:00.118278 kernel: Initialise system trusted keyrings Feb 13 22:23:00.118299 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 22:23:00.118313 kernel: Key type asymmetric registered Feb 13 22:23:00.118326 kernel: Asymmetric key parser 'x509' registered Feb 13 22:23:00.118339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 22:23:00.118353 kernel: io scheduler mq-deadline registered Feb 13 22:23:00.118482 kernel: io scheduler kyber registered Feb 13 22:23:00.118496 kernel: io scheduler bfq registered Feb 13 22:23:00.118683 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 22:23:00.118871 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 22:23:00.119049 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.119215 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 22:23:00.119433 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Feb 13 22:23:00.119612 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.119787 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 22:23:00.119963 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 22:23:00.120148 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.120329 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 22:23:00.120520 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 22:23:00.120687 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.120842 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 22:23:00.121004 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 22:23:00.121201 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.121410 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 22:23:00.121608 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 22:23:00.121787 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.121975 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 22:23:00.122153 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 22:23:00.122343 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.122545 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 22:23:00.122726 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 22:23:00.122897 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 22:23:00.122918 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 22:23:00.122933 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 22:23:00.122954 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 22:23:00.122968 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 22:23:00.122982 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 22:23:00.122995 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 22:23:00.123009 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 22:23:00.123023 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 22:23:00.123200 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 22:23:00.123222 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 22:23:00.123425 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 22:23:00.123584 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T22:22:59 UTC (1739485379) Feb 13 22:23:00.123750 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 22:23:00.123770 kernel: intel_pstate: CPU model not supported Feb 13 22:23:00.123790 kernel: NET: Registered PF_INET6 protocol family Feb 13 22:23:00.123803 kernel: Segment Routing with IPv6 Feb 13 22:23:00.123816 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 
22:23:00.123830 kernel: NET: Registered PF_PACKET protocol family Feb 13 22:23:00.123842 kernel: Key type dns_resolver registered Feb 13 22:23:00.123860 kernel: IPI shorthand broadcast: enabled Feb 13 22:23:00.123873 kernel: sched_clock: Marking stable (1195003929, 242992457)->(1685510045, -247513659) Feb 13 22:23:00.123886 kernel: registered taskstats version 1 Feb 13 22:23:00.123899 kernel: Loading compiled-in X.509 certificates Feb 13 22:23:00.123912 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 0cc219a306b9e46e583adebba1820decbdc4307b' Feb 13 22:23:00.123925 kernel: Key type .fscrypt registered Feb 13 22:23:00.123951 kernel: Key type fscrypt-provisioning registered Feb 13 22:23:00.123964 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 22:23:00.123977 kernel: ima: Allocated hash algorithm: sha1 Feb 13 22:23:00.123996 kernel: ima: No architecture policies found Feb 13 22:23:00.124009 kernel: clk: Disabling unused clocks Feb 13 22:23:00.124022 kernel: Freeing unused kernel image (initmem) memory: 42976K Feb 13 22:23:00.124036 kernel: Write protecting the kernel read-only data: 36864k Feb 13 22:23:00.124049 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Feb 13 22:23:00.124063 kernel: Run /init as init process Feb 13 22:23:00.124076 kernel: with arguments: Feb 13 22:23:00.124089 kernel: /init Feb 13 22:23:00.124102 kernel: with environment: Feb 13 22:23:00.124120 kernel: HOME=/ Feb 13 22:23:00.124133 kernel: TERM=linux Feb 13 22:23:00.124146 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 22:23:00.124171 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 22:23:00.124190 systemd[1]: Detected virtualization kvm. Feb 13 22:23:00.124204 systemd[1]: Detected architecture x86-64. Feb 13 22:23:00.124218 systemd[1]: Running in initrd. Feb 13 22:23:00.124241 systemd[1]: No hostname configured, using default hostname. Feb 13 22:23:00.124263 systemd[1]: Hostname set to . Feb 13 22:23:00.124278 systemd[1]: Initializing machine ID from VM UUID. Feb 13 22:23:00.124292 systemd[1]: Queued start job for default target initrd.target. Feb 13 22:23:00.124306 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 22:23:00.124321 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 22:23:00.124335 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 22:23:00.124350 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 22:23:00.124414 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 22:23:00.124430 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 22:23:00.124446 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 22:23:00.124461 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 22:23:00.124475 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Feb 13 22:23:00.124489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:23:00.124503 systemd[1]: Reached target paths.target - Path Units. Feb 13 22:23:00.124523 systemd[1]: Reached target slices.target - Slice Units. Feb 13 22:23:00.124537 systemd[1]: Reached target swap.target - Swaps. Feb 13 22:23:00.124552 systemd[1]: Reached target timers.target - Timer Units. Feb 13 22:23:00.124566 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 22:23:00.124580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 22:23:00.124594 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 22:23:00.124609 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 22:23:00.124623 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:23:00.124637 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 22:23:00.124657 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 22:23:00.124671 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 22:23:00.124685 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 22:23:00.124700 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 22:23:00.124714 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 22:23:00.124728 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 22:23:00.124743 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 22:23:00.124757 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 22:23:00.124771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:23:00.124833 systemd-journald[202]: Collecting audit messages is disabled. Feb 13 22:23:00.124867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 22:23:00.124882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:23:00.124902 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 22:23:00.124922 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 22:23:00.124937 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 22:23:00.124951 kernel: Bridge firewalling registered Feb 13 22:23:00.124965 systemd-journald[202]: Journal started Feb 13 22:23:00.124997 systemd-journald[202]: Runtime Journal (/run/log/journal/9d03c335afc24077bc459cbf73fb3170) is 4.7M, max 38.0M, 33.2M free. Feb 13 22:23:00.081704 systemd-modules-load[203]: Inserted module 'overlay' Feb 13 22:23:00.116608 systemd-modules-load[203]: Inserted module 'br_netfilter' Feb 13 22:23:00.179379 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 22:23:00.180097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 22:23:00.181142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:23:00.191634 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 22:23:00.199541 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:23:00.204077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Feb 13 22:23:00.205266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 22:23:00.217611 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 22:23:00.225876 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:23:00.230763 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:23:00.240589 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 22:23:00.243143 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:23:00.245407 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:23:00.257571 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 22:23:00.279814 dracut-cmdline[238]: dracut-dracut-053 Feb 13 22:23:00.286431 dracut-cmdline[238]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=ed9b5d8ea73d2e47b8decea8124089e04dd398ef43013c1b1a5809314044b1c3 Feb 13 22:23:00.295264 systemd-resolved[235]: Positive Trust Anchors: Feb 13 22:23:00.295301 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 22:23:00.295347 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 22:23:00.300958 systemd-resolved[235]: Defaulting to hostname 'linux'. Feb 13 22:23:00.303150 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 22:23:00.304819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:23:00.399409 kernel: SCSI subsystem initialized Feb 13 22:23:00.411404 kernel: Loading iSCSI transport class v2.0-870. Feb 13 22:23:00.424404 kernel: iscsi: registered transport (tcp) Feb 13 22:23:00.452928 kernel: iscsi: registered transport (qla4xxx) Feb 13 22:23:00.453017 kernel: QLogic iSCSI HBA Driver Feb 13 22:23:00.511302 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 22:23:00.526577 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 22:23:00.557555 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 22:23:00.557733 kernel: device-mapper: uevent: version 1.0.3 Feb 13 22:23:00.559932 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 22:23:00.614426 kernel: raid6: sse2x4 gen() 7445 MB/s Feb 13 22:23:00.631426 kernel: raid6: sse2x2 gen() 5270 MB/s Feb 13 22:23:00.650138 kernel: raid6: sse2x1 gen() 5389 MB/s Feb 13 22:23:00.650285 kernel: raid6: using algorithm sse2x4 gen() 7445 MB/s Feb 13 22:23:00.669185 kernel: raid6: .... xor() 4840 MB/s, rmw enabled Feb 13 22:23:00.669326 kernel: raid6: using ssse3x2 recovery algorithm Feb 13 22:23:00.695415 kernel: xor: automatically using best checksumming function avx Feb 13 22:23:00.896417 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 22:23:00.911866 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 22:23:00.919745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:23:00.948801 systemd-udevd[420]: Using default interface naming scheme 'v255'. Feb 13 22:23:00.956326 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:23:00.966554 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 22:23:00.987911 dracut-pre-trigger[428]: rd.md=0: removing MD RAID activation Feb 13 22:23:01.032490 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 22:23:01.040601 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 22:23:01.152394 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:23:01.163637 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 22:23:01.196394 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 22:23:01.199218 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 22:23:01.201501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:23:01.202273 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 22:23:01.210563 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 22:23:01.230435 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 22:23:01.275397 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Feb 13 22:23:01.346820 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 22:23:01.346849 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Feb 13 22:23:01.347052 kernel: AVX version of gcm_enc/dec engaged. Feb 13 22:23:01.347075 kernel: AES CTR mode by8 optimization enabled Feb 13 22:23:01.347103 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 22:23:01.347123 kernel: GPT:17805311 != 125829119 Feb 13 22:23:01.347140 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 22:23:01.347157 kernel: GPT:17805311 != 125829119 Feb 13 22:23:01.347174 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 22:23:01.347192 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:23:01.328384 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 22:23:01.328556 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:23:01.335219 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 22:23:01.336089 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 22:23:01.336406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:23:01.339631 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:23:01.352639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:23:01.380826 kernel: libata version 3.00 loaded. Feb 13 22:23:01.388569 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 22:23:01.407072 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 22:23:01.407099 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 22:23:01.407339 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 22:23:01.407563 kernel: scsi host0: ahci Feb 13 22:23:01.407767 kernel: scsi host1: ahci Feb 13 22:23:01.407985 kernel: scsi host2: ahci Feb 13 22:23:01.408190 kernel: scsi host3: ahci Feb 13 22:23:01.408415 kernel: scsi host4: ahci Feb 13 22:23:01.408609 kernel: scsi host5: ahci Feb 13 22:23:01.408796 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38 Feb 13 22:23:01.408817 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38 Feb 13 22:23:01.408835 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38 Feb 13 22:23:01.408852 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38 Feb 13 22:23:01.408877 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38 Feb 13 22:23:01.408895 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38 Feb 13 22:23:01.414651 kernel: BTRFS: device fsid e9c87d9f-3864-4b45-9be4-80a5397f1fc6 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (474) Feb 13 22:23:01.433386 kernel: ACPI: bus type USB registered Feb 13 22:23:01.433459 kernel: usbcore: registered new interface driver usbfs Feb 13 22:23:01.433480 kernel: usbcore: registered new interface driver hub Feb 13 22:23:01.433498 kernel: usbcore: registered new device driver usb Feb 13 22:23:01.437382 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (467) Feb 13 22:23:01.443204 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 22:23:01.534680 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 22:23:01.535561 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 22:23:01.537672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:23:01.551214 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 22:23:01.563238 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 22:23:01.568554 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 22:23:01.571585 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 22:23:01.577715 disk-uuid[560]: Primary Header is updated. Feb 13 22:23:01.577715 disk-uuid[560]: Secondary Entries is updated. Feb 13 22:23:01.577715 disk-uuid[560]: Secondary Header is updated. 
Feb 13 22:23:01.583201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:23:01.607987 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:23:01.717391 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.717480 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.720265 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.722387 kernel: ata3: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.730571 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.730630 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 22:23:01.767725 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:23:01.785627 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Feb 13 22:23:01.785852 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Feb 13 22:23:01.786103 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Feb 13 22:23:01.786322 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Feb 13 22:23:01.786598 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Feb 13 22:23:01.786823 kernel: hub 1-0:1.0: USB hub found Feb 13 22:23:01.787054 kernel: hub 1-0:1.0: 4 ports detected Feb 13 22:23:01.788324 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Feb 13 22:23:01.788645 kernel: hub 2-0:1.0: USB hub found Feb 13 22:23:01.788867 kernel: hub 2-0:1.0: 4 ports detected Feb 13 22:23:02.017474 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Feb 13 22:23:02.159418 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 13 22:23:02.166603 kernel: usbcore: registered new interface driver usbhid Feb 13 22:23:02.166643 kernel: usbhid: USB HID core driver Feb 13 22:23:02.174174 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Feb 13 22:23:02.174230 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Feb 13 22:23:02.594769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 22:23:02.596221 disk-uuid[561]: The operation has completed successfully. Feb 13 22:23:02.657003 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 22:23:02.657207 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 22:23:02.669616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 22:23:02.676213 sh[587]: Success Feb 13 22:23:02.694404 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Feb 13 22:23:02.760046 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 22:23:02.774483 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 22:23:02.776352 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 22:23:02.805068 kernel: BTRFS info (device dm-0): first mount of filesystem e9c87d9f-3864-4b45-9be4-80a5397f1fc6 Feb 13 22:23:02.805150 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:23:02.808909 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 22:23:02.808946 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 22:23:02.812112 kernel: BTRFS info (device dm-0): using free space tree Feb 13 22:23:02.821782 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 22:23:02.823529 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 22:23:02.832616 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 22:23:02.836130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 22:23:02.848814 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:23:02.848872 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:23:02.850642 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:23:02.857396 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:23:02.874000 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:23:02.873635 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 22:23:02.882424 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 22:23:02.891591 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 22:23:02.987849 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 22:23:03.001802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 22:23:03.042939 ignition[680]: Ignition 2.20.0 Feb 13 22:23:03.043052 systemd-networkd[772]: lo: Link UP Feb 13 22:23:03.042967 ignition[680]: Stage: fetch-offline Feb 13 22:23:03.043060 systemd-networkd[772]: lo: Gained carrier Feb 13 22:23:03.043075 ignition[680]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:03.045301 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 22:23:03.043096 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:03.047625 systemd-networkd[772]: Enumeration completed Feb 13 22:23:03.043317 ignition[680]: parsed url from cmdline: "" Feb 13 22:23:03.048434 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:23:03.043325 ignition[680]: no config URL provided Feb 13 22:23:03.048440 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 22:23:03.043334 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 22:23:03.048852 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Feb 13 22:23:03.043351 ignition[680]: no config at "/usr/lib/ignition/user.ign" Feb 13 22:23:03.049922 systemd-networkd[772]: eth0: Link UP Feb 13 22:23:03.043382 ignition[680]: failed to fetch config: resource requires networking Feb 13 22:23:03.049932 systemd-networkd[772]: eth0: Gained carrier Feb 13 22:23:03.043710 ignition[680]: Ignition finished successfully Feb 13 22:23:03.049947 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:23:03.051204 systemd[1]: Reached target network.target - Network. Feb 13 22:23:03.060564 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Feb 13 22:23:03.079704 systemd-networkd[772]: eth0: DHCPv4 address 10.230.12.194/30, gateway 10.230.12.193 acquired from 10.230.12.193 Feb 13 22:23:03.081407 ignition[779]: Ignition 2.20.0 Feb 13 22:23:03.081420 ignition[779]: Stage: fetch Feb 13 22:23:03.081662 ignition[779]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:03.081682 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:03.081830 ignition[779]: parsed url from cmdline: "" Feb 13 22:23:03.081838 ignition[779]: no config URL provided Feb 13 22:23:03.081847 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 22:23:03.081864 ignition[779]: no config at "/usr/lib/ignition/user.ign" Feb 13 22:23:03.082056 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Feb 13 22:23:03.082301 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Feb 13 22:23:03.082376 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Feb 13 22:23:03.082649 ignition[779]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Feb 13 22:23:03.282943 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Feb 13 22:23:03.302388 ignition[779]: GET result: OK Feb 13 22:23:03.302904 ignition[779]: parsing config with SHA512: 41d348fa3e283028ab29f0064590ef0178b6f58cde8c6cd7ba9b9d3af6bfa836eb78f4f9ab59ad381b0d5d7a7f933bfbbed592a9eb68aba56752d1719eda55c4 Feb 13 22:23:03.309435 unknown[779]: fetched base config from "system" Feb 13 22:23:03.309453 unknown[779]: fetched base config from "system" Feb 13 22:23:03.309888 ignition[779]: fetch: fetch complete Feb 13 22:23:03.309462 unknown[779]: fetched user config from "openstack" Feb 13 22:23:03.309896 ignition[779]: fetch: fetch passed Feb 13 22:23:03.312350 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Feb 13 22:23:03.309959 ignition[779]: Ignition finished successfully Feb 13 22:23:03.325571 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 22:23:03.357498 ignition[787]: Ignition 2.20.0 Feb 13 22:23:03.357523 ignition[787]: Stage: kargs Feb 13 22:23:03.357819 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:03.360471 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 22:23:03.357839 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:03.359029 ignition[787]: kargs: kargs passed Feb 13 22:23:03.359105 ignition[787]: Ignition finished successfully Feb 13 22:23:03.368617 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
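The fetch stage above shows how Ignition behaves on OpenStack when no config drive is present: it polls the metadata service for user_data, and attempt #1 fails with "network is unreachable" simply because eth0 has not finished DHCP yet, while attempt #2 succeeds moments later. A minimal sketch of that retry pattern follows; the URL is taken from the log, but the timeout and back-off values are illustrative assumptions, not Ignition's actual implementation:

    import time
    import urllib.error
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # endpoint seen in the ignition[779] lines

    def fetch_user_data(retries: int = 5, delay: float = 0.2) -> bytes:
        # keep retrying until the network is up, mirroring attempt #1 vs attempt #2 above
        for attempt in range(1, retries + 1):
            try:
                with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
                    return resp.read()                     # "GET result: OK"
            except (urllib.error.URLError, OSError) as err:
                print(f"GET attempt #{attempt} failed: {err}")
                time.sleep(delay)
                delay *= 2                                 # assumed back-off, purely illustrative
        raise RuntimeError("metadata service unreachable")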
Feb 13 22:23:03.387345 ignition[793]: Ignition 2.20.0 Feb 13 22:23:03.387385 ignition[793]: Stage: disks Feb 13 22:23:03.387650 ignition[793]: no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:03.389966 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 22:23:03.387669 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:03.392681 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 22:23:03.388785 ignition[793]: disks: disks passed Feb 13 22:23:03.394206 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 22:23:03.388867 ignition[793]: Ignition finished successfully Feb 13 22:23:03.395881 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 22:23:03.397512 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 22:23:03.398811 systemd[1]: Reached target basic.target - Basic System. Feb 13 22:23:03.406558 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 22:23:03.428795 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Feb 13 22:23:03.431888 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 22:23:03.437474 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 22:23:03.554389 kernel: EXT4-fs (vda9): mounted filesystem c5993b0e-9201-4b44-aa01-79dc9d6c9fc9 r/w with ordered data mode. Quota mode: none. Feb 13 22:23:03.556193 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 22:23:03.557743 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 22:23:03.570505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 22:23:03.573509 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 22:23:03.574638 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 22:23:03.576645 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Feb 13 22:23:03.578516 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 22:23:03.578555 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 22:23:03.591469 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) Feb 13 22:23:03.593262 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 22:23:03.601626 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:23:03.601655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:23:03.601674 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:23:03.601710 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:23:03.617639 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 22:23:03.621420 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 22:23:03.689104 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 22:23:03.708953 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Feb 13 22:23:03.718173 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 22:23:03.723839 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 22:23:03.844882 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 22:23:03.849510 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 22:23:03.851532 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 22:23:03.867684 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 22:23:03.871485 kernel: BTRFS info (device vda6): last unmount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:23:03.899780 ignition[926]: INFO : Ignition 2.20.0 Feb 13 22:23:03.899780 ignition[926]: INFO : Stage: mount Feb 13 22:23:03.901681 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:03.901681 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:03.901681 ignition[926]: INFO : mount: mount passed Feb 13 22:23:03.901681 ignition[926]: INFO : Ignition finished successfully Feb 13 22:23:03.902886 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 22:23:03.913164 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 22:23:04.317740 systemd-networkd[772]: eth0: Gained IPv6LL Feb 13 22:23:05.825016 systemd-networkd[772]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8330:24:19ff:fee6:cc2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8330:24:19ff:fee6:cc2/64 assigned by NDisc. Feb 13 22:23:05.825034 systemd-networkd[772]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 22:23:10.785955 coreos-metadata[811]: Feb 13 22:23:10.785 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:23:10.809096 coreos-metadata[811]: Feb 13 22:23:10.809 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 22:23:10.825441 coreos-metadata[811]: Feb 13 22:23:10.825 INFO Fetch successful Feb 13 22:23:10.826984 coreos-metadata[811]: Feb 13 22:23:10.826 INFO wrote hostname srv-x84ug.gb1.brightbox.com to /sysroot/etc/hostname Feb 13 22:23:10.827401 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Feb 13 22:23:10.827604 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Feb 13 22:23:10.837483 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 22:23:10.873845 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 22:23:10.886028 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943) Feb 13 22:23:10.886076 kernel: BTRFS info (device vda6): first mount of filesystem 84d576e4-038f-4c76-aa8e-6cfd81e812ea Feb 13 22:23:10.889820 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 22:23:10.889860 kernel: BTRFS info (device vda6): using free space tree Feb 13 22:23:10.896423 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 22:23:10.899140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
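flatcar-openstack-hostname.service above first looks for an OpenStack config drive and, failing to locate one, falls back to the metadata API to learn the hostname, which it then writes into the mounted root. The real agent is coreos-metadata; the following is only a rough sketch of that flow, assuming the by-label paths and the URL that appear in the log:

    import os
    import urllib.request

    CONFIG_DRIVE_LABELS = ("config-2", "CONFIG-2")    # the labels the log shows being waited for
    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"

    def config_drive_device():
        # return the config-drive block device if one is attached, else None
        for label in CONFIG_DRIVE_LABELS:
            path = f"/dev/disk/by-label/{label}"
            if os.path.exists(path):
                return path
        return None

    def write_hostname(sysroot="/sysroot"):
        if config_drive_device() is None:             # "failed to locate config-drive"
            with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
                hostname = resp.read().decode().strip()
            with open(os.path.join(sysroot, "etc/hostname"), "w") as f:
                f.write(hostname + "\n")              # e.g. srv-x84ug.gb1.brightbox.com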
Feb 13 22:23:10.928482 ignition[961]: INFO : Ignition 2.20.0 Feb 13 22:23:10.928482 ignition[961]: INFO : Stage: files Feb 13 22:23:10.930325 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:10.930325 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:10.930325 ignition[961]: DEBUG : files: compiled without relabeling support, skipping Feb 13 22:23:10.933375 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 22:23:10.933375 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 22:23:10.935620 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 22:23:10.935620 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 22:23:10.935620 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 22:23:10.934866 unknown[961]: wrote ssh authorized keys file for user: core Feb 13 22:23:10.939973 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 22:23:10.939973 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 22:23:11.122662 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 22:23:11.851642 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 22:23:11.851642 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 22:23:11.860860 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Feb 13 22:23:12.404721 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 22:23:13.902388 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Feb 13 22:23:13.905012 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 22:23:13.905012 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 22:23:13.905012 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 22:23:13.905012 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 22:23:13.905012 ignition[961]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Feb 13 22:23:13.914667 ignition[961]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 22:23:13.914667 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 22:23:13.914667 ignition[961]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 22:23:13.914667 ignition[961]: INFO : files: files passed Feb 13 22:23:13.914667 ignition[961]: INFO : Ignition finished successfully Feb 13 22:23:13.909262 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 22:23:13.921703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 22:23:13.925570 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 22:23:13.935725 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 22:23:13.935921 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 22:23:13.950413 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:23:13.952304 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:23:13.952304 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 22:23:13.952817 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 22:23:13.954897 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 22:23:13.959576 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 22:23:14.001074 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 22:23:14.002116 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 22:23:14.004496 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Feb 13 22:23:14.006137 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 22:23:14.007083 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 22:23:14.012628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 22:23:14.033794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 22:23:14.042620 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 22:23:14.057585 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:23:14.058611 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:23:14.060472 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 22:23:14.062056 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 22:23:14.062246 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 22:23:14.064242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 22:23:14.065254 systemd[1]: Stopped target basic.target - Basic System. Feb 13 22:23:14.066933 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 22:23:14.068279 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 22:23:14.069743 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 22:23:14.071378 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 22:23:14.072926 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 22:23:14.074664 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 22:23:14.076258 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 22:23:14.077890 systemd[1]: Stopped target swap.target - Swaps. Feb 13 22:23:14.079417 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 22:23:14.079624 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 22:23:14.081477 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:23:14.082493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 22:23:14.083937 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 22:23:14.084115 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 22:23:14.085597 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 22:23:14.085783 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 22:23:14.087912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 22:23:14.088087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 22:23:14.089845 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 22:23:14.090028 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 22:23:14.102208 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 22:23:14.102968 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 22:23:14.103268 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:23:14.106669 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 22:23:14.108554 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Feb 13 22:23:14.108822 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:23:14.113172 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 22:23:14.113459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 22:23:14.124745 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 22:23:14.124907 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 22:23:14.137887 ignition[1013]: INFO : Ignition 2.20.0 Feb 13 22:23:14.137887 ignition[1013]: INFO : Stage: umount Feb 13 22:23:14.142544 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 22:23:14.142544 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Feb 13 22:23:14.142544 ignition[1013]: INFO : umount: umount passed Feb 13 22:23:14.142544 ignition[1013]: INFO : Ignition finished successfully Feb 13 22:23:14.140943 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 22:23:14.141096 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 22:23:14.142634 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 22:23:14.142717 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 22:23:14.144572 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 22:23:14.144686 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 22:23:14.147553 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 13 22:23:14.147633 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Feb 13 22:23:14.148740 systemd[1]: Stopped target network.target - Network. Feb 13 22:23:14.149374 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 22:23:14.149474 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 22:23:14.150336 systemd[1]: Stopped target paths.target - Path Units. Feb 13 22:23:14.150989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 22:23:14.155725 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 22:23:14.156884 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 22:23:14.158448 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 22:23:14.160095 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 22:23:14.160167 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 22:23:14.161567 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 22:23:14.161646 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 22:23:14.163216 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 22:23:14.163295 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 22:23:14.164936 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 22:23:14.165027 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 22:23:14.166647 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 22:23:14.168608 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 22:23:14.171932 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 22:23:14.172781 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 22:23:14.173001 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Feb 13 22:23:14.173545 systemd-networkd[772]: eth0: DHCPv6 lease lost Feb 13 22:23:14.176170 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 22:23:14.176341 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 22:23:14.179137 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 22:23:14.179221 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:23:14.182645 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 22:23:14.182758 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 22:23:14.191507 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 22:23:14.192517 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 22:23:14.192603 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 22:23:14.195372 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:23:14.199204 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 22:23:14.199516 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 22:23:14.208832 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 22:23:14.209421 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:23:14.213867 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 22:23:14.214618 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 22:23:14.216842 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 22:23:14.216947 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 22:23:14.218755 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 22:23:14.218817 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 22:23:14.220278 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 22:23:14.220347 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 22:23:14.222612 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 22:23:14.222687 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 22:23:14.224020 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 22:23:14.224088 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 22:23:14.239623 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 22:23:14.242769 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 13 22:23:14.242884 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:23:14.244540 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 22:23:14.244624 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 22:23:14.245983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 22:23:14.246051 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:23:14.250646 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 22:23:14.250744 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:23:14.251632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Feb 13 22:23:14.251699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:23:14.254103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 22:23:14.254244 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 22:23:14.255826 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 22:23:14.262598 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 22:23:14.275005 systemd[1]: Switching root. Feb 13 22:23:14.309031 systemd-journald[202]: Journal stopped Feb 13 22:23:15.744015 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Feb 13 22:23:15.744105 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 22:23:15.744151 kernel: SELinux: policy capability open_perms=1 Feb 13 22:23:15.744174 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 22:23:15.744201 kernel: SELinux: policy capability always_check_network=0 Feb 13 22:23:15.744221 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 22:23:15.744246 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 22:23:15.744273 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 22:23:15.744305 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 22:23:15.744340 kernel: audit: type=1403 audit(1739485394.532:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 22:23:15.744406 systemd[1]: Successfully loaded SELinux policy in 48.843ms. Feb 13 22:23:15.744440 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.097ms. Feb 13 22:23:15.744463 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 22:23:15.744484 systemd[1]: Detected virtualization kvm. Feb 13 22:23:15.744505 systemd[1]: Detected architecture x86-64. Feb 13 22:23:15.744524 systemd[1]: Detected first boot. Feb 13 22:23:15.744544 systemd[1]: Hostname set to <srv-x84ug.gb1.brightbox.com>. Feb 13 22:23:15.744564 systemd[1]: Initializing machine ID from VM UUID. Feb 13 22:23:15.744584 zram_generator::config[1055]: No configuration found. Feb 13 22:23:15.744630 systemd[1]: Populated /etc with preset unit settings. Feb 13 22:23:15.744651 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 22:23:15.744690 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 22:23:15.744713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 22:23:15.744746 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 22:23:15.744766 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 22:23:15.744785 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 22:23:15.744835 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 22:23:15.744875 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 22:23:15.744898 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 22:23:15.744926 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 22:23:15.744947 systemd[1]: Created slice user.slice - User and Session Slice. 
Feb 13 22:23:15.744968 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 22:23:15.744989 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 22:23:15.745016 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 22:23:15.745038 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 22:23:15.745058 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 22:23:15.745092 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 22:23:15.745116 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 22:23:15.745137 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 22:23:15.745158 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 22:23:15.745179 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 22:23:15.745201 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 22:23:15.745234 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 22:23:15.745256 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 22:23:15.745277 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 22:23:15.745299 systemd[1]: Reached target slices.target - Slice Units. Feb 13 22:23:15.745320 systemd[1]: Reached target swap.target - Swaps. Feb 13 22:23:15.745340 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 22:23:15.749473 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 22:23:15.749520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 22:23:15.749543 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 22:23:15.749584 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 22:23:15.749625 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 22:23:15.749647 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Feb 13 22:23:15.749666 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 22:23:15.749686 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 22:23:15.749705 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:23:15.749736 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 22:23:15.749756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 22:23:15.749788 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 22:23:15.749818 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 22:23:15.749856 systemd[1]: Reached target machines.target - Containers. Feb 13 22:23:15.749878 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 22:23:15.749899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 22:23:15.749921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 22:23:15.749955 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 22:23:15.749978 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 22:23:15.749999 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 22:23:15.750021 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 22:23:15.750042 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 22:23:15.750064 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 22:23:15.750093 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 22:23:15.750115 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 22:23:15.750162 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 22:23:15.750185 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 22:23:15.750206 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 22:23:15.750229 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 22:23:15.750249 kernel: fuse: init (API version 7.39) Feb 13 22:23:15.750269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 22:23:15.750293 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 22:23:15.750314 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 22:23:15.750335 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 22:23:15.752712 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 22:23:15.752767 systemd[1]: Stopped verity-setup.service. Feb 13 22:23:15.752791 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:23:15.752824 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 22:23:15.752848 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Feb 13 22:23:15.752869 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 22:23:15.752890 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 22:23:15.752927 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 22:23:15.752950 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 22:23:15.752971 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 22:23:15.752992 kernel: ACPI: bus type drm_connector registered Feb 13 22:23:15.753013 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 22:23:15.753034 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 22:23:15.753055 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 22:23:15.753097 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 22:23:15.753119 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 22:23:15.753141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 22:23:15.753183 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Feb 13 22:23:15.753207 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 22:23:15.753240 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 22:23:15.753262 kernel: loop: module loaded Feb 13 22:23:15.753284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 22:23:15.753305 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 22:23:15.753327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 22:23:15.755692 systemd-journald[1144]: Collecting audit messages is disabled. Feb 13 22:23:15.755736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 22:23:15.755779 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 22:23:15.755803 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 22:23:15.755840 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 22:23:15.755862 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 22:23:15.755884 systemd-journald[1144]: Journal started Feb 13 22:23:15.755917 systemd-journald[1144]: Runtime Journal (/run/log/journal/9d03c335afc24077bc459cbf73fb3170) is 4.7M, max 38.0M, 33.2M free. Feb 13 22:23:15.300697 systemd[1]: Queued start job for default target multi-user.target. Feb 13 22:23:15.322908 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 22:23:15.323604 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 22:23:15.765030 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 22:23:15.772401 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 22:23:15.778397 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 22:23:15.778472 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 22:23:15.783983 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 22:23:15.799621 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Feb 13 22:23:15.811456 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 22:23:15.816382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:23:15.830450 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 22:23:15.830510 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 22:23:15.848822 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 22:23:15.848926 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 22:23:15.858404 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 22:23:15.877602 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 22:23:15.896802 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 22:23:15.896878 systemd[1]: Started systemd-journald.service - Journal Service. 
Feb 13 22:23:15.906052 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 22:23:15.908675 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 22:23:15.910909 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 22:23:15.921360 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 22:23:15.944408 kernel: loop0: detected capacity change from 0 to 140992 Feb 13 22:23:15.952080 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 22:23:15.965739 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 22:23:15.982593 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 22:23:15.997621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 22:23:16.035817 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 22:23:16.044568 systemd-journald[1144]: Time spent on flushing to /var/log/journal/9d03c335afc24077bc459cbf73fb3170 is 77.147ms for 1145 entries. Feb 13 22:23:16.044568 systemd-journald[1144]: System Journal (/var/log/journal/9d03c335afc24077bc459cbf73fb3170) is 8.0M, max 584.8M, 576.8M free. Feb 13 22:23:16.140996 kernel: loop1: detected capacity change from 0 to 8 Feb 13 22:23:16.141040 systemd-journald[1144]: Received client request to flush runtime journal. Feb 13 22:23:16.141083 kernel: loop2: detected capacity change from 0 to 205544 Feb 13 22:23:16.056609 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 22:23:16.058608 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 22:23:16.073529 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 22:23:16.083640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 22:23:16.085490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 22:23:16.096925 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 22:23:16.144675 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Feb 13 22:23:16.154344 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 22:23:16.179610 udevadm[1206]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 22:23:16.204095 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Feb 13 22:23:16.204123 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Feb 13 22:23:16.211516 kernel: loop4: detected capacity change from 0 to 140992 Feb 13 22:23:16.218035 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 22:23:16.253513 kernel: loop5: detected capacity change from 0 to 8 Feb 13 22:23:16.263024 kernel: loop6: detected capacity change from 0 to 205544 Feb 13 22:23:16.286415 kernel: loop7: detected capacity change from 0 to 138184 Feb 13 22:23:16.323723 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Feb 13 22:23:16.324598 (sd-merge)[1212]: Merged extensions into '/usr'. Feb 13 22:23:16.334584 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 22:23:16.334622 systemd[1]: Reloading... 
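The (sd-merge) lines show systemd-sysext overlaying four extension images onto /usr and /opt; the 'kubernetes' extension is the /etc/extensions/kubernetes.raw link written during the files stage. As a hand-inspection aid only (the merge itself is done by systemd-sysext), a sketch that lists raw sysext images in the standard search directories; directory-style extensions are ignored here for brevity:

    from pathlib import Path

    SYSEXT_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def list_sysext_images():
        images = []
        for d in SYSEXT_DIRS:
            base = Path(d)
            if base.is_dir():
                images.extend(sorted(base.glob("*.raw")))   # only .raw images, not directory extensions
        return images

    for img in list_sysext_images():
        print(img, "->", img.resolve())   # e.g. kubernetes.raw -> .../kubernetes-v1.31.0-x86-64.raw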
Feb 13 22:23:16.505508 zram_generator::config[1239]: No configuration found. Feb 13 22:23:16.587211 ldconfig[1166]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 22:23:16.764462 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:23:16.836123 systemd[1]: Reloading finished in 500 ms. Feb 13 22:23:16.862454 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 22:23:16.864203 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 22:23:16.869715 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 22:23:16.885657 systemd[1]: Starting ensure-sysext.service... Feb 13 22:23:16.889572 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 22:23:16.894267 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 22:23:16.901605 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)... Feb 13 22:23:16.901640 systemd[1]: Reloading... Feb 13 22:23:16.942271 systemd-udevd[1298]: Using default interface naming scheme 'v255'. Feb 13 22:23:16.948177 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 22:23:16.948754 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 22:23:16.952925 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 22:23:16.953347 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Feb 13 22:23:16.953477 systemd-tmpfiles[1297]: ACLs are not supported, ignoring. Feb 13 22:23:16.966129 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 22:23:16.966148 systemd-tmpfiles[1297]: Skipping /boot Feb 13 22:23:17.014400 zram_generator::config[1323]: No configuration found. Feb 13 22:23:17.016574 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 22:23:17.016593 systemd-tmpfiles[1297]: Skipping /boot Feb 13 22:23:17.166506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1330) Feb 13 22:23:17.301399 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 22:23:17.321397 kernel: ACPI: button: Power Button [PWRF] Feb 13 22:23:17.323397 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 22:23:17.336524 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:23:17.410400 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 22:23:17.426983 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 22:23:17.427251 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 22:23:17.427820 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Feb 13 22:23:17.451287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Feb 13 22:23:17.453162 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 22:23:17.454185 systemd[1]: Reloading finished in 551 ms. Feb 13 22:23:17.482677 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 22:23:17.492968 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 22:23:17.580452 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:23:17.621981 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 22:23:17.625692 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 22:23:17.626702 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 22:23:17.630463 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 22:23:17.632748 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 22:23:17.643469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 22:23:17.648728 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 22:23:17.650632 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 22:23:17.653514 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 22:23:17.663713 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 22:23:17.670766 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 22:23:17.684279 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 22:23:17.695766 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 22:23:17.701681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 22:23:17.703554 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 22:23:17.709425 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 22:23:17.709701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 22:23:17.711152 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 22:23:17.712464 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 22:23:17.713852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 22:23:17.715460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 22:23:17.722347 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 22:23:17.739874 systemd[1]: Finished ensure-sysext.service. Feb 13 22:23:17.747809 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 22:23:17.757584 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 22:23:17.759281 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 22:23:17.760589 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Feb 13 22:23:17.762934 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 22:23:17.768261 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 22:23:17.788596 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 22:23:17.791872 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 22:23:17.803557 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 22:23:17.829985 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 22:23:17.875741 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 22:23:17.902655 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 22:23:17.916522 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 22:23:17.918046 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 22:23:17.920552 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 22:23:17.928348 augenrules[1458]: No rules Feb 13 22:23:17.933652 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 22:23:17.933999 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 22:23:17.939853 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 22:23:17.977546 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 22:23:17.978763 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 22:23:17.987987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 22:23:18.014482 lvm[1470]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 22:23:18.067268 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 22:23:18.086398 systemd-networkd[1419]: lo: Link UP Feb 13 22:23:18.086411 systemd-networkd[1419]: lo: Gained carrier Feb 13 22:23:18.092000 systemd-networkd[1419]: Enumeration completed Feb 13 22:23:18.095176 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:23:18.095312 systemd-networkd[1419]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 22:23:18.099565 systemd-networkd[1419]: eth0: Link UP Feb 13 22:23:18.099577 systemd-networkd[1419]: eth0: Gained carrier Feb 13 22:23:18.099603 systemd-networkd[1419]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 22:23:18.118151 systemd-resolved[1420]: Positive Trust Anchors: Feb 13 22:23:18.118790 systemd-resolved[1420]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 22:23:18.118933 systemd-resolved[1420]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 22:23:18.119462 systemd-networkd[1419]: eth0: DHCPv4 address 10.230.12.194/30, gateway 10.230.12.193 acquired from 10.230.12.193 Feb 13 22:23:18.127530 systemd-resolved[1420]: Using system hostname 'srv-x84ug.gb1.brightbox.com'. Feb 13 22:23:18.145279 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 22:23:18.146398 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 22:23:18.147441 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 22:23:18.148708 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 22:23:18.150880 systemd[1]: Reached target network.target - Network. Feb 13 22:23:18.151677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 22:23:18.152811 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 22:23:18.153798 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 22:23:18.154715 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 22:23:18.155612 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 22:23:18.156425 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 22:23:18.156501 systemd[1]: Reached target paths.target - Path Units. Feb 13 22:23:18.157194 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 22:23:18.158223 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 22:23:18.159252 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 22:23:18.160079 systemd[1]: Reached target timers.target - Timer Units. Feb 13 22:23:18.162583 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 22:23:18.165812 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 22:23:18.172296 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 22:23:18.175321 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 22:23:18.176852 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 22:23:18.177763 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 22:23:18.178448 systemd[1]: Reached target basic.target - Basic System. Feb 13 22:23:18.179203 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 22:23:18.179249 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
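systemd-networkd reports that eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network based on a potentially unpredictable interface name. A common way to avoid that warning is a dedicated .network file that matches on the NIC's MAC address; the MAC and file name below are placeholders, not values from this host:

    # Pin the match to the adapter's MAC address rather than its name.
    cat > /etc/systemd/network/10-eth0.network <<'EOF'
    [Match]
    MACAddress=52:54:00:00:00:00

    [Network]
    DHCP=ipv4
    EOF
    networkctl reload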
Feb 13 22:23:18.185525 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 22:23:18.193989 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 22:23:18.197648 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 22:23:18.202646 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 22:23:18.205964 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 22:23:18.208462 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 22:23:18.214253 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 22:23:18.217467 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 22:23:18.226590 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 22:23:18.230579 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 22:23:18.237604 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 22:23:18.239283 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 22:23:18.241675 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 22:23:18.252624 jq[1481]: false Feb 13 22:23:18.247606 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 22:23:18.253425 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 22:23:18.258467 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 22:23:18.258767 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 22:23:18.289985 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 22:23:18.291728 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 22:23:18.296387 dbus-daemon[1480]: [system] SELinux support is enabled Feb 13 22:23:18.297498 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 22:23:18.307103 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 22:23:18.307153 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 22:23:18.308491 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 22:23:18.308536 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 22:23:18.316544 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1419 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 13 22:23:18.332392 jq[1491]: true Feb 13 22:23:18.335666 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Feb 13 22:23:18.367178 systemd[1]: motdgen.service: Deactivated successfully. 
Feb 13 22:23:18.367318 (ntainerd)[1505]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 22:23:18.367510 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 22:23:18.374260 tar[1493]: linux-amd64/helm Feb 13 22:23:18.381265 extend-filesystems[1483]: Found loop4 Feb 13 22:23:18.381265 extend-filesystems[1483]: Found loop5 Feb 13 22:23:18.381265 extend-filesystems[1483]: Found loop6 Feb 13 22:23:18.395163 update_engine[1490]: I20250213 22:23:18.389278 1490 main.cc:92] Flatcar Update Engine starting Feb 13 22:23:18.398463 extend-filesystems[1483]: Found loop7 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda1 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda2 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda3 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found usr Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda4 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda6 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda7 Feb 13 22:23:18.398463 extend-filesystems[1483]: Found vda9 Feb 13 22:23:18.398463 extend-filesystems[1483]: Checking size of /dev/vda9 Feb 13 22:23:18.424758 jq[1510]: true Feb 13 22:23:18.400289 systemd[1]: Started update-engine.service - Update Engine. Feb 13 22:23:18.425180 update_engine[1490]: I20250213 22:23:18.401666 1490 update_check_scheduler.cc:74] Next update check in 10m26s Feb 13 22:23:18.424537 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 22:23:18.493701 extend-filesystems[1483]: Resized partition /dev/vda9 Feb 13 22:23:18.520445 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024) Feb 13 22:23:18.539389 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 22:23:18.579122 systemd-logind[1489]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 22:23:18.592511 systemd-logind[1489]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 22:23:18.595644 systemd-logind[1489]: New seat seat0. Feb 13 22:23:18.612429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1326) Feb 13 22:23:18.605096 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 22:23:18.685166 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 22:23:18.750831 bash[1540]: Updated "/home/core/.ssh/authorized_keys" Feb 13 22:23:18.753091 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 22:23:18.766823 systemd[1]: Starting sshkeys.service... Feb 13 22:23:18.776094 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 22:23:18.776617 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 22:23:18.779512 dbus-daemon[1480]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1508 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 22:23:18.789781 systemd[1]: Starting polkit.service - Authorization Manager... Feb 13 22:23:18.801678 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 22:23:18.812871 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
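extend-filesystems checks /dev/vda9 and hands it to resize2fs, and the kernel line confirms the ext4 filesystem growing online from 1617920 to 15121403 blocks. For reference, a hedged sketch of the equivalent manual steps (growpart comes from cloud-utils and its presence on this image is an assumption; the service already performs this on first boot):

    growpart /dev/vda 9     # extend partition 9 to use the remaining disk space
    resize2fs /dev/vda9     # grow the mounted ext4 filesystem online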
Feb 13 22:23:18.838246 polkitd[1547]: Started polkitd version 121 Feb 13 22:23:18.865423 polkitd[1547]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 22:23:18.870623 polkitd[1547]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 22:23:18.879655 polkitd[1547]: Finished loading, compiling and executing 2 rules Feb 13 22:23:18.884629 dbus-daemon[1480]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 22:23:18.884948 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 22:23:18.890338 polkitd[1547]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 22:23:18.913380 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 22:23:18.940016 systemd-hostnamed[1508]: Hostname set to (static) Feb 13 22:23:18.941185 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 22:23:18.941185 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 22:23:18.941185 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 22:23:18.950460 extend-filesystems[1483]: Resized filesystem in /dev/vda9 Feb 13 22:23:18.944660 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 22:23:18.945202 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 22:23:18.983555 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 22:23:19.004238 containerd[1505]: time="2025-02-13T22:23:19.004054887Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 22:23:19.088209 containerd[1505]: time="2025-02-13T22:23:19.088075695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.094278 containerd[1505]: time="2025-02-13T22:23:19.094207471Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:23:19.094278 containerd[1505]: time="2025-02-13T22:23:19.094274956Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 22:23:19.094421 containerd[1505]: time="2025-02-13T22:23:19.094310491Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 22:23:19.094734 containerd[1505]: time="2025-02-13T22:23:19.094679293Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 22:23:19.094817 containerd[1505]: time="2025-02-13T22:23:19.094742300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.094907 containerd[1505]: time="2025-02-13T22:23:19.094874001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:23:19.094962 containerd[1505]: time="2025-02-13T22:23:19.094907058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.095244 containerd[1505]: time="2025-02-13T22:23:19.095206305Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:23:19.095304 containerd[1505]: time="2025-02-13T22:23:19.095242885Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.095304 containerd[1505]: time="2025-02-13T22:23:19.095267364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:23:19.095304 containerd[1505]: time="2025-02-13T22:23:19.095285154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.096519 containerd[1505]: time="2025-02-13T22:23:19.095475980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.096519 containerd[1505]: time="2025-02-13T22:23:19.095961159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 22:23:19.096519 containerd[1505]: time="2025-02-13T22:23:19.096110133Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 22:23:19.096519 containerd[1505]: time="2025-02-13T22:23:19.096146516Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 22:23:19.096519 containerd[1505]: time="2025-02-13T22:23:19.096301376Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 22:23:19.101105 containerd[1505]: time="2025-02-13T22:23:19.101044890Z" level=info msg="metadata content store policy set" policy=shared Feb 13 22:23:19.106652 containerd[1505]: time="2025-02-13T22:23:19.106614650Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 22:23:19.106770 containerd[1505]: time="2025-02-13T22:23:19.106732172Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 22:23:19.106837 containerd[1505]: time="2025-02-13T22:23:19.106771048Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 22:23:19.106837 containerd[1505]: time="2025-02-13T22:23:19.106808559Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 22:23:19.106918 containerd[1505]: time="2025-02-13T22:23:19.106841745Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 22:23:19.107107 containerd[1505]: time="2025-02-13T22:23:19.107074498Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107542322Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107813672Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107842490Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107877070Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107902563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107931771Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.107960905Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108017785Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108064106Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108098958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108148708Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108173946Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108226058Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.109438 containerd[1505]: time="2025-02-13T22:23:19.108261111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.109949 containerd[1505]: time="2025-02-13T22:23:19.108289728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.109949 containerd[1505]: time="2025-02-13T22:23:19.108312287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.109949 containerd[1505]: time="2025-02-13T22:23:19.108340534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111058 containerd[1505]: time="2025-02-13T22:23:19.111021385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111146 containerd[1505]: time="2025-02-13T22:23:19.111113294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111623 containerd[1505]: time="2025-02-13T22:23:19.111589246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111688 containerd[1505]: time="2025-02-13T22:23:19.111660756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Feb 13 22:23:19.111757 containerd[1505]: time="2025-02-13T22:23:19.111701688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111810 containerd[1505]: time="2025-02-13T22:23:19.111767151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.111810 containerd[1505]: time="2025-02-13T22:23:19.111793598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.112025 containerd[1505]: time="2025-02-13T22:23:19.111993603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.112383 containerd[1505]: time="2025-02-13T22:23:19.112035550Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 22:23:19.112471 containerd[1505]: time="2025-02-13T22:23:19.112442148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.113258 containerd[1505]: time="2025-02-13T22:23:19.112484985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.113314 containerd[1505]: time="2025-02-13T22:23:19.113272290Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 22:23:19.113513 containerd[1505]: time="2025-02-13T22:23:19.113473026Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 22:23:19.113611 containerd[1505]: time="2025-02-13T22:23:19.113515702Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 22:23:19.113611 containerd[1505]: time="2025-02-13T22:23:19.113536258Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 22:23:19.113611 containerd[1505]: time="2025-02-13T22:23:19.113557587Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 22:23:19.113611 containerd[1505]: time="2025-02-13T22:23:19.113584122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 22:23:19.113775 containerd[1505]: time="2025-02-13T22:23:19.113614018Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 22:23:19.113775 containerd[1505]: time="2025-02-13T22:23:19.113647564Z" level=info msg="NRI interface is disabled by configuration." Feb 13 22:23:19.115416 containerd[1505]: time="2025-02-13T22:23:19.113669420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 22:23:19.115470 containerd[1505]: time="2025-02-13T22:23:19.114938391Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 22:23:19.115470 containerd[1505]: time="2025-02-13T22:23:19.115048225Z" level=info msg="Connect containerd service" Feb 13 22:23:19.115470 containerd[1505]: time="2025-02-13T22:23:19.115118408Z" level=info msg="using legacy CRI server" Feb 13 22:23:19.115470 containerd[1505]: time="2025-02-13T22:23:19.115148679Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 22:23:19.116511 containerd[1505]: time="2025-02-13T22:23:19.116477184Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 22:23:19.119019 containerd[1505]: time="2025-02-13T22:23:19.118968604Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 22:23:19.119388 
containerd[1505]: time="2025-02-13T22:23:19.119199282Z" level=info msg="Start subscribing containerd event" Feb 13 22:23:19.119388 containerd[1505]: time="2025-02-13T22:23:19.119331174Z" level=info msg="Start recovering state" Feb 13 22:23:19.120915 containerd[1505]: time="2025-02-13T22:23:19.120538346Z" level=info msg="Start event monitor" Feb 13 22:23:19.120915 containerd[1505]: time="2025-02-13T22:23:19.120593188Z" level=info msg="Start snapshots syncer" Feb 13 22:23:19.120915 containerd[1505]: time="2025-02-13T22:23:19.120615780Z" level=info msg="Start cni network conf syncer for default" Feb 13 22:23:19.120915 containerd[1505]: time="2025-02-13T22:23:19.120634884Z" level=info msg="Start streaming server" Feb 13 22:23:19.123404 containerd[1505]: time="2025-02-13T22:23:19.121727718Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 22:23:19.123404 containerd[1505]: time="2025-02-13T22:23:19.121850374Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 22:23:19.122124 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 22:23:19.124993 containerd[1505]: time="2025-02-13T22:23:19.124943988Z" level=info msg="containerd successfully booted in 0.124379s" Feb 13 22:23:19.315525 sshd_keygen[1514]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 22:23:19.350932 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 22:23:19.361568 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 22:23:19.372834 systemd[1]: Started sshd@0-10.230.12.194:22-147.75.109.163:36666.service - OpenSSH per-connection server daemon (147.75.109.163:36666). Feb 13 22:23:19.389698 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 22:23:19.390054 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 22:23:19.401658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 22:23:19.434940 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 22:23:19.922927 systemd-timesyncd[1430]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). Feb 13 22:23:19.923015 systemd-timesyncd[1430]: Initial clock synchronization to Thu 2025-02-13 22:23:19.922691 UTC. Feb 13 22:23:19.924628 systemd-resolved[1420]: Clock change detected. Flushing caches. Feb 13 22:23:19.932769 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 22:23:19.939899 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 22:23:19.941825 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 22:23:19.990120 tar[1493]: linux-amd64/LICENSE Feb 13 22:23:19.990981 tar[1493]: linux-amd64/README.md Feb 13 22:23:20.006843 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 22:23:20.547771 systemd-networkd[1419]: eth0: Gained IPv6LL Feb 13 22:23:20.551897 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 22:23:20.555087 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 22:23:20.562929 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:23:20.565856 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 22:23:20.598388 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Feb 13 22:23:20.825662 sshd[1576]: Accepted publickey for core from 147.75.109.163 port 36666 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:20.827359 sshd-session[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:20.847811 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 22:23:20.848077 systemd-logind[1489]: New session 1 of user core. Feb 13 22:23:20.858022 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 22:23:20.883036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 22:23:20.896855 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 22:23:20.905165 (systemd)[1602]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 22:23:21.051347 systemd[1602]: Queued start job for default target default.target. Feb 13 22:23:21.056690 systemd[1602]: Created slice app.slice - User Application Slice. Feb 13 22:23:21.056752 systemd[1602]: Reached target paths.target - Paths. Feb 13 22:23:21.056778 systemd[1602]: Reached target timers.target - Timers. Feb 13 22:23:21.059575 systemd[1602]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 22:23:21.080352 systemd[1602]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 22:23:21.080605 systemd[1602]: Reached target sockets.target - Sockets. Feb 13 22:23:21.080633 systemd[1602]: Reached target basic.target - Basic System. Feb 13 22:23:21.080707 systemd[1602]: Reached target default.target - Main User Target. Feb 13 22:23:21.080790 systemd[1602]: Startup finished in 164ms. Feb 13 22:23:21.081270 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 22:23:21.092130 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 22:23:21.494973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:23:21.501508 (kubelet)[1617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:23:21.752895 systemd[1]: Started sshd@1-10.230.12.194:22-147.75.109.163:36624.service - OpenSSH per-connection server daemon (147.75.109.163:36624). Feb 13 22:23:22.006890 systemd-networkd[1419]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8330:24:19ff:fee6:cc2/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8330:24:19ff:fee6:cc2/64 assigned by NDisc. Feb 13 22:23:22.006910 systemd-networkd[1419]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Feb 13 22:23:22.131526 kubelet[1617]: E0213 22:23:22.131423 1617 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:23:22.135079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:23:22.135419 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:23:22.136257 systemd[1]: kubelet.service: Consumed 1.040s CPU time. 
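kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written by kubeadm init or kubeadm join, so these early failures and restarts are expected. A minimal sketch of the KubeletConfiguration the error refers to, with illustrative values rather than configuration taken from this host:

    mkdir -p /var/lib/kubelet
    cat > /var/lib/kubelet/config.yaml <<'EOF'
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    EOF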
Feb 13 22:23:22.665142 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 36624 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:22.667641 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:22.676381 systemd-logind[1489]: New session 2 of user core. Feb 13 22:23:22.687885 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 22:23:23.290423 sshd[1631]: Connection closed by 147.75.109.163 port 36624 Feb 13 22:23:23.289686 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Feb 13 22:23:23.294296 systemd[1]: sshd@1-10.230.12.194:22-147.75.109.163:36624.service: Deactivated successfully. Feb 13 22:23:23.296779 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 22:23:23.298707 systemd-logind[1489]: Session 2 logged out. Waiting for processes to exit. Feb 13 22:23:23.300250 systemd-logind[1489]: Removed session 2. Feb 13 22:23:23.456469 systemd[1]: Started sshd@2-10.230.12.194:22-147.75.109.163:36634.service - OpenSSH per-connection server daemon (147.75.109.163:36634). Feb 13 22:23:24.347312 sshd[1636]: Accepted publickey for core from 147.75.109.163 port 36634 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:24.349538 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:24.357140 systemd-logind[1489]: New session 3 of user core. Feb 13 22:23:24.365776 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 22:23:24.963920 sshd[1638]: Connection closed by 147.75.109.163 port 36634 Feb 13 22:23:24.968868 systemd[1]: sshd@2-10.230.12.194:22-147.75.109.163:36634.service: Deactivated successfully. Feb 13 22:23:24.964522 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Feb 13 22:23:24.974558 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 22:23:24.980011 systemd-logind[1489]: Session 3 logged out. Waiting for processes to exit. Feb 13 22:23:24.983213 systemd-logind[1489]: Removed session 3. Feb 13 22:23:25.003193 login[1584]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 22:23:25.006040 login[1583]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 22:23:25.012412 systemd-logind[1489]: New session 4 of user core. Feb 13 22:23:25.020711 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 22:23:25.025291 systemd-logind[1489]: New session 5 of user core. Feb 13 22:23:25.031649 systemd[1]: Started session-5.scope - Session 5 of User core. 
Feb 13 22:23:25.818550 coreos-metadata[1479]: Feb 13 22:23:25.818 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:23:25.846222 coreos-metadata[1479]: Feb 13 22:23:25.846 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 22:23:25.853896 coreos-metadata[1479]: Feb 13 22:23:25.853 INFO Fetch failed with 404: resource not found Feb 13 22:23:25.853985 coreos-metadata[1479]: Feb 13 22:23:25.853 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 22:23:25.854767 coreos-metadata[1479]: Feb 13 22:23:25.854 INFO Fetch successful Feb 13 22:23:25.854984 coreos-metadata[1479]: Feb 13 22:23:25.854 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 22:23:25.867747 coreos-metadata[1479]: Feb 13 22:23:25.867 INFO Fetch successful Feb 13 22:23:25.867932 coreos-metadata[1479]: Feb 13 22:23:25.867 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 22:23:25.883599 coreos-metadata[1479]: Feb 13 22:23:25.883 INFO Fetch successful Feb 13 22:23:25.883759 coreos-metadata[1479]: Feb 13 22:23:25.883 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 22:23:25.898764 coreos-metadata[1479]: Feb 13 22:23:25.898 INFO Fetch successful Feb 13 22:23:25.898864 coreos-metadata[1479]: Feb 13 22:23:25.898 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 22:23:25.916837 coreos-metadata[1479]: Feb 13 22:23:25.916 INFO Fetch successful Feb 13 22:23:25.943195 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 22:23:25.944225 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 22:23:26.417120 coreos-metadata[1549]: Feb 13 22:23:26.416 WARN failed to locate config-drive, using the metadata service API instead Feb 13 22:23:26.440521 coreos-metadata[1549]: Feb 13 22:23:26.440 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 22:23:26.463980 coreos-metadata[1549]: Feb 13 22:23:26.463 INFO Fetch successful Feb 13 22:23:26.464160 coreos-metadata[1549]: Feb 13 22:23:26.464 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 22:23:26.529884 coreos-metadata[1549]: Feb 13 22:23:26.529 INFO Fetch successful Feb 13 22:23:26.532829 unknown[1549]: wrote ssh authorized keys file for user: core Feb 13 22:23:26.569387 update-ssh-keys[1678]: Updated "/home/core/.ssh/authorized_keys" Feb 13 22:23:26.570070 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 22:23:26.572372 systemd[1]: Finished sshkeys.service. Feb 13 22:23:26.575795 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 22:23:26.576171 systemd[1]: Startup finished in 1.375s (kernel) + 14.774s (initrd) + 11.606s (userspace) = 27.755s. Feb 13 22:23:32.249290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 22:23:32.261074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:23:32.415749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
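coreos-metadata finds no config-drive and falls back to the OpenStack/EC2-style metadata service at 169.254.169.254. The same endpoints shown in the log can be queried by hand:

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/instance-id
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4
    curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key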
Feb 13 22:23:32.432914 (kubelet)[1689]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:23:32.495073 kubelet[1689]: E0213 22:23:32.494973 1689 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:23:32.498706 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:23:32.498980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:23:35.129777 systemd[1]: Started sshd@3-10.230.12.194:22-147.75.109.163:33024.service - OpenSSH per-connection server daemon (147.75.109.163:33024). Feb 13 22:23:36.040967 sshd[1698]: Accepted publickey for core from 147.75.109.163 port 33024 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:36.043346 sshd-session[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:36.051901 systemd-logind[1489]: New session 6 of user core. Feb 13 22:23:36.063635 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 22:23:36.665464 sshd[1700]: Connection closed by 147.75.109.163 port 33024 Feb 13 22:23:36.666536 sshd-session[1698]: pam_unix(sshd:session): session closed for user core Feb 13 22:23:36.671814 systemd[1]: sshd@3-10.230.12.194:22-147.75.109.163:33024.service: Deactivated successfully. Feb 13 22:23:36.674090 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 22:23:36.675019 systemd-logind[1489]: Session 6 logged out. Waiting for processes to exit. Feb 13 22:23:36.676690 systemd-logind[1489]: Removed session 6. Feb 13 22:23:36.833904 systemd[1]: Started sshd@4-10.230.12.194:22-147.75.109.163:33028.service - OpenSSH per-connection server daemon (147.75.109.163:33028). Feb 13 22:23:37.742679 sshd[1705]: Accepted publickey for core from 147.75.109.163 port 33028 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:37.745031 sshd-session[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:37.752716 systemd-logind[1489]: New session 7 of user core. Feb 13 22:23:37.756652 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 22:23:38.368972 sshd[1707]: Connection closed by 147.75.109.163 port 33028 Feb 13 22:23:38.370077 sshd-session[1705]: pam_unix(sshd:session): session closed for user core Feb 13 22:23:38.374897 systemd[1]: sshd@4-10.230.12.194:22-147.75.109.163:33028.service: Deactivated successfully. Feb 13 22:23:38.377378 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 22:23:38.379100 systemd-logind[1489]: Session 7 logged out. Waiting for processes to exit. Feb 13 22:23:38.382943 systemd-logind[1489]: Removed session 7. Feb 13 22:23:38.539875 systemd[1]: Started sshd@5-10.230.12.194:22-147.75.109.163:33038.service - OpenSSH per-connection server daemon (147.75.109.163:33038). Feb 13 22:23:39.435227 sshd[1712]: Accepted publickey for core from 147.75.109.163 port 33038 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:39.437497 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:39.447486 systemd-logind[1489]: New session 8 of user core. 
Feb 13 22:23:39.454620 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 22:23:40.059616 sshd[1714]: Connection closed by 147.75.109.163 port 33038 Feb 13 22:23:40.060696 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Feb 13 22:23:40.065059 systemd[1]: sshd@5-10.230.12.194:22-147.75.109.163:33038.service: Deactivated successfully. Feb 13 22:23:40.067459 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 22:23:40.069124 systemd-logind[1489]: Session 8 logged out. Waiting for processes to exit. Feb 13 22:23:40.070695 systemd-logind[1489]: Removed session 8. Feb 13 22:23:40.219771 systemd[1]: Started sshd@6-10.230.12.194:22-147.75.109.163:45430.service - OpenSSH per-connection server daemon (147.75.109.163:45430). Feb 13 22:23:41.108441 sshd[1719]: Accepted publickey for core from 147.75.109.163 port 45430 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:23:41.110564 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:23:41.119236 systemd-logind[1489]: New session 9 of user core. Feb 13 22:23:41.125672 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 22:23:41.596626 sudo[1722]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 22:23:41.597130 sudo[1722]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 22:23:42.106935 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 22:23:42.107077 (dockerd)[1739]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 22:23:42.536894 dockerd[1739]: time="2025-02-13T22:23:42.536758904Z" level=info msg="Starting up" Feb 13 22:23:42.542388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 22:23:42.553305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:23:42.722932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:23:42.739186 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:23:42.823872 kubelet[1761]: E0213 22:23:42.823584 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:23:42.831968 systemd[1]: var-lib-docker-metacopy\x2dcheck4076276567-merged.mount: Deactivated successfully. Feb 13 22:23:42.833056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:23:42.833353 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:23:42.870091 dockerd[1739]: time="2025-02-13T22:23:42.870003898Z" level=info msg="Loading containers: start." Feb 13 22:23:43.082626 kernel: Initializing XFRM netlink socket Feb 13 22:23:43.203798 systemd-networkd[1419]: docker0: Link UP Feb 13 22:23:43.239509 dockerd[1739]: time="2025-02-13T22:23:43.239357373Z" level=info msg="Loading containers: done." Feb 13 22:23:43.264846 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1048436102-merged.mount: Deactivated successfully. 
Feb 13 22:23:43.270597 dockerd[1739]: time="2025-02-13T22:23:43.270517844Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 22:23:43.270754 dockerd[1739]: time="2025-02-13T22:23:43.270681820Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Feb 13 22:23:43.270975 dockerd[1739]: time="2025-02-13T22:23:43.270911283Z" level=info msg="Daemon has completed initialization" Feb 13 22:23:43.314566 dockerd[1739]: time="2025-02-13T22:23:43.314474607Z" level=info msg="API listen on /run/docker.sock" Feb 13 22:23:43.315652 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 22:23:44.568515 containerd[1505]: time="2025-02-13T22:23:44.567664305Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\"" Feb 13 22:23:45.366203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111044282.mount: Deactivated successfully. Feb 13 22:23:47.380380 containerd[1505]: time="2025-02-13T22:23:47.380237144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:47.382105 containerd[1505]: time="2025-02-13T22:23:47.382035125Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.6: active requests=0, bytes read=27976596" Feb 13 22:23:47.383284 containerd[1505]: time="2025-02-13T22:23:47.383218439Z" level=info msg="ImageCreate event name:\"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:47.387351 containerd[1505]: time="2025-02-13T22:23:47.387297785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:47.389626 containerd[1505]: time="2025-02-13T22:23:47.389212629Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.6\" with image id \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:be0a2d815793b0408d921a50b82759e654cf1bba718cac480498391926902905\", size \"27973388\" in 2.821409377s" Feb 13 22:23:47.389626 containerd[1505]: time="2025-02-13T22:23:47.389271197Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.6\" returns image reference \"sha256:1372127edc9da70a68712c470a11f621ed256e8be0dfec4c4d58ca09109352a3\"" Feb 13 22:23:47.392250 containerd[1505]: time="2025-02-13T22:23:47.392195821Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\"" Feb 13 22:23:49.809502 containerd[1505]: time="2025-02-13T22:23:49.809236679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:49.814080 containerd[1505]: time="2025-02-13T22:23:49.813997532Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.6: active requests=0, bytes read=24708201" Feb 13 22:23:49.818439 containerd[1505]: time="2025-02-13T22:23:49.816694217Z" level=info msg="ImageCreate event name:\"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:49.821519 containerd[1505]: time="2025-02-13T22:23:49.821461533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:49.823242 containerd[1505]: time="2025-02-13T22:23:49.823182551Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.6\" with image id \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:63166e537a82855ac9b54ffa8b510429fe799ed9b062bf6b788b74e1d5995d12\", size \"26154739\" in 2.430802056s" Feb 13 22:23:49.823360 containerd[1505]: time="2025-02-13T22:23:49.823244052Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.6\" returns image reference \"sha256:5f23cb154eea1f587685082e456e95e5480c1d459849b1c634119d7de897e34e\"" Feb 13 22:23:49.824356 containerd[1505]: time="2025-02-13T22:23:49.824315202Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\"" Feb 13 22:23:51.557295 containerd[1505]: time="2025-02-13T22:23:51.557172457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:51.559424 containerd[1505]: time="2025-02-13T22:23:51.559142529Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.6: active requests=0, bytes read=18652433" Feb 13 22:23:51.560421 containerd[1505]: time="2025-02-13T22:23:51.560356690Z" level=info msg="ImageCreate event name:\"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:51.564939 containerd[1505]: time="2025-02-13T22:23:51.564844821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:51.568451 containerd[1505]: time="2025-02-13T22:23:51.566535978Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.6\" with image id \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:8a64af33c57346355dc3cc6f9225dbe771da30e2f427e802ce2340ec3b5dd9b5\", size \"20098989\" in 1.742167337s" Feb 13 22:23:51.568451 containerd[1505]: time="2025-02-13T22:23:51.566588139Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.6\" returns image reference \"sha256:9195ad415d31e3c2df6dddf4603bc56915b71486f514455bc3b5389b9b0ed9c1\"" Feb 13 22:23:51.571324 containerd[1505]: time="2025-02-13T22:23:51.571270913Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\"" Feb 13 22:23:52.053283 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 13 22:23:52.999365 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 22:23:53.010574 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:23:53.216885 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 22:23:53.218286 (kubelet)[2024]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:23:53.306549 kubelet[2024]: E0213 22:23:53.305074 2024 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:23:53.310680 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:23:53.310955 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:23:53.382607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958317214.mount: Deactivated successfully. Feb 13 22:23:54.122511 containerd[1505]: time="2025-02-13T22:23:54.122286802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:54.124040 containerd[1505]: time="2025-02-13T22:23:54.123899613Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229116" Feb 13 22:23:54.125101 containerd[1505]: time="2025-02-13T22:23:54.125036729Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:54.128909 containerd[1505]: time="2025-02-13T22:23:54.128831155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:54.130364 containerd[1505]: time="2025-02-13T22:23:54.129887872Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.558542603s" Feb 13 22:23:54.130364 containerd[1505]: time="2025-02-13T22:23:54.129964015Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\"" Feb 13 22:23:54.132652 containerd[1505]: time="2025-02-13T22:23:54.132592480Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 22:23:54.749111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819312868.mount: Deactivated successfully. 
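The kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is typically only written once `kubeadm init` or `kubeadm join` runs, so systemd keeps rescheduling the unit (the restart counter is already at 3 above). A minimal sketch for counting these crash-loop iterations in an exported journal, assuming the log was dumped to a text file (for example `journalctl -u kubelet.service > kubelet.log`) and keeps the wording shown above:

```python
#!/usr/bin/env python3
"""Count kubelet crash-loop restarts in an exported journal (sketch only)."""
import re
import sys

RESTART = re.compile(r"restart counter is at (\d+)")
MISSING_CONFIG = re.compile(r"failed to load kubelet config file")

def summarize(path):
    highest, failures = 0, 0
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = RESTART.search(line)
            if m:
                highest = max(highest, int(m.group(1)))
            if MISSING_CONFIG.search(line):
                failures += 1
    print(f"highest restart counter: {highest}")
    print(f"missing-config failures: {failures}")

if __name__ == "__main__":
    summarize(sys.argv[1] if len(sys.argv) > 1 else "kubelet.log")
```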
Feb 13 22:23:55.877176 containerd[1505]: time="2025-02-13T22:23:55.876889659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:55.878635 containerd[1505]: time="2025-02-13T22:23:55.878599575Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Feb 13 22:23:55.879522 containerd[1505]: time="2025-02-13T22:23:55.879437846Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:55.883530 containerd[1505]: time="2025-02-13T22:23:55.883401140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:55.885369 containerd[1505]: time="2025-02-13T22:23:55.885159568Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.752517245s" Feb 13 22:23:55.885369 containerd[1505]: time="2025-02-13T22:23:55.885205645Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 22:23:55.886971 containerd[1505]: time="2025-02-13T22:23:55.886928930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 22:23:56.480204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116295008.mount: Deactivated successfully. 
Feb 13 22:23:56.485413 containerd[1505]: time="2025-02-13T22:23:56.485324116Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:56.487340 containerd[1505]: time="2025-02-13T22:23:56.487252171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Feb 13 22:23:56.488556 containerd[1505]: time="2025-02-13T22:23:56.488487917Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:56.491345 containerd[1505]: time="2025-02-13T22:23:56.491284262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:23:56.492579 containerd[1505]: time="2025-02-13T22:23:56.492537990Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 605.565804ms" Feb 13 22:23:56.492694 containerd[1505]: time="2025-02-13T22:23:56.492583647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 22:23:56.493255 containerd[1505]: time="2025-02-13T22:23:56.493139989Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Feb 13 22:23:57.094810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510768.mount: Deactivated successfully. Feb 13 22:24:01.046817 containerd[1505]: time="2025-02-13T22:24:01.046563240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:01.049193 containerd[1505]: time="2025-02-13T22:24:01.048577050Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Feb 13 22:24:01.049883 containerd[1505]: time="2025-02-13T22:24:01.049544288Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:01.055307 containerd[1505]: time="2025-02-13T22:24:01.055224515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:01.058388 containerd[1505]: time="2025-02-13T22:24:01.057548489Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.564368224s" Feb 13 22:24:01.058388 containerd[1505]: time="2025-02-13T22:24:01.057607200Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Feb 13 22:24:03.499922 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
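With the etcd pull above, every control-plane image (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) has been fetched, and each containerd entry reports how long the pull took. A small parser can tally those timings from journal text like this; it is only a sketch and keys on containerd's 'Pulled image "<ref>" ... in <duration>' wording, with durations appearing here in both ms and s:

```python
#!/usr/bin/env python3
"""Tally containerd image pull durations from journal text (sketch only)."""
import re
import sys

PULLED = re.compile(
    r'Pulled image \\?"(?P<ref>[^"\\]+)\\?".*?'
    r' in (?P<val>[0-9]+(?:\.[0-9]+)?)(?P<unit>ms|s)'
)

def pull_times(lines):
    for line in lines:
        m = PULLED.search(line)
        if m:
            seconds = float(m.group("val"))
            if m.group("unit") == "ms":
                seconds /= 1000.0          # normalize everything to seconds
            yield m.group("ref"), seconds

if __name__ == "__main__":
    total = 0.0
    for ref, seconds in pull_times(sys.stdin):
        total += seconds
        print(f"{seconds:9.3f}s  {ref}")
    print(f"{total:9.3f}s  total")
```

Feeding it the exported journal (`journalctl -u containerd | python3 pull_times.py`) lists each image with its pull time and a grand total.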
Feb 13 22:24:03.510659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:24:03.671721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:03.681853 (kubelet)[2165]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 22:24:03.785572 kubelet[2165]: E0213 22:24:03.785327 2165 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 22:24:03.789594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 22:24:03.789845 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 22:24:03.822292 update_engine[1490]: I20250213 22:24:03.822149 1490 update_attempter.cc:509] Updating boot flags... Feb 13 22:24:03.878439 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2180) Feb 13 22:24:04.016423 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2183) Feb 13 22:24:06.829292 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:06.839925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:24:06.894351 systemd[1]: Reloading requested from client PID 2194 ('systemctl') (unit session-9.scope)... Feb 13 22:24:06.894410 systemd[1]: Reloading... Feb 13 22:24:07.079447 zram_generator::config[2233]: No configuration found. Feb 13 22:24:07.226410 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:24:07.339972 systemd[1]: Reloading finished in 444 ms. Feb 13 22:24:07.420799 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 22:24:07.420954 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 22:24:07.421358 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:07.426747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:24:07.595911 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:07.612889 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 22:24:07.681137 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 22:24:07.681137 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 22:24:07.681137 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
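The warnings above flag kubelet flags (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) that are expected to move into the file passed via --config. To see at a glance which flags a node is still being warned about, a short filter over the exported journal is enough; this sketch relies on the exact 'Flag --<name> has been deprecated' wording shown here:

```python
#!/usr/bin/env python3
"""List kubelet flags the journal reports as deprecated (sketch only)."""
import re
import sys

DEPRECATED = re.compile(r"Flag (--[a-z0-9-]+) has been deprecated")

if __name__ == "__main__":
    flags = sorted({m.group(1) for line in sys.stdin for m in DEPRECATED.finditer(line)})
    for flag in flags:
        print(flag)
```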
Feb 13 22:24:07.681855 kubelet[2300]: I0213 22:24:07.681248 2300 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 22:24:08.430447 kubelet[2300]: I0213 22:24:08.430362 2300 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 22:24:08.431502 kubelet[2300]: I0213 22:24:08.430720 2300 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 22:24:08.431502 kubelet[2300]: I0213 22:24:08.431183 2300 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 22:24:08.455975 kubelet[2300]: I0213 22:24:08.455300 2300 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 22:24:08.456458 kubelet[2300]: E0213 22:24:08.456410 2300 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:08.474709 kubelet[2300]: E0213 22:24:08.474644 2300 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 22:24:08.475031 kubelet[2300]: I0213 22:24:08.475003 2300 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 22:24:08.483622 kubelet[2300]: I0213 22:24:08.483582 2300 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 22:24:08.485456 kubelet[2300]: I0213 22:24:08.485428 2300 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 22:24:08.486807 kubelet[2300]: I0213 22:24:08.485858 2300 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 22:24:08.486807 kubelet[2300]: I0213 22:24:08.485926 2300 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-x84ug.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 22:24:08.486807 kubelet[2300]: I0213 22:24:08.486274 2300 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 22:24:08.486807 kubelet[2300]: I0213 22:24:08.486311 2300 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 22:24:08.487292 kubelet[2300]: I0213 22:24:08.486561 2300 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:24:08.489252 kubelet[2300]: I0213 22:24:08.489228 2300 kubelet.go:408] "Attempting to sync node with API server" Feb 13 22:24:08.489383 kubelet[2300]: I0213 22:24:08.489362 2300 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 22:24:08.489588 kubelet[2300]: I0213 22:24:08.489568 2300 kubelet.go:314] "Adding apiserver pod source" Feb 13 22:24:08.489772 kubelet[2300]: I0213 22:24:08.489749 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 22:24:08.498827 kubelet[2300]: W0213 22:24:08.498726 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-x84ug.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:08.498960 kubelet[2300]: E0213 22:24:08.498847 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.12.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-x84ug.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:08.500292 kubelet[2300]: I0213 22:24:08.500105 2300 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 22:24:08.504125 kubelet[2300]: I0213 22:24:08.503900 2300 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 22:24:08.505428 kubelet[2300]: W0213 22:24:08.504733 2300 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 22:24:08.506672 kubelet[2300]: W0213 22:24:08.506367 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:08.506672 kubelet[2300]: E0213 22:24:08.506474 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:08.507124 kubelet[2300]: I0213 22:24:08.507097 2300 server.go:1269] "Started kubelet" Feb 13 22:24:08.507623 kubelet[2300]: I0213 22:24:08.507578 2300 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 22:24:08.509456 kubelet[2300]: I0213 22:24:08.509421 2300 server.go:460] "Adding debug handlers to kubelet server" Feb 13 22:24:08.512552 kubelet[2300]: I0213 22:24:08.512504 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 22:24:08.513247 kubelet[2300]: I0213 22:24:08.513216 2300 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 22:24:08.514010 kubelet[2300]: I0213 22:24:08.513755 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 22:24:08.520632 kubelet[2300]: E0213 22:24:08.514524 2300 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.12.194:6443/api/v1/namespaces/default/events\": dial tcp 10.230.12.194:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-x84ug.gb1.brightbox.com.1823e4cc9a3966de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-x84ug.gb1.brightbox.com,UID:srv-x84ug.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-x84ug.gb1.brightbox.com,},FirstTimestamp:2025-02-13 22:24:08.507057886 +0000 UTC m=+0.887911904,LastTimestamp:2025-02-13 22:24:08.507057886 +0000 UTC m=+0.887911904,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-x84ug.gb1.brightbox.com,}" Feb 13 22:24:08.520632 kubelet[2300]: I0213 22:24:08.518842 2300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 22:24:08.528808 kubelet[2300]: E0213 
22:24:08.526101 2300 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-x84ug.gb1.brightbox.com\" not found" Feb 13 22:24:08.528808 kubelet[2300]: I0213 22:24:08.526179 2300 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 22:24:08.528808 kubelet[2300]: I0213 22:24:08.527936 2300 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 22:24:08.528808 kubelet[2300]: I0213 22:24:08.528085 2300 reconciler.go:26] "Reconciler: start to sync state" Feb 13 22:24:08.531349 kubelet[2300]: I0213 22:24:08.530375 2300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 22:24:08.532696 kubelet[2300]: W0213 22:24:08.532639 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:08.533625 kubelet[2300]: E0213 22:24:08.533592 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.12.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:08.534585 kubelet[2300]: E0213 22:24:08.532958 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-x84ug.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.194:6443: connect: connection refused" interval="200ms" Feb 13 22:24:08.534706 kubelet[2300]: E0213 22:24:08.534067 2300 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 22:24:08.534802 kubelet[2300]: I0213 22:24:08.534085 2300 factory.go:221] Registration of the containerd container factory successfully Feb 13 22:24:08.534893 kubelet[2300]: I0213 22:24:08.534875 2300 factory.go:221] Registration of the systemd container factory successfully Feb 13 22:24:08.571283 kubelet[2300]: I0213 22:24:08.571211 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 22:24:08.575838 kubelet[2300]: I0213 22:24:08.575696 2300 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 22:24:08.575838 kubelet[2300]: I0213 22:24:08.575768 2300 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 22:24:08.575838 kubelet[2300]: I0213 22:24:08.575809 2300 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 22:24:08.576049 kubelet[2300]: E0213 22:24:08.575889 2300 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 22:24:08.583717 kubelet[2300]: I0213 22:24:08.583255 2300 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 22:24:08.583717 kubelet[2300]: I0213 22:24:08.583289 2300 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 22:24:08.583717 kubelet[2300]: I0213 22:24:08.583322 2300 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:24:08.583717 kubelet[2300]: W0213 22:24:08.583599 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:08.583717 kubelet[2300]: E0213 22:24:08.583649 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:08.585572 kubelet[2300]: I0213 22:24:08.585541 2300 policy_none.go:49] "None policy: Start" Feb 13 22:24:08.586566 kubelet[2300]: I0213 22:24:08.586539 2300 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 22:24:08.586654 kubelet[2300]: I0213 22:24:08.586583 2300 state_mem.go:35] "Initializing new in-memory state store" Feb 13 22:24:08.596031 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 22:24:08.612803 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 22:24:08.617804 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 22:24:08.627191 kubelet[2300]: E0213 22:24:08.627131 2300 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-x84ug.gb1.brightbox.com\" not found" Feb 13 22:24:08.632367 kubelet[2300]: I0213 22:24:08.629312 2300 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 22:24:08.632367 kubelet[2300]: I0213 22:24:08.629695 2300 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 22:24:08.632367 kubelet[2300]: I0213 22:24:08.629727 2300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 22:24:08.632367 kubelet[2300]: I0213 22:24:08.630960 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 22:24:08.636938 kubelet[2300]: E0213 22:24:08.636903 2300 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-x84ug.gb1.brightbox.com\" not found" Feb 13 22:24:08.693266 systemd[1]: Created slice kubepods-burstable-podad897eefc74c4be40860043f1ff4975b.slice - libcontainer container kubepods-burstable-podad897eefc74c4be40860043f1ff4975b.slice. 
Feb 13 22:24:08.716248 systemd[1]: Created slice kubepods-burstable-pod227191e3be16bab6fc5cd70e75bbdccd.slice - libcontainer container kubepods-burstable-pod227191e3be16bab6fc5cd70e75bbdccd.slice. Feb 13 22:24:08.730860 systemd[1]: Created slice kubepods-burstable-podd60822f27e0b659bbae12cb4ebb39e35.slice - libcontainer container kubepods-burstable-podd60822f27e0b659bbae12cb4ebb39e35.slice. Feb 13 22:24:08.735186 kubelet[2300]: I0213 22:24:08.734866 2300 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.735186 kubelet[2300]: E0213 22:24:08.735121 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-x84ug.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.194:6443: connect: connection refused" interval="400ms" Feb 13 22:24:08.735951 kubelet[2300]: E0213 22:24:08.735897 2300 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.194:6443/api/v1/nodes\": dial tcp 10.230.12.194:6443: connect: connection refused" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.829787 kubelet[2300]: I0213 22:24:08.829664 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-k8s-certs\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.829787 kubelet[2300]: I0213 22:24:08.829750 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.829787 kubelet[2300]: I0213 22:24:08.829793 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-ca-certs\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830122 kubelet[2300]: I0213 22:24:08.829822 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-kubeconfig\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830122 kubelet[2300]: I0213 22:24:08.829854 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830122 kubelet[2300]: I0213 22:24:08.829880 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-ca-certs\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830122 kubelet[2300]: I0213 22:24:08.829917 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-flexvolume-dir\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830122 kubelet[2300]: I0213 22:24:08.829944 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-k8s-certs\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.830374 kubelet[2300]: I0213 22:24:08.829970 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d60822f27e0b659bbae12cb4ebb39e35-kubeconfig\") pod \"kube-scheduler-srv-x84ug.gb1.brightbox.com\" (UID: \"d60822f27e0b659bbae12cb4ebb39e35\") " pod="kube-system/kube-scheduler-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.939437 kubelet[2300]: I0213 22:24:08.939303 2300 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:08.939833 kubelet[2300]: E0213 22:24:08.939788 2300 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.194:6443/api/v1/nodes\": dial tcp 10.230.12.194:6443: connect: connection refused" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:09.014752 containerd[1505]: time="2025-02-13T22:24:09.014571872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-x84ug.gb1.brightbox.com,Uid:ad897eefc74c4be40860043f1ff4975b,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:09.027729 containerd[1505]: time="2025-02-13T22:24:09.027472816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-x84ug.gb1.brightbox.com,Uid:227191e3be16bab6fc5cd70e75bbdccd,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:09.035860 containerd[1505]: time="2025-02-13T22:24:09.035376112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-x84ug.gb1.brightbox.com,Uid:d60822f27e0b659bbae12cb4ebb39e35,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:09.136211 kubelet[2300]: E0213 22:24:09.136134 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-x84ug.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.194:6443: connect: connection refused" interval="800ms" Feb 13 22:24:09.337122 kubelet[2300]: W0213 22:24:09.336807 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.12.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:09.337122 kubelet[2300]: E0213 22:24:09.336920 2300 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.12.194:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:09.343344 kubelet[2300]: I0213 22:24:09.343280 2300 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:09.343772 kubelet[2300]: E0213 22:24:09.343733 2300 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.194:6443/api/v1/nodes\": dial tcp 10.230.12.194:6443: connect: connection refused" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:09.616384 kubelet[2300]: W0213 22:24:09.616198 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.12.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:09.616384 kubelet[2300]: E0213 22:24:09.616261 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.12.194:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:09.620411 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084125588.mount: Deactivated successfully. Feb 13 22:24:09.626657 containerd[1505]: time="2025-02-13T22:24:09.626587704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:24:09.629279 containerd[1505]: time="2025-02-13T22:24:09.629218061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 22:24:09.631132 containerd[1505]: time="2025-02-13T22:24:09.631048446Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:24:09.633183 containerd[1505]: time="2025-02-13T22:24:09.633148394Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:24:09.634381 containerd[1505]: time="2025-02-13T22:24:09.634341294Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 22:24:09.636110 containerd[1505]: time="2025-02-13T22:24:09.636075966Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 22:24:09.636285 containerd[1505]: time="2025-02-13T22:24:09.636245598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 22:24:09.645370 containerd[1505]: time="2025-02-13T22:24:09.644583208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Feb 13 22:24:09.645370 containerd[1505]: time="2025-02-13T22:24:09.644843538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 609.279371ms" Feb 13 22:24:09.648570 containerd[1505]: time="2025-02-13T22:24:09.648522840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 633.640952ms" Feb 13 22:24:09.658904 containerd[1505]: time="2025-02-13T22:24:09.658812934Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.196178ms" Feb 13 22:24:09.937242 kubelet[2300]: E0213 22:24:09.937015 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.12.194:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-x84ug.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.12.194:6443: connect: connection refused" interval="1.6s" Feb 13 22:24:09.947608 containerd[1505]: time="2025-02-13T22:24:09.946559779Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:09.947608 containerd[1505]: time="2025-02-13T22:24:09.946711522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:09.947608 containerd[1505]: time="2025-02-13T22:24:09.946737767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.950200 containerd[1505]: time="2025-02-13T22:24:09.949267795Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:09.950200 containerd[1505]: time="2025-02-13T22:24:09.949328964Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:09.950200 containerd[1505]: time="2025-02-13T22:24:09.949346723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.950200 containerd[1505]: time="2025-02-13T22:24:09.949489438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.952216 containerd[1505]: time="2025-02-13T22:24:09.951691930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.952356 containerd[1505]: time="2025-02-13T22:24:09.943753509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:09.952356 containerd[1505]: time="2025-02-13T22:24:09.950019526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:09.952356 containerd[1505]: time="2025-02-13T22:24:09.950042156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.952356 containerd[1505]: time="2025-02-13T22:24:09.950146894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:09.971193 kubelet[2300]: W0213 22:24:09.971105 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.12.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-x84ug.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:09.971374 kubelet[2300]: E0213 22:24:09.971204 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.12.194:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-x84ug.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:10.001679 systemd[1]: Started cri-containerd-7a85af3cac6d5ea76d7d2ecd3a1b9140b99989dd9846f2684971a2f2201e4de5.scope - libcontainer container 7a85af3cac6d5ea76d7d2ecd3a1b9140b99989dd9846f2684971a2f2201e4de5. Feb 13 22:24:10.005776 systemd[1]: Started cri-containerd-9e03d7ff2e6761ebb5ba11cae805c1c4f7b1c38d3f2c4cd10e0757d98b8b2ba0.scope - libcontainer container 9e03d7ff2e6761ebb5ba11cae805c1c4f7b1c38d3f2c4cd10e0757d98b8b2ba0. Feb 13 22:24:10.018593 systemd[1]: Started cri-containerd-1143ca4e842f7f5b1295a55755e0c6105ea05b48a9d9dcbb6ddb1e0565366033.scope - libcontainer container 1143ca4e842f7f5b1295a55755e0c6105ea05b48a9d9dcbb6ddb1e0565366033. 
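The three cri-containerd scopes above are the pause sandboxes for the kube-apiserver, kube-controller-manager and kube-scheduler static pods; the containerd entries just below report which sandbox id was returned for which pod. A sketch that recovers that mapping from journal text, leaning on the exact 'RunPodSandbox ... returns sandbox id' wording used in these entries:

```python
#!/usr/bin/env python3
"""Map static pod names to the sandbox ids containerd returns for them (sketch only)."""
import re
import sys

SANDBOX = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),.*?'
    r'returns sandbox id \\?"([0-9a-f]+)\\?"'
)

def sandbox_map(lines):
    mapping = {}
    for line in lines:
        m = SANDBOX.search(line)
        if m:                     # only the "returns sandbox id" entries match
            mapping[m.group(1)] = m.group(2)
    return mapping

if __name__ == "__main__":
    for pod, sandbox in sandbox_map(sys.stdin).items():
        print(f"{pod}: {sandbox}")
```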
Feb 13 22:24:10.029856 kubelet[2300]: W0213 22:24:10.026456 2300 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.12.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.12.194:6443: connect: connection refused Feb 13 22:24:10.029856 kubelet[2300]: E0213 22:24:10.026572 2300 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.12.194:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:10.124238 containerd[1505]: time="2025-02-13T22:24:10.123494560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-x84ug.gb1.brightbox.com,Uid:ad897eefc74c4be40860043f1ff4975b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e03d7ff2e6761ebb5ba11cae805c1c4f7b1c38d3f2c4cd10e0757d98b8b2ba0\"" Feb 13 22:24:10.127947 containerd[1505]: time="2025-02-13T22:24:10.127403434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-x84ug.gb1.brightbox.com,Uid:d60822f27e0b659bbae12cb4ebb39e35,Namespace:kube-system,Attempt:0,} returns sandbox id \"1143ca4e842f7f5b1295a55755e0c6105ea05b48a9d9dcbb6ddb1e0565366033\"" Feb 13 22:24:10.144231 containerd[1505]: time="2025-02-13T22:24:10.143630500Z" level=info msg="CreateContainer within sandbox \"1143ca4e842f7f5b1295a55755e0c6105ea05b48a9d9dcbb6ddb1e0565366033\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 22:24:10.144231 containerd[1505]: time="2025-02-13T22:24:10.144739767Z" level=info msg="CreateContainer within sandbox \"9e03d7ff2e6761ebb5ba11cae805c1c4f7b1c38d3f2c4cd10e0757d98b8b2ba0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 22:24:10.147558 containerd[1505]: time="2025-02-13T22:24:10.145161869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-x84ug.gb1.brightbox.com,Uid:227191e3be16bab6fc5cd70e75bbdccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a85af3cac6d5ea76d7d2ecd3a1b9140b99989dd9846f2684971a2f2201e4de5\"" Feb 13 22:24:10.148412 containerd[1505]: time="2025-02-13T22:24:10.148344988Z" level=info msg="CreateContainer within sandbox \"7a85af3cac6d5ea76d7d2ecd3a1b9140b99989dd9846f2684971a2f2201e4de5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 22:24:10.152709 kubelet[2300]: I0213 22:24:10.152630 2300 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:10.153545 kubelet[2300]: E0213 22:24:10.153510 2300 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.12.194:6443/api/v1/nodes\": dial tcp 10.230.12.194:6443: connect: connection refused" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:10.179095 containerd[1505]: time="2025-02-13T22:24:10.178869755Z" level=info msg="CreateContainer within sandbox \"9e03d7ff2e6761ebb5ba11cae805c1c4f7b1c38d3f2c4cd10e0757d98b8b2ba0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0f6e0dd234c224f03c6bc041739841db54049d494a217fc57f2ef259de8ba0c1\"" Feb 13 22:24:10.183449 containerd[1505]: time="2025-02-13T22:24:10.181863745Z" level=info msg="StartContainer for \"0f6e0dd234c224f03c6bc041739841db54049d494a217fc57f2ef259de8ba0c1\"" Feb 13 22:24:10.186340 containerd[1505]: 
time="2025-02-13T22:24:10.186302440Z" level=info msg="CreateContainer within sandbox \"1143ca4e842f7f5b1295a55755e0c6105ea05b48a9d9dcbb6ddb1e0565366033\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"29d7c4f880ba15b046b0aff3f8f72a8f1a9bcb207990c121fe82b48212d820ce\"" Feb 13 22:24:10.186856 containerd[1505]: time="2025-02-13T22:24:10.186818097Z" level=info msg="CreateContainer within sandbox \"7a85af3cac6d5ea76d7d2ecd3a1b9140b99989dd9846f2684971a2f2201e4de5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b8a4e6e08409aac204d11693a00613c2acb56ffe4f1ea74066d3e3a2d3ea2cf\"" Feb 13 22:24:10.187372 containerd[1505]: time="2025-02-13T22:24:10.187112570Z" level=info msg="StartContainer for \"29d7c4f880ba15b046b0aff3f8f72a8f1a9bcb207990c121fe82b48212d820ce\"" Feb 13 22:24:10.189623 containerd[1505]: time="2025-02-13T22:24:10.188527697Z" level=info msg="StartContainer for \"5b8a4e6e08409aac204d11693a00613c2acb56ffe4f1ea74066d3e3a2d3ea2cf\"" Feb 13 22:24:10.242771 systemd[1]: Started cri-containerd-0f6e0dd234c224f03c6bc041739841db54049d494a217fc57f2ef259de8ba0c1.scope - libcontainer container 0f6e0dd234c224f03c6bc041739841db54049d494a217fc57f2ef259de8ba0c1. Feb 13 22:24:10.256569 systemd[1]: Started cri-containerd-29d7c4f880ba15b046b0aff3f8f72a8f1a9bcb207990c121fe82b48212d820ce.scope - libcontainer container 29d7c4f880ba15b046b0aff3f8f72a8f1a9bcb207990c121fe82b48212d820ce. Feb 13 22:24:10.259624 systemd[1]: Started cri-containerd-5b8a4e6e08409aac204d11693a00613c2acb56ffe4f1ea74066d3e3a2d3ea2cf.scope - libcontainer container 5b8a4e6e08409aac204d11693a00613c2acb56ffe4f1ea74066d3e3a2d3ea2cf. Feb 13 22:24:10.390726 containerd[1505]: time="2025-02-13T22:24:10.388606503Z" level=info msg="StartContainer for \"29d7c4f880ba15b046b0aff3f8f72a8f1a9bcb207990c121fe82b48212d820ce\" returns successfully" Feb 13 22:24:10.397275 containerd[1505]: time="2025-02-13T22:24:10.396880462Z" level=info msg="StartContainer for \"0f6e0dd234c224f03c6bc041739841db54049d494a217fc57f2ef259de8ba0c1\" returns successfully" Feb 13 22:24:10.414858 containerd[1505]: time="2025-02-13T22:24:10.414816616Z" level=info msg="StartContainer for \"5b8a4e6e08409aac204d11693a00613c2acb56ffe4f1ea74066d3e3a2d3ea2cf\" returns successfully" Feb 13 22:24:10.610736 kubelet[2300]: E0213 22:24:10.610673 2300 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.12.194:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.12.194:6443: connect: connection refused" logger="UnhandledError" Feb 13 22:24:11.759283 kubelet[2300]: I0213 22:24:11.759230 2300 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:13.895931 kubelet[2300]: E0213 22:24:13.895762 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-x84ug.gb1.brightbox.com\" not found" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:13.988446 kubelet[2300]: I0213 22:24:13.988364 2300 kubelet_node_status.go:75] "Successfully registered node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:13.988617 kubelet[2300]: E0213 22:24:13.988437 2300 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-x84ug.gb1.brightbox.com\": node \"srv-x84ug.gb1.brightbox.com\" not found" Feb 13 
22:24:14.510344 kubelet[2300]: I0213 22:24:14.509987 2300 apiserver.go:52] "Watching apiserver" Feb 13 22:24:14.528667 kubelet[2300]: I0213 22:24:14.528626 2300 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 22:24:16.031577 kubelet[2300]: W0213 22:24:16.031511 2300 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:24:16.151796 systemd[1]: Reloading requested from client PID 2578 ('systemctl') (unit session-9.scope)... Feb 13 22:24:16.152470 systemd[1]: Reloading... Feb 13 22:24:16.279459 zram_generator::config[2620]: No configuration found. Feb 13 22:24:16.453661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 22:24:16.584977 systemd[1]: Reloading finished in 431 ms. Feb 13 22:24:16.657533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:24:16.668311 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 22:24:16.668805 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:16.668984 systemd[1]: kubelet.service: Consumed 1.469s CPU time, 113.6M memory peak, 0B memory swap peak. Feb 13 22:24:16.676952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 22:24:16.887898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 22:24:16.899852 (kubelet)[2681]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 22:24:16.997693 kubelet[2681]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 22:24:16.997693 kubelet[2681]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 22:24:16.997693 kubelet[2681]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 22:24:16.998268 kubelet[2681]: I0213 22:24:16.997792 2681 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 22:24:17.010345 kubelet[2681]: I0213 22:24:17.010271 2681 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 22:24:17.010345 kubelet[2681]: I0213 22:24:17.010309 2681 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 22:24:17.010667 kubelet[2681]: I0213 22:24:17.010635 2681 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 22:24:17.014542 kubelet[2681]: I0213 22:24:17.014503 2681 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
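Unlike the first start, the kubelet launched after this reload finds an already-bootstrapped client certificate at /var/lib/kubelet/pki/kubelet-client-current.pem, so it can reach the API server immediately while rotation continues in the background. A quick way to inspect that certificate's subject and expiry; this is only a sketch and assumes the openssl CLI is installed and that the combined key+cert PEM at the logged path contains a CERTIFICATE block:

```python
#!/usr/bin/env python3
"""Show subject and expiry of the kubelet's rotated client certificate (sketch only)."""
import re
import subprocess
import sys

CERT = "/var/lib/kubelet/pki/kubelet-client-current.pem"

def describe(path):
    pem = open(path, encoding="ascii").read()
    # kubelet-client-current.pem holds key and certificate together; keep only the cert block
    block = re.search(r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----", pem, re.S)
    if block is None:
        sys.exit(f"no certificate block found in {path}")
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-subject", "-enddate"],
        input=block.group(0), capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    print(describe(CERT))
```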
Feb 13 22:24:17.022237 kubelet[2681]: I0213 22:24:17.022166 2681 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 22:24:17.027683 kubelet[2681]: E0213 22:24:17.027578 2681 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 22:24:17.027683 kubelet[2681]: I0213 22:24:17.027623 2681 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 22:24:17.037564 kubelet[2681]: I0213 22:24:17.037537 2681 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 22:24:17.037748 kubelet[2681]: I0213 22:24:17.037724 2681 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 22:24:17.038042 kubelet[2681]: I0213 22:24:17.037994 2681 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 22:24:17.038308 kubelet[2681]: I0213 22:24:17.038044 2681 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-x84ug.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 22:24:17.038537 kubelet[2681]: I0213 22:24:17.038326 2681 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 22:24:17.038537 kubelet[2681]: I0213 22:24:17.038344 2681 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 22:24:17.038537 kubelet[2681]: I0213 22:24:17.038435 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:24:17.038705 kubelet[2681]: I0213 22:24:17.038624 2681 kubelet.go:408] "Attempting to sync node with API server" Feb 13 22:24:17.038705 kubelet[2681]: I0213 22:24:17.038651 2681 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 22:24:17.038705 kubelet[2681]: I0213 
22:24:17.038703 2681 kubelet.go:314] "Adding apiserver pod source" Feb 13 22:24:17.039861 kubelet[2681]: I0213 22:24:17.038731 2681 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 22:24:17.041376 kubelet[2681]: I0213 22:24:17.041338 2681 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 22:24:17.042632 kubelet[2681]: I0213 22:24:17.042221 2681 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 22:24:17.047838 kubelet[2681]: I0213 22:24:17.047458 2681 server.go:1269] "Started kubelet" Feb 13 22:24:17.052820 kubelet[2681]: I0213 22:24:17.052767 2681 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 22:24:17.055936 kubelet[2681]: I0213 22:24:17.055862 2681 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 22:24:17.064083 kubelet[2681]: I0213 22:24:17.063973 2681 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 22:24:17.066274 kubelet[2681]: I0213 22:24:17.064628 2681 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 22:24:17.066274 kubelet[2681]: I0213 22:24:17.064991 2681 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 22:24:17.068739 kubelet[2681]: I0213 22:24:17.068691 2681 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 22:24:17.068924 kubelet[2681]: E0213 22:24:17.068883 2681 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-x84ug.gb1.brightbox.com\" not found" Feb 13 22:24:17.080455 kubelet[2681]: I0213 22:24:17.079364 2681 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 22:24:17.080455 kubelet[2681]: I0213 22:24:17.079688 2681 reconciler.go:26] "Reconciler: start to sync state" Feb 13 22:24:17.094438 kubelet[2681]: I0213 22:24:17.094149 2681 server.go:460] "Adding debug handlers to kubelet server" Feb 13 22:24:17.097645 kubelet[2681]: I0213 22:24:17.097532 2681 factory.go:221] Registration of the systemd container factory successfully Feb 13 22:24:17.097871 kubelet[2681]: I0213 22:24:17.097838 2681 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 22:24:17.102206 kubelet[2681]: I0213 22:24:17.101795 2681 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 22:24:17.105557 kubelet[2681]: I0213 22:24:17.105529 2681 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 22:24:17.105634 kubelet[2681]: I0213 22:24:17.105582 2681 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 22:24:17.105634 kubelet[2681]: I0213 22:24:17.105614 2681 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 22:24:17.105748 kubelet[2681]: E0213 22:24:17.105673 2681 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 22:24:17.126796 kubelet[2681]: I0213 22:24:17.126761 2681 factory.go:221] Registration of the containerd container factory successfully Feb 13 22:24:17.128934 kubelet[2681]: E0213 22:24:17.128896 2681 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 22:24:17.200161 kubelet[2681]: I0213 22:24:17.199978 2681 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 22:24:17.200460 kubelet[2681]: I0213 22:24:17.200439 2681 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 22:24:17.201502 kubelet[2681]: I0213 22:24:17.201479 2681 state_mem.go:36] "Initialized new in-memory state store" Feb 13 22:24:17.201733 kubelet[2681]: I0213 22:24:17.201695 2681 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 22:24:17.201810 kubelet[2681]: I0213 22:24:17.201727 2681 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 22:24:17.201810 kubelet[2681]: I0213 22:24:17.201762 2681 policy_none.go:49] "None policy: Start" Feb 13 22:24:17.203065 kubelet[2681]: I0213 22:24:17.203041 2681 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 22:24:17.203186 kubelet[2681]: I0213 22:24:17.203085 2681 state_mem.go:35] "Initializing new in-memory state store" Feb 13 22:24:17.203307 kubelet[2681]: I0213 22:24:17.203282 2681 state_mem.go:75] "Updated machine memory state" Feb 13 22:24:17.205770 kubelet[2681]: E0213 22:24:17.205747 2681 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 22:24:17.213153 kubelet[2681]: I0213 22:24:17.213101 2681 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 22:24:17.215319 kubelet[2681]: I0213 22:24:17.213385 2681 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 22:24:17.215319 kubelet[2681]: I0213 22:24:17.214489 2681 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 22:24:17.216267 kubelet[2681]: I0213 22:24:17.216244 2681 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 22:24:17.336161 kubelet[2681]: I0213 22:24:17.336116 2681 kubelet_node_status.go:72] "Attempting to register node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.346418 kubelet[2681]: I0213 22:24:17.346372 2681 kubelet_node_status.go:111] "Node was previously registered" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.346537 kubelet[2681]: I0213 22:24:17.346478 2681 kubelet_node_status.go:75] "Successfully registered node" node="srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.424981 kubelet[2681]: W0213 22:24:17.422988 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:24:17.424981 kubelet[2681]: W0213 22:24:17.423922 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, 
which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:24:17.424981 kubelet[2681]: W0213 22:24:17.423969 2681 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 13 22:24:17.424981 kubelet[2681]: E0213 22:24:17.424028 2681 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482063 kubelet[2681]: I0213 22:24:17.481240 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-flexvolume-dir\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482063 kubelet[2681]: I0213 22:24:17.481300 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-k8s-certs\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482063 kubelet[2681]: I0213 22:24:17.481344 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-kubeconfig\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482063 kubelet[2681]: I0213 22:24:17.481490 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d60822f27e0b659bbae12cb4ebb39e35-kubeconfig\") pod \"kube-scheduler-srv-x84ug.gb1.brightbox.com\" (UID: \"d60822f27e0b659bbae12cb4ebb39e35\") " pod="kube-system/kube-scheduler-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482063 kubelet[2681]: I0213 22:24:17.481526 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-ca-certs\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482874 kubelet[2681]: I0213 22:24:17.481591 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-k8s-certs\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482874 kubelet[2681]: I0213 22:24:17.482560 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad897eefc74c4be40860043f1ff4975b-usr-share-ca-certificates\") pod \"kube-apiserver-srv-x84ug.gb1.brightbox.com\" (UID: \"ad897eefc74c4be40860043f1ff4975b\") " 
pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482874 kubelet[2681]: I0213 22:24:17.482597 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-ca-certs\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:17.482874 kubelet[2681]: I0213 22:24:17.482630 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/227191e3be16bab6fc5cd70e75bbdccd-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-x84ug.gb1.brightbox.com\" (UID: \"227191e3be16bab6fc5cd70e75bbdccd\") " pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" Feb 13 22:24:18.039816 kubelet[2681]: I0213 22:24:18.039735 2681 apiserver.go:52] "Watching apiserver" Feb 13 22:24:18.079978 kubelet[2681]: I0213 22:24:18.079920 2681 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 22:24:18.217721 kubelet[2681]: I0213 22:24:18.217541 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-x84ug.gb1.brightbox.com" podStartSLOduration=1.217507756 podStartE2EDuration="1.217507756s" podCreationTimestamp="2025-02-13 22:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:18.215873293 +0000 UTC m=+1.299084850" watchObservedRunningTime="2025-02-13 22:24:18.217507756 +0000 UTC m=+1.300719302" Feb 13 22:24:18.271801 kubelet[2681]: I0213 22:24:18.271565 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-x84ug.gb1.brightbox.com" podStartSLOduration=1.271513226 podStartE2EDuration="1.271513226s" podCreationTimestamp="2025-02-13 22:24:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:18.266664945 +0000 UTC m=+1.349876501" watchObservedRunningTime="2025-02-13 22:24:18.271513226 +0000 UTC m=+1.354724765" Feb 13 22:24:18.321329 kubelet[2681]: I0213 22:24:18.320904 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-x84ug.gb1.brightbox.com" podStartSLOduration=2.320875061 podStartE2EDuration="2.320875061s" podCreationTimestamp="2025-02-13 22:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:18.293119138 +0000 UTC m=+1.376330698" watchObservedRunningTime="2025-02-13 22:24:18.320875061 +0000 UTC m=+1.404086610" Feb 13 22:24:18.610792 sudo[1722]: pam_unix(sudo:session): session closed for user root Feb 13 22:24:18.759954 sshd[1721]: Connection closed by 147.75.109.163 port 45430 Feb 13 22:24:18.761609 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Feb 13 22:24:18.768840 systemd[1]: sshd@6-10.230.12.194:22-147.75.109.163:45430.service: Deactivated successfully. Feb 13 22:24:18.769687 systemd-logind[1489]: Session 9 logged out. Waiting for processes to exit. Feb 13 22:24:18.772900 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 13 22:24:18.773407 systemd[1]: session-9.scope: Consumed 6.935s CPU time, 146.6M memory peak, 0B memory swap peak. Feb 13 22:24:18.775642 systemd-logind[1489]: Removed session 9. Feb 13 22:24:22.068963 kubelet[2681]: I0213 22:24:22.068719 2681 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 22:24:22.072325 kubelet[2681]: I0213 22:24:22.071994 2681 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 22:24:22.072636 containerd[1505]: time="2025-02-13T22:24:22.070826145Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 22:24:23.014825 systemd[1]: Created slice kubepods-besteffort-pod3d6ebb05_8579_40b1_8f18_32dbafef03c4.slice - libcontainer container kubepods-besteffort-pod3d6ebb05_8579_40b1_8f18_32dbafef03c4.slice. Feb 13 22:24:23.018504 kubelet[2681]: I0213 22:24:23.017007 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn5mm\" (UniqueName: \"kubernetes.io/projected/3d6ebb05-8579-40b1-8f18-32dbafef03c4-kube-api-access-pn5mm\") pod \"kube-proxy-zsttr\" (UID: \"3d6ebb05-8579-40b1-8f18-32dbafef03c4\") " pod="kube-system/kube-proxy-zsttr" Feb 13 22:24:23.018504 kubelet[2681]: I0213 22:24:23.017095 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d6ebb05-8579-40b1-8f18-32dbafef03c4-kube-proxy\") pod \"kube-proxy-zsttr\" (UID: \"3d6ebb05-8579-40b1-8f18-32dbafef03c4\") " pod="kube-system/kube-proxy-zsttr" Feb 13 22:24:23.018504 kubelet[2681]: I0213 22:24:23.017126 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d6ebb05-8579-40b1-8f18-32dbafef03c4-xtables-lock\") pod \"kube-proxy-zsttr\" (UID: \"3d6ebb05-8579-40b1-8f18-32dbafef03c4\") " pod="kube-system/kube-proxy-zsttr" Feb 13 22:24:23.018504 kubelet[2681]: I0213 22:24:23.017173 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d6ebb05-8579-40b1-8f18-32dbafef03c4-lib-modules\") pod \"kube-proxy-zsttr\" (UID: \"3d6ebb05-8579-40b1-8f18-32dbafef03c4\") " pod="kube-system/kube-proxy-zsttr" Feb 13 22:24:23.048616 systemd[1]: Created slice kubepods-burstable-pod5db2147f_afaa_4431_aad6_c988f9d3bc78.slice - libcontainer container kubepods-burstable-pod5db2147f_afaa_4431_aad6_c988f9d3bc78.slice. 
Feb 13 22:24:23.118302 kubelet[2681]: I0213 22:24:23.118223 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5db2147f-afaa-4431-aad6-c988f9d3bc78-run\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.119577 kubelet[2681]: I0213 22:24:23.119274 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxj2p\" (UniqueName: \"kubernetes.io/projected/5db2147f-afaa-4431-aad6-c988f9d3bc78-kube-api-access-vxj2p\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.119577 kubelet[2681]: I0213 22:24:23.119351 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5db2147f-afaa-4431-aad6-c988f9d3bc78-cni-plugin\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.119577 kubelet[2681]: I0213 22:24:23.119436 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5db2147f-afaa-4431-aad6-c988f9d3bc78-cni\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.119577 kubelet[2681]: I0213 22:24:23.119516 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5db2147f-afaa-4431-aad6-c988f9d3bc78-flannel-cfg\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.119577 kubelet[2681]: I0213 22:24:23.119547 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5db2147f-afaa-4431-aad6-c988f9d3bc78-xtables-lock\") pod \"kube-flannel-ds-dn8hj\" (UID: \"5db2147f-afaa-4431-aad6-c988f9d3bc78\") " pod="kube-flannel/kube-flannel-ds-dn8hj" Feb 13 22:24:23.332709 containerd[1505]: time="2025-02-13T22:24:23.330953064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsttr,Uid:3d6ebb05-8579-40b1-8f18-32dbafef03c4,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:23.355554 containerd[1505]: time="2025-02-13T22:24:23.355512532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dn8hj,Uid:5db2147f-afaa-4431-aad6-c988f9d3bc78,Namespace:kube-flannel,Attempt:0,}" Feb 13 22:24:23.385849 containerd[1505]: time="2025-02-13T22:24:23.385352407Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:23.385849 containerd[1505]: time="2025-02-13T22:24:23.385697194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:23.385849 containerd[1505]: time="2025-02-13T22:24:23.385730669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:23.387726 containerd[1505]: time="2025-02-13T22:24:23.387631318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:23.401725 containerd[1505]: time="2025-02-13T22:24:23.401290471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:23.401725 containerd[1505]: time="2025-02-13T22:24:23.401378361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:23.401725 containerd[1505]: time="2025-02-13T22:24:23.401431479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:23.401725 containerd[1505]: time="2025-02-13T22:24:23.401593593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:23.427615 systemd[1]: Started cri-containerd-bfa488e8b2716a17d62553796fae77e291e6b1182b85ffa980f60ac59ee30af2.scope - libcontainer container bfa488e8b2716a17d62553796fae77e291e6b1182b85ffa980f60ac59ee30af2. Feb 13 22:24:23.439559 systemd[1]: Started cri-containerd-fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1.scope - libcontainer container fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1. Feb 13 22:24:23.492992 containerd[1505]: time="2025-02-13T22:24:23.492388249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zsttr,Uid:3d6ebb05-8579-40b1-8f18-32dbafef03c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfa488e8b2716a17d62553796fae77e291e6b1182b85ffa980f60ac59ee30af2\"" Feb 13 22:24:23.502923 containerd[1505]: time="2025-02-13T22:24:23.502880825Z" level=info msg="CreateContainer within sandbox \"bfa488e8b2716a17d62553796fae77e291e6b1182b85ffa980f60ac59ee30af2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 22:24:23.527128 containerd[1505]: time="2025-02-13T22:24:23.527027548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-dn8hj,Uid:5db2147f-afaa-4431-aad6-c988f9d3bc78,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\"" Feb 13 22:24:23.533300 containerd[1505]: time="2025-02-13T22:24:23.532650263Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 22:24:23.541251 containerd[1505]: time="2025-02-13T22:24:23.541215287Z" level=info msg="CreateContainer within sandbox \"bfa488e8b2716a17d62553796fae77e291e6b1182b85ffa980f60ac59ee30af2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8904b44f95ddf9afef4f5e8136780614ff204aacf0013104a3f295b0db19e0d0\"" Feb 13 22:24:23.543416 containerd[1505]: time="2025-02-13T22:24:23.542409757Z" level=info msg="StartContainer for \"8904b44f95ddf9afef4f5e8136780614ff204aacf0013104a3f295b0db19e0d0\"" Feb 13 22:24:23.583598 systemd[1]: Started cri-containerd-8904b44f95ddf9afef4f5e8136780614ff204aacf0013104a3f295b0db19e0d0.scope - libcontainer container 8904b44f95ddf9afef4f5e8136780614ff204aacf0013104a3f295b0db19e0d0. 
Feb 13 22:24:23.629690 containerd[1505]: time="2025-02-13T22:24:23.629635505Z" level=info msg="StartContainer for \"8904b44f95ddf9afef4f5e8136780614ff204aacf0013104a3f295b0db19e0d0\" returns successfully" Feb 13 22:24:24.200298 kubelet[2681]: I0213 22:24:24.199864 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zsttr" podStartSLOduration=2.199775296 podStartE2EDuration="2.199775296s" podCreationTimestamp="2025-02-13 22:24:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:24.199621023 +0000 UTC m=+7.282832577" watchObservedRunningTime="2025-02-13 22:24:24.199775296 +0000 UTC m=+7.282986845" Feb 13 22:24:26.430156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount630587841.mount: Deactivated successfully. Feb 13 22:24:26.482518 containerd[1505]: time="2025-02-13T22:24:26.482437703Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:26.486167 containerd[1505]: time="2025-02-13T22:24:26.486073323Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Feb 13 22:24:26.490457 containerd[1505]: time="2025-02-13T22:24:26.489947837Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:26.495022 containerd[1505]: time="2025-02-13T22:24:26.494985560Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:26.500579 containerd[1505]: time="2025-02-13T22:24:26.500540831Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.96779404s" Feb 13 22:24:26.500725 containerd[1505]: time="2025-02-13T22:24:26.500604733Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 13 22:24:26.505851 containerd[1505]: time="2025-02-13T22:24:26.505305655Z" level=info msg="CreateContainer within sandbox \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 22:24:26.535848 containerd[1505]: time="2025-02-13T22:24:26.535797633Z" level=info msg="CreateContainer within sandbox \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790\"" Feb 13 22:24:26.537293 containerd[1505]: time="2025-02-13T22:24:26.536294432Z" level=info msg="StartContainer for \"8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790\"" Feb 13 22:24:26.593684 systemd[1]: Started cri-containerd-8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790.scope - libcontainer container 
8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790. Feb 13 22:24:26.645063 containerd[1505]: time="2025-02-13T22:24:26.644993534Z" level=info msg="StartContainer for \"8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790\" returns successfully" Feb 13 22:24:26.647215 systemd[1]: cri-containerd-8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790.scope: Deactivated successfully. Feb 13 22:24:26.702480 containerd[1505]: time="2025-02-13T22:24:26.702149813Z" level=info msg="shim disconnected" id=8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790 namespace=k8s.io Feb 13 22:24:26.702480 containerd[1505]: time="2025-02-13T22:24:26.702349400Z" level=warning msg="cleaning up after shim disconnected" id=8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790 namespace=k8s.io Feb 13 22:24:26.702480 containerd[1505]: time="2025-02-13T22:24:26.702417220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:24:26.727908 containerd[1505]: time="2025-02-13T22:24:26.727828059Z" level=warning msg="cleanup warnings time=\"2025-02-13T22:24:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Feb 13 22:24:27.199033 containerd[1505]: time="2025-02-13T22:24:27.198945979Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 22:24:27.274447 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8886c0fd0e7ec27c067c5bfeba034cf8959ff387912d24c98f1157806920c790-rootfs.mount: Deactivated successfully. Feb 13 22:24:30.044157 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3032781539.mount: Deactivated successfully. Feb 13 22:24:32.413172 containerd[1505]: time="2025-02-13T22:24:32.413043694Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:32.415852 containerd[1505]: time="2025-02-13T22:24:32.415784208Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Feb 13 22:24:32.417146 containerd[1505]: time="2025-02-13T22:24:32.417054076Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:32.425782 containerd[1505]: time="2025-02-13T22:24:32.425635943Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 22:24:32.427786 containerd[1505]: time="2025-02-13T22:24:32.427531522Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 5.228274435s" Feb 13 22:24:32.427786 containerd[1505]: time="2025-02-13T22:24:32.427599869Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 13 22:24:32.433324 containerd[1505]: time="2025-02-13T22:24:32.433279211Z" level=info msg="CreateContainer within sandbox 
\"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 22:24:32.451947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1997338625.mount: Deactivated successfully. Feb 13 22:24:32.465488 containerd[1505]: time="2025-02-13T22:24:32.465427571Z" level=info msg="CreateContainer within sandbox \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf\"" Feb 13 22:24:32.466648 containerd[1505]: time="2025-02-13T22:24:32.466599610Z" level=info msg="StartContainer for \"cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf\"" Feb 13 22:24:32.529125 systemd[1]: Started cri-containerd-cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf.scope - libcontainer container cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf. Feb 13 22:24:32.569989 systemd[1]: cri-containerd-cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf.scope: Deactivated successfully. Feb 13 22:24:32.573663 containerd[1505]: time="2025-02-13T22:24:32.572809173Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db2147f_afaa_4431_aad6_c988f9d3bc78.slice/cri-containerd-cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf.scope/memory.events\": no such file or directory" Feb 13 22:24:32.575679 containerd[1505]: time="2025-02-13T22:24:32.575644759Z" level=info msg="StartContainer for \"cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf\" returns successfully" Feb 13 22:24:32.636991 kubelet[2681]: I0213 22:24:32.635693 2681 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 22:24:32.691413 systemd[1]: Created slice kubepods-burstable-podce57ef81_1977_4bdc_a85e_463e69292e1d.slice - libcontainer container kubepods-burstable-podce57ef81_1977_4bdc_a85e_463e69292e1d.slice. Feb 13 22:24:32.742343 containerd[1505]: time="2025-02-13T22:24:32.741828882Z" level=info msg="shim disconnected" id=cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf namespace=k8s.io Feb 13 22:24:32.742343 containerd[1505]: time="2025-02-13T22:24:32.742204833Z" level=warning msg="cleaning up after shim disconnected" id=cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf namespace=k8s.io Feb 13 22:24:32.742343 containerd[1505]: time="2025-02-13T22:24:32.742316369Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 22:24:32.752851 systemd[1]: Created slice kubepods-burstable-pod9fcca9c2_6c18_4494_a09f_a75100e342d4.slice - libcontainer container kubepods-burstable-pod9fcca9c2_6c18_4494_a09f_a75100e342d4.slice. 
Feb 13 22:24:32.785718 kubelet[2681]: I0213 22:24:32.785647 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9srq2\" (UniqueName: \"kubernetes.io/projected/9fcca9c2-6c18-4494-a09f-a75100e342d4-kube-api-access-9srq2\") pod \"coredns-6f6b679f8f-7bwtg\" (UID: \"9fcca9c2-6c18-4494-a09f-a75100e342d4\") " pod="kube-system/coredns-6f6b679f8f-7bwtg" Feb 13 22:24:32.786199 kubelet[2681]: I0213 22:24:32.786035 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce57ef81-1977-4bdc-a85e-463e69292e1d-config-volume\") pod \"coredns-6f6b679f8f-xf86r\" (UID: \"ce57ef81-1977-4bdc-a85e-463e69292e1d\") " pod="kube-system/coredns-6f6b679f8f-xf86r" Feb 13 22:24:32.786199 kubelet[2681]: I0213 22:24:32.786097 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hb68\" (UniqueName: \"kubernetes.io/projected/ce57ef81-1977-4bdc-a85e-463e69292e1d-kube-api-access-9hb68\") pod \"coredns-6f6b679f8f-xf86r\" (UID: \"ce57ef81-1977-4bdc-a85e-463e69292e1d\") " pod="kube-system/coredns-6f6b679f8f-xf86r" Feb 13 22:24:32.786199 kubelet[2681]: I0213 22:24:32.786153 2681 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9fcca9c2-6c18-4494-a09f-a75100e342d4-config-volume\") pod \"coredns-6f6b679f8f-7bwtg\" (UID: \"9fcca9c2-6c18-4494-a09f-a75100e342d4\") " pod="kube-system/coredns-6f6b679f8f-7bwtg" Feb 13 22:24:33.037238 containerd[1505]: time="2025-02-13T22:24:33.037175720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xf86r,Uid:ce57ef81-1977-4bdc-a85e-463e69292e1d,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:33.063257 containerd[1505]: time="2025-02-13T22:24:33.062752844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7bwtg,Uid:9fcca9c2-6c18-4494-a09f-a75100e342d4,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:33.092369 containerd[1505]: time="2025-02-13T22:24:33.091838007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xf86r,Uid:ce57ef81-1977-4bdc-a85e-463e69292e1d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3b71018c5b15b4535ec75b664e2d30bbf0b25c5735690becb078fac08a93f784\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 22:24:33.092681 kubelet[2681]: E0213 22:24:33.092200 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b71018c5b15b4535ec75b664e2d30bbf0b25c5735690becb078fac08a93f784\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 22:24:33.092681 kubelet[2681]: E0213 22:24:33.092334 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b71018c5b15b4535ec75b664e2d30bbf0b25c5735690becb078fac08a93f784\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xf86r" Feb 13 22:24:33.092681 kubelet[2681]: E0213 22:24:33.092413 2681 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3b71018c5b15b4535ec75b664e2d30bbf0b25c5735690becb078fac08a93f784\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-xf86r" Feb 13 22:24:33.092681 kubelet[2681]: E0213 22:24:33.092506 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-xf86r_kube-system(ce57ef81-1977-4bdc-a85e-463e69292e1d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-xf86r_kube-system(ce57ef81-1977-4bdc-a85e-463e69292e1d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b71018c5b15b4535ec75b664e2d30bbf0b25c5735690becb078fac08a93f784\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-xf86r" podUID="ce57ef81-1977-4bdc-a85e-463e69292e1d" Feb 13 22:24:33.097388 containerd[1505]: time="2025-02-13T22:24:33.097326112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7bwtg,Uid:9fcca9c2-6c18-4494-a09f-a75100e342d4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a8405330c2dc7190e734bda9e70a10461c3a2080723b521e0b35b0ad46e2d3a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 22:24:33.098106 kubelet[2681]: E0213 22:24:33.097829 2681 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8405330c2dc7190e734bda9e70a10461c3a2080723b521e0b35b0ad46e2d3a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 22:24:33.098106 kubelet[2681]: E0213 22:24:33.097915 2681 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8405330c2dc7190e734bda9e70a10461c3a2080723b521e0b35b0ad46e2d3a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7bwtg" Feb 13 22:24:33.098106 kubelet[2681]: E0213 22:24:33.097942 2681 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a8405330c2dc7190e734bda9e70a10461c3a2080723b521e0b35b0ad46e2d3a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-6f6b679f8f-7bwtg" Feb 13 22:24:33.098106 kubelet[2681]: E0213 22:24:33.098007 2681 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-7bwtg_kube-system(9fcca9c2-6c18-4494-a09f-a75100e342d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-7bwtg_kube-system(9fcca9c2-6c18-4494-a09f-a75100e342d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a8405330c2dc7190e734bda9e70a10461c3a2080723b521e0b35b0ad46e2d3a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-6f6b679f8f-7bwtg" podUID="9fcca9c2-6c18-4494-a09f-a75100e342d4" Feb 13 
22:24:33.218739 containerd[1505]: time="2025-02-13T22:24:33.218598967Z" level=info msg="CreateContainer within sandbox \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 22:24:33.237709 containerd[1505]: time="2025-02-13T22:24:33.237411705Z" level=info msg="CreateContainer within sandbox \"fd9612b4d7133ada589ac36b5be94f4445aeafb8aa7c9660f9bd372088f9f6a1\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"cecb7230d76041e63c07eb06a804c5eedc755138024629d20b138546b33eefbb\"" Feb 13 22:24:33.241415 containerd[1505]: time="2025-02-13T22:24:33.239161626Z" level=info msg="StartContainer for \"cecb7230d76041e63c07eb06a804c5eedc755138024629d20b138546b33eefbb\"" Feb 13 22:24:33.308713 systemd[1]: Started cri-containerd-cecb7230d76041e63c07eb06a804c5eedc755138024629d20b138546b33eefbb.scope - libcontainer container cecb7230d76041e63c07eb06a804c5eedc755138024629d20b138546b33eefbb. Feb 13 22:24:33.354349 containerd[1505]: time="2025-02-13T22:24:33.354269206Z" level=info msg="StartContainer for \"cecb7230d76041e63c07eb06a804c5eedc755138024629d20b138546b33eefbb\" returns successfully" Feb 13 22:24:33.451941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb2683826ecea74a8ed484b5a8a1940c93317c6a7af5871d3bd1d24fdd5e8acf-rootfs.mount: Deactivated successfully. Feb 13 22:24:34.238672 kubelet[2681]: I0213 22:24:34.238551 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-dn8hj" podStartSLOduration=3.339640889 podStartE2EDuration="12.238508476s" podCreationTimestamp="2025-02-13 22:24:22 +0000 UTC" firstStartedPulling="2025-02-13 22:24:23.53029747 +0000 UTC m=+6.613509013" lastFinishedPulling="2025-02-13 22:24:32.429165056 +0000 UTC m=+15.512376600" observedRunningTime="2025-02-13 22:24:34.236921322 +0000 UTC m=+17.320132882" watchObservedRunningTime="2025-02-13 22:24:34.238508476 +0000 UTC m=+17.321720024" Feb 13 22:24:34.452873 systemd-networkd[1419]: flannel.1: Link UP Feb 13 22:24:34.452888 systemd-networkd[1419]: flannel.1: Gained carrier Feb 13 22:24:36.259754 systemd-networkd[1419]: flannel.1: Gained IPv6LL Feb 13 22:24:46.109094 containerd[1505]: time="2025-02-13T22:24:46.108797696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xf86r,Uid:ce57ef81-1977-4bdc-a85e-463e69292e1d,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:46.109094 containerd[1505]: time="2025-02-13T22:24:46.108797854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7bwtg,Uid:9fcca9c2-6c18-4494-a09f-a75100e342d4,Namespace:kube-system,Attempt:0,}" Feb 13 22:24:46.162505 systemd-networkd[1419]: cni0: Link UP Feb 13 22:24:46.167783 systemd-networkd[1419]: cni0: Gained carrier Feb 13 22:24:46.175494 systemd-networkd[1419]: cni0: Lost carrier Feb 13 22:24:46.184410 systemd-networkd[1419]: vethf643884e: Link UP Feb 13 22:24:46.190494 kernel: cni0: port 1(vethf643884e) entered blocking state Feb 13 22:24:46.190649 kernel: cni0: port 1(vethf643884e) entered disabled state Feb 13 22:24:46.194421 kernel: vethf643884e: entered allmulticast mode Feb 13 22:24:46.196486 kernel: vethf643884e: entered promiscuous mode Feb 13 22:24:46.201924 kernel: cni0: port 1(vethf643884e) entered blocking state Feb 13 22:24:46.201994 kernel: cni0: port 1(vethf643884e) entered forwarding state Feb 13 22:24:46.202040 kernel: cni0: port 1(vethf643884e) entered disabled state Feb 13 22:24:46.200880 systemd-networkd[1419]: 
vethbf3d7c62: Link UP Feb 13 22:24:46.208834 kernel: cni0: port 2(vethbf3d7c62) entered blocking state Feb 13 22:24:46.208978 kernel: cni0: port 2(vethbf3d7c62) entered disabled state Feb 13 22:24:46.218445 kernel: vethbf3d7c62: entered allmulticast mode Feb 13 22:24:46.220463 kernel: vethbf3d7c62: entered promiscuous mode Feb 13 22:24:46.224540 kernel: cni0: port 2(vethbf3d7c62) entered blocking state Feb 13 22:24:46.224611 kernel: cni0: port 2(vethbf3d7c62) entered forwarding state Feb 13 22:24:46.227482 kernel: cni0: port 2(vethbf3d7c62) entered disabled state Feb 13 22:24:46.236612 kernel: cni0: port 1(vethf643884e) entered blocking state Feb 13 22:24:46.236697 kernel: cni0: port 1(vethf643884e) entered forwarding state Feb 13 22:24:46.239355 systemd-networkd[1419]: vethf643884e: Gained carrier Feb 13 22:24:46.239946 systemd-networkd[1419]: cni0: Gained carrier Feb 13 22:24:46.249136 kernel: cni0: port 2(vethbf3d7c62) entered blocking state Feb 13 22:24:46.249732 kernel: cni0: port 2(vethbf3d7c62) entered forwarding state Feb 13 22:24:46.249288 systemd-networkd[1419]: vethbf3d7c62: Gained carrier Feb 13 22:24:46.252586 containerd[1505]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a48e8), "name":"cbr0", "type":"bridge"} Feb 13 22:24:46.252586 containerd[1505]: delegateAdd: netconf sent to delegate plugin: Feb 13 22:24:46.261646 containerd[1505]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 13 22:24:46.261646 containerd[1505]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 13 22:24:46.261646 containerd[1505]: delegateAdd: netconf sent to delegate plugin: Feb 13 22:24:46.327820 containerd[1505]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T22:24:46.327639041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:46.327820 containerd[1505]: time="2025-02-13T22:24:46.327779020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:46.328571 containerd[1505]: time="2025-02-13T22:24:46.328387655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 22:24:46.328571 containerd[1505]: time="2025-02-13T22:24:46.328523499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 22:24:46.328571 containerd[1505]: time="2025-02-13T22:24:46.328545236Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:46.329099 containerd[1505]: time="2025-02-13T22:24:46.328826453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:46.329099 containerd[1505]: time="2025-02-13T22:24:46.329017980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:46.329523 containerd[1505]: time="2025-02-13T22:24:46.329373293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 22:24:46.373653 systemd[1]: Started cri-containerd-c78513d32ee860c90be8d951ab7365774d20b13d4c0936e088c639e1ef7991db.scope - libcontainer container c78513d32ee860c90be8d951ab7365774d20b13d4c0936e088c639e1ef7991db. Feb 13 22:24:46.386543 systemd[1]: Started cri-containerd-4a54292842153bbd2c768a987a64b749c31feb2a61f7ba2dade56e947acf6638.scope - libcontainer container 4a54292842153bbd2c768a987a64b749c31feb2a61f7ba2dade56e947acf6638. Feb 13 22:24:46.478102 containerd[1505]: time="2025-02-13T22:24:46.477893679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-xf86r,Uid:ce57ef81-1977-4bdc-a85e-463e69292e1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a54292842153bbd2c768a987a64b749c31feb2a61f7ba2dade56e947acf6638\"" Feb 13 22:24:46.486667 containerd[1505]: time="2025-02-13T22:24:46.486521842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-7bwtg,Uid:9fcca9c2-6c18-4494-a09f-a75100e342d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c78513d32ee860c90be8d951ab7365774d20b13d4c0936e088c639e1ef7991db\"" Feb 13 22:24:46.487358 containerd[1505]: time="2025-02-13T22:24:46.487083077Z" level=info msg="CreateContainer within sandbox \"4a54292842153bbd2c768a987a64b749c31feb2a61f7ba2dade56e947acf6638\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 22:24:46.493711 containerd[1505]: time="2025-02-13T22:24:46.493665253Z" level=info msg="CreateContainer within sandbox \"c78513d32ee860c90be8d951ab7365774d20b13d4c0936e088c639e1ef7991db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 22:24:46.509991 containerd[1505]: time="2025-02-13T22:24:46.509944572Z" level=info msg="CreateContainer within sandbox \"4a54292842153bbd2c768a987a64b749c31feb2a61f7ba2dade56e947acf6638\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"21c8b6835e22afcb0a760167449b4651385383c3a47416c4b4b76f90e0746bd1\"" Feb 13 22:24:46.510801 containerd[1505]: time="2025-02-13T22:24:46.510764236Z" level=info msg="StartContainer for \"21c8b6835e22afcb0a760167449b4651385383c3a47416c4b4b76f90e0746bd1\"" Feb 13 22:24:46.523013 containerd[1505]: time="2025-02-13T22:24:46.522963659Z" level=info msg="CreateContainer within sandbox \"c78513d32ee860c90be8d951ab7365774d20b13d4c0936e088c639e1ef7991db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1994f8eae6c0b5e7dc0b2e3c029abe4db1ec2c4980c519ace933ddaa1f9a50d4\"" Feb 13 
22:24:46.526493 containerd[1505]: time="2025-02-13T22:24:46.524773453Z" level=info msg="StartContainer for \"1994f8eae6c0b5e7dc0b2e3c029abe4db1ec2c4980c519ace933ddaa1f9a50d4\"" Feb 13 22:24:46.574319 systemd[1]: Started cri-containerd-21c8b6835e22afcb0a760167449b4651385383c3a47416c4b4b76f90e0746bd1.scope - libcontainer container 21c8b6835e22afcb0a760167449b4651385383c3a47416c4b4b76f90e0746bd1. Feb 13 22:24:46.593759 systemd[1]: Started cri-containerd-1994f8eae6c0b5e7dc0b2e3c029abe4db1ec2c4980c519ace933ddaa1f9a50d4.scope - libcontainer container 1994f8eae6c0b5e7dc0b2e3c029abe4db1ec2c4980c519ace933ddaa1f9a50d4. Feb 13 22:24:46.644451 containerd[1505]: time="2025-02-13T22:24:46.643471779Z" level=info msg="StartContainer for \"21c8b6835e22afcb0a760167449b4651385383c3a47416c4b4b76f90e0746bd1\" returns successfully" Feb 13 22:24:46.668692 containerd[1505]: time="2025-02-13T22:24:46.668628478Z" level=info msg="StartContainer for \"1994f8eae6c0b5e7dc0b2e3c029abe4db1ec2c4980c519ace933ddaa1f9a50d4\" returns successfully" Feb 13 22:24:47.282292 kubelet[2681]: I0213 22:24:47.281775 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-xf86r" podStartSLOduration=24.281698281 podStartE2EDuration="24.281698281s" podCreationTimestamp="2025-02-13 22:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:47.280739147 +0000 UTC m=+30.363950705" watchObservedRunningTime="2025-02-13 22:24:47.281698281 +0000 UTC m=+30.364909838" Feb 13 22:24:47.362131 kubelet[2681]: I0213 22:24:47.361431 2681 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-7bwtg" podStartSLOduration=24.361061928 podStartE2EDuration="24.361061928s" podCreationTimestamp="2025-02-13 22:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 22:24:47.359571139 +0000 UTC m=+30.442782697" watchObservedRunningTime="2025-02-13 22:24:47.361061928 +0000 UTC m=+30.444273464" Feb 13 22:24:47.523646 systemd-networkd[1419]: vethbf3d7c62: Gained IPv6LL Feb 13 22:24:47.587702 systemd-networkd[1419]: vethf643884e: Gained IPv6LL Feb 13 22:24:48.035623 systemd-networkd[1419]: cni0: Gained IPv6LL Feb 13 22:25:22.620785 systemd[1]: Started sshd@7-10.230.12.194:22-147.75.109.163:39690.service - OpenSSH per-connection server daemon (147.75.109.163:39690). Feb 13 22:25:23.556617 sshd[3736]: Accepted publickey for core from 147.75.109.163 port 39690 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:23.558907 sshd-session[3736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:23.567141 systemd-logind[1489]: New session 10 of user core. Feb 13 22:25:23.575689 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 22:25:24.311003 sshd[3738]: Connection closed by 147.75.109.163 port 39690 Feb 13 22:25:24.312127 sshd-session[3736]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:24.318413 systemd[1]: sshd@7-10.230.12.194:22-147.75.109.163:39690.service: Deactivated successfully. Feb 13 22:25:24.322086 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 22:25:24.323859 systemd-logind[1489]: Session 10 logged out. Waiting for processes to exit. Feb 13 22:25:24.325305 systemd-logind[1489]: Removed session 10. 
Feb 13 22:25:29.478001 systemd[1]: Started sshd@8-10.230.12.194:22-147.75.109.163:39706.service - OpenSSH per-connection server daemon (147.75.109.163:39706). Feb 13 22:25:30.389920 sshd[3773]: Accepted publickey for core from 147.75.109.163 port 39706 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:30.391983 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:30.400185 systemd-logind[1489]: New session 11 of user core. Feb 13 22:25:30.405627 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 22:25:31.117303 sshd[3796]: Connection closed by 147.75.109.163 port 39706 Feb 13 22:25:31.119553 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:31.125755 systemd[1]: sshd@8-10.230.12.194:22-147.75.109.163:39706.service: Deactivated successfully. Feb 13 22:25:31.130350 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 22:25:31.131753 systemd-logind[1489]: Session 11 logged out. Waiting for processes to exit. Feb 13 22:25:31.133823 systemd-logind[1489]: Removed session 11. Feb 13 22:25:36.278893 systemd[1]: Started sshd@9-10.230.12.194:22-147.75.109.163:43426.service - OpenSSH per-connection server daemon (147.75.109.163:43426). Feb 13 22:25:37.197188 sshd[3828]: Accepted publickey for core from 147.75.109.163 port 43426 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:37.200582 sshd-session[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:37.208715 systemd-logind[1489]: New session 12 of user core. Feb 13 22:25:37.216725 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 22:25:37.912574 sshd[3830]: Connection closed by 147.75.109.163 port 43426 Feb 13 22:25:37.912928 sshd-session[3828]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:37.920407 systemd[1]: sshd@9-10.230.12.194:22-147.75.109.163:43426.service: Deactivated successfully. Feb 13 22:25:37.923082 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 22:25:37.924161 systemd-logind[1489]: Session 12 logged out. Waiting for processes to exit. Feb 13 22:25:37.926173 systemd-logind[1489]: Removed session 12. Feb 13 22:25:38.076803 systemd[1]: Started sshd@10-10.230.12.194:22-147.75.109.163:43440.service - OpenSSH per-connection server daemon (147.75.109.163:43440). Feb 13 22:25:39.003672 sshd[3842]: Accepted publickey for core from 147.75.109.163 port 43440 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:39.005978 sshd-session[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:39.014689 systemd-logind[1489]: New session 13 of user core. Feb 13 22:25:39.021703 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 22:25:39.812476 sshd[3844]: Connection closed by 147.75.109.163 port 43440 Feb 13 22:25:39.813940 sshd-session[3842]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:39.821491 systemd[1]: sshd@10-10.230.12.194:22-147.75.109.163:43440.service: Deactivated successfully. Feb 13 22:25:39.824557 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 22:25:39.825827 systemd-logind[1489]: Session 13 logged out. Waiting for processes to exit. Feb 13 22:25:39.827673 systemd-logind[1489]: Removed session 13. Feb 13 22:25:39.970858 systemd[1]: Started sshd@11-10.230.12.194:22-147.75.109.163:48788.service - OpenSSH per-connection server daemon (147.75.109.163:48788). 
Feb 13 22:25:40.872840 sshd[3869]: Accepted publickey for core from 147.75.109.163 port 48788 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:40.875675 sshd-session[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:40.882308 systemd-logind[1489]: New session 14 of user core. Feb 13 22:25:40.888586 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 22:25:41.582815 sshd[3876]: Connection closed by 147.75.109.163 port 48788 Feb 13 22:25:41.583443 sshd-session[3869]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:41.589097 systemd[1]: sshd@11-10.230.12.194:22-147.75.109.163:48788.service: Deactivated successfully. Feb 13 22:25:41.591612 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 22:25:41.593049 systemd-logind[1489]: Session 14 logged out. Waiting for processes to exit. Feb 13 22:25:41.594963 systemd-logind[1489]: Removed session 14. Feb 13 22:25:46.743942 systemd[1]: Started sshd@12-10.230.12.194:22-147.75.109.163:48804.service - OpenSSH per-connection server daemon (147.75.109.163:48804). Feb 13 22:25:47.637781 sshd[3908]: Accepted publickey for core from 147.75.109.163 port 48804 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:47.640488 sshd-session[3908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:47.648526 systemd-logind[1489]: New session 15 of user core. Feb 13 22:25:47.653626 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 22:25:48.349384 sshd[3910]: Connection closed by 147.75.109.163 port 48804 Feb 13 22:25:48.348493 sshd-session[3908]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:48.352409 systemd-logind[1489]: Session 15 logged out. Waiting for processes to exit. Feb 13 22:25:48.353414 systemd[1]: sshd@12-10.230.12.194:22-147.75.109.163:48804.service: Deactivated successfully. Feb 13 22:25:48.355907 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 22:25:48.357979 systemd-logind[1489]: Removed session 15. Feb 13 22:25:48.513877 systemd[1]: Started sshd@13-10.230.12.194:22-147.75.109.163:48806.service - OpenSSH per-connection server daemon (147.75.109.163:48806). Feb 13 22:25:49.412639 sshd[3921]: Accepted publickey for core from 147.75.109.163 port 48806 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I Feb 13 22:25:49.415259 sshd-session[3921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 22:25:49.423852 systemd-logind[1489]: New session 16 of user core. Feb 13 22:25:49.432650 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 22:25:50.431051 sshd[3923]: Connection closed by 147.75.109.163 port 48806 Feb 13 22:25:50.432099 sshd-session[3921]: pam_unix(sshd:session): session closed for user core Feb 13 22:25:50.437715 systemd[1]: sshd@13-10.230.12.194:22-147.75.109.163:48806.service: Deactivated successfully. Feb 13 22:25:50.440646 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 22:25:50.442314 systemd-logind[1489]: Session 16 logged out. Waiting for processes to exit. Feb 13 22:25:50.443940 systemd-logind[1489]: Removed session 16. Feb 13 22:25:50.590777 systemd[1]: Started sshd@14-10.230.12.194:22-147.75.109.163:60678.service - OpenSSH per-connection server daemon (147.75.109.163:60678). 
Feb 13 22:25:51.501835 sshd[3953]: Accepted publickey for core from 147.75.109.163 port 60678 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:25:51.504318 sshd-session[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:25:51.515412 systemd-logind[1489]: New session 17 of user core.
Feb 13 22:25:51.520636 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 22:25:54.189730 sshd[3955]: Connection closed by 147.75.109.163 port 60678
Feb 13 22:25:54.191632 sshd-session[3953]: pam_unix(sshd:session): session closed for user core
Feb 13 22:25:54.199054 systemd[1]: sshd@14-10.230.12.194:22-147.75.109.163:60678.service: Deactivated successfully.
Feb 13 22:25:54.202345 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 22:25:54.203921 systemd-logind[1489]: Session 17 logged out. Waiting for processes to exit.
Feb 13 22:25:54.206744 systemd-logind[1489]: Removed session 17.
Feb 13 22:25:54.358787 systemd[1]: Started sshd@15-10.230.12.194:22-147.75.109.163:60682.service - OpenSSH per-connection server daemon (147.75.109.163:60682).
Feb 13 22:25:55.279037 sshd[3977]: Accepted publickey for core from 147.75.109.163 port 60682 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:25:55.281172 sshd-session[3977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:25:55.289724 systemd-logind[1489]: New session 18 of user core.
Feb 13 22:25:55.301659 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 22:25:56.197956 sshd[4000]: Connection closed by 147.75.109.163 port 60682
Feb 13 22:25:56.198524 sshd-session[3977]: pam_unix(sshd:session): session closed for user core
Feb 13 22:25:56.203766 systemd[1]: sshd@15-10.230.12.194:22-147.75.109.163:60682.service: Deactivated successfully.
Feb 13 22:25:56.207232 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 22:25:56.208968 systemd-logind[1489]: Session 18 logged out. Waiting for processes to exit.
Feb 13 22:25:56.210256 systemd-logind[1489]: Removed session 18.
Feb 13 22:25:56.329723 systemd[1]: Started sshd@16-10.230.12.194:22-147.75.109.163:60698.service - OpenSSH per-connection server daemon (147.75.109.163:60698).
Feb 13 22:25:57.231555 sshd[4009]: Accepted publickey for core from 147.75.109.163 port 60698 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:25:57.233902 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:25:57.242448 systemd-logind[1489]: New session 19 of user core.
Feb 13 22:25:57.250628 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 22:25:57.938823 sshd[4011]: Connection closed by 147.75.109.163 port 60698
Feb 13 22:25:57.940024 sshd-session[4009]: pam_unix(sshd:session): session closed for user core
Feb 13 22:25:57.945117 systemd-logind[1489]: Session 19 logged out. Waiting for processes to exit.
Feb 13 22:25:57.945751 systemd[1]: sshd@16-10.230.12.194:22-147.75.109.163:60698.service: Deactivated successfully.
Feb 13 22:25:57.948163 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 22:25:57.949689 systemd-logind[1489]: Removed session 19.
Feb 13 22:26:03.097546 systemd[1]: Started sshd@17-10.230.12.194:22-147.75.109.163:59808.service - OpenSSH per-connection server daemon (147.75.109.163:59808).
Feb 13 22:26:04.018914 sshd[4046]: Accepted publickey for core from 147.75.109.163 port 59808 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:26:04.021067 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:26:04.027802 systemd-logind[1489]: New session 20 of user core.
Feb 13 22:26:04.034625 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 22:26:04.725318 sshd[4048]: Connection closed by 147.75.109.163 port 59808
Feb 13 22:26:04.726718 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
Feb 13 22:26:04.733265 systemd[1]: sshd@17-10.230.12.194:22-147.75.109.163:59808.service: Deactivated successfully.
Feb 13 22:26:04.736098 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 22:26:04.737334 systemd-logind[1489]: Session 20 logged out. Waiting for processes to exit.
Feb 13 22:26:04.739190 systemd-logind[1489]: Removed session 20.
Feb 13 22:26:09.883804 systemd[1]: Started sshd@18-10.230.12.194:22-147.75.109.163:40610.service - OpenSSH per-connection server daemon (147.75.109.163:40610).
Feb 13 22:26:10.769499 sshd[4086]: Accepted publickey for core from 147.75.109.163 port 40610 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:26:10.771275 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:26:10.779406 systemd-logind[1489]: New session 21 of user core.
Feb 13 22:26:10.782654 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 22:26:11.461486 sshd[4103]: Connection closed by 147.75.109.163 port 40610
Feb 13 22:26:11.462473 sshd-session[4086]: pam_unix(sshd:session): session closed for user core
Feb 13 22:26:11.467996 systemd[1]: sshd@18-10.230.12.194:22-147.75.109.163:40610.service: Deactivated successfully.
Feb 13 22:26:11.471004 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 22:26:11.472162 systemd-logind[1489]: Session 21 logged out. Waiting for processes to exit.
Feb 13 22:26:11.474274 systemd-logind[1489]: Removed session 21.
Feb 13 22:26:16.626864 systemd[1]: Started sshd@19-10.230.12.194:22-147.75.109.163:40624.service - OpenSSH per-connection server daemon (147.75.109.163:40624).
Feb 13 22:26:17.520276 sshd[4136]: Accepted publickey for core from 147.75.109.163 port 40624 ssh2: RSA SHA256:Yx7fWtREze/vjbfbVXgsOsi8+bAvCeghI7ZLGsIJS+I
Feb 13 22:26:17.522254 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 22:26:17.529714 systemd-logind[1489]: New session 22 of user core.
Feb 13 22:26:17.540648 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 22:26:18.237543 sshd[4140]: Connection closed by 147.75.109.163 port 40624
Feb 13 22:26:18.238781 sshd-session[4136]: pam_unix(sshd:session): session closed for user core
Feb 13 22:26:18.244148 systemd[1]: sshd@19-10.230.12.194:22-147.75.109.163:40624.service: Deactivated successfully.
Feb 13 22:26:18.246494 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 22:26:18.247538 systemd-logind[1489]: Session 22 logged out. Waiting for processes to exit.
Feb 13 22:26:18.249849 systemd-logind[1489]: Removed session 22.