Jan 30 14:05:41.916193 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:29:54 -00 2025
Jan 30 14:05:41.916221 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:05:41.916236 kernel: BIOS-provided physical RAM map:
Jan 30 14:05:41.916245 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 14:05:41.916254 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 14:05:41.916263 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 14:05:41.916274 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 14:05:41.916283 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 14:05:41.916292 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 14:05:41.916301 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 14:05:41.916310 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Jan 30 14:05:41.916322 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 14:05:41.916331 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 14:05:41.916340 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 14:05:41.916351 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 14:05:41.916361 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 14:05:41.916374 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 14:05:41.916384 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 14:05:41.916393 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 14:05:41.916403 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 14:05:41.916412 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 14:05:41.916422 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 14:05:41.916433 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 14:05:41.916444 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:05:41.916456 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 14:05:41.916465 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 14:05:41.916475 kernel: NX (Execute Disable) protection: active
Jan 30 14:05:41.916487 kernel: APIC: Static calls initialized
Jan 30 14:05:41.916496 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 14:05:41.916506 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Jan 30 14:05:41.916530 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 14:05:41.916540 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Jan 30 14:05:41.916549 kernel: extended physical RAM map:
Jan 30 14:05:41.916559 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 14:05:41.916569 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Jan 30 14:05:41.916589 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Jan 30 14:05:41.916599 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Jan 30 14:05:41.916620 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Jan 30 14:05:41.916640 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Jan 30 14:05:41.916662 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Jan 30 14:05:41.916680 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Jan 30 14:05:41.916694 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Jan 30 14:05:41.916714 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Jan 30 14:05:41.916738 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Jan 30 14:05:41.916759 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Jan 30 14:05:41.916790 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Jan 30 14:05:41.916808 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Jan 30 14:05:41.916818 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Jan 30 14:05:41.916828 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Jan 30 14:05:41.916838 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Jan 30 14:05:41.916847 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Jan 30 14:05:41.916857 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Jan 30 14:05:41.916866 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Jan 30 14:05:41.916876 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Jan 30 14:05:41.916889 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Jan 30 14:05:41.916898 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Jan 30 14:05:41.916908 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 30 14:05:41.916917 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 14:05:41.916927 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 30 14:05:41.916937 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 14:05:41.916947 kernel: efi: EFI v2.7 by EDK II
Jan 30 14:05:41.916956 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Jan 30 14:05:41.916966 kernel: random: crng init done
Jan 30 14:05:41.916975 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 30 14:05:41.916985 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 30 14:05:41.916995 kernel: secureboot: Secure boot disabled
Jan 30 14:05:41.917009 kernel: SMBIOS 2.8 present.
Jan 30 14:05:41.917019 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jan 30 14:05:41.917040 kernel: Hypervisor detected: KVM
Jan 30 14:05:41.917050 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 14:05:41.917060 kernel: kvm-clock: using sched offset of 2697012084 cycles
Jan 30 14:05:41.917070 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 14:05:41.917080 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 14:05:41.917099 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 14:05:41.917110 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 14:05:41.917120 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Jan 30 14:05:41.917134 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 14:05:41.917145 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 14:05:41.917154 kernel: Using GB pages for direct mapping
Jan 30 14:05:41.917165 kernel: ACPI: Early table checksum verification disabled
Jan 30 14:05:41.917175 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Jan 30 14:05:41.917185 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 30 14:05:41.917195 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917205 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917215 kernel: ACPI: FACS 0x000000009CBDD000 000040
Jan 30 14:05:41.917229 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917238 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917249 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917259 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 14:05:41.917269 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 30 14:05:41.917279 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Jan 30 14:05:41.917289 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Jan 30 14:05:41.917299 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Jan 30 14:05:41.917308 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Jan 30 14:05:41.917321 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Jan 30 14:05:41.917331 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Jan 30 14:05:41.917342 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Jan 30 14:05:41.917352 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Jan 30 14:05:41.917362 kernel: No NUMA configuration found
Jan 30 14:05:41.917373 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Jan 30 14:05:41.917384 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Jan 30 14:05:41.917394 kernel: Zone ranges:
Jan 30 14:05:41.917405 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 14:05:41.917418 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Jan 30 14:05:41.917429 kernel: Normal empty
Jan 30 14:05:41.917440 kernel: Movable zone start for each node
Jan 30 14:05:41.917450 kernel: Early memory node ranges
Jan 30 14:05:41.917461 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 14:05:41.917472 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Jan 30 14:05:41.917483 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Jan 30 14:05:41.917494 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Jan 30 14:05:41.917504 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Jan 30 14:05:41.917532 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Jan 30 14:05:41.917543 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Jan 30 14:05:41.917553 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Jan 30 14:05:41.917564 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Jan 30 14:05:41.917585 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:05:41.917595 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 14:05:41.917616 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Jan 30 14:05:41.917629 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 14:05:41.917640 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Jan 30 14:05:41.917651 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 30 14:05:41.917662 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 30 14:05:41.917673 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jan 30 14:05:41.917687 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Jan 30 14:05:41.917698 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 14:05:41.917709 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 14:05:41.917720 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 14:05:41.917731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 14:05:41.917745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 14:05:41.917756 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 14:05:41.917766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 14:05:41.917776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 14:05:41.917787 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 14:05:41.917798 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 14:05:41.917808 kernel: TSC deadline timer available
Jan 30 14:05:41.917819 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 14:05:41.917830 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 14:05:41.917840 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 14:05:41.917855 kernel: kvm-guest: setup PV sched yield
Jan 30 14:05:41.917865 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jan 30 14:05:41.917876 kernel: Booting paravirtualized kernel on KVM
Jan 30 14:05:41.917888 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 14:05:41.917899 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 14:05:41.917910 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 14:05:41.917920 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 14:05:41.917930 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 14:05:41.917940 kernel: kvm-guest: PV spinlocks enabled
Jan 30 14:05:41.917954 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 14:05:41.917965 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466
Jan 30 14:05:41.917976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 14:05:41.917986 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 14:05:41.917996 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 14:05:41.918007 kernel: Fallback order for Node 0: 0
Jan 30 14:05:41.918017 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Jan 30 14:05:41.918027 kernel: Policy zone: DMA32
Jan 30 14:05:41.918041 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 14:05:41.918052 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 177824K reserved, 0K cma-reserved)
Jan 30 14:05:41.918062 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 14:05:41.918072 kernel: ftrace: allocating 37893 entries in 149 pages
Jan 30 14:05:41.918082 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 14:05:41.918092 kernel: Dynamic Preempt: voluntary
Jan 30 14:05:41.918103 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 14:05:41.918114 kernel: rcu: RCU event tracing is enabled.
Jan 30 14:05:41.918125 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 14:05:41.918139 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 14:05:41.918150 kernel: Rude variant of Tasks RCU enabled.
Jan 30 14:05:41.918160 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 14:05:41.918170 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 14:05:41.918181 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 14:05:41.918192 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 14:05:41.918203 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 14:05:41.918214 kernel: Console: colour dummy device 80x25
Jan 30 14:05:41.918224 kernel: printk: console [ttyS0] enabled
Jan 30 14:05:41.918237 kernel: ACPI: Core revision 20230628
Jan 30 14:05:41.918248 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 14:05:41.918270 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 14:05:41.918289 kernel: x2apic enabled
Jan 30 14:05:41.918308 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 14:05:41.918335 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 14:05:41.918345 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 14:05:41.918355 kernel: kvm-guest: setup PV IPIs
Jan 30 14:05:41.918366 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 14:05:41.918380 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 14:05:41.918391 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 14:05:41.918401 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 14:05:41.918416 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 14:05:41.918427 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 14:05:41.918437 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 14:05:41.918447 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 14:05:41.918458 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 14:05:41.918468 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 14:05:41.918481 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 14:05:41.918492 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 14:05:41.918502 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 14:05:41.918528 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 14:05:41.918539 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 14:05:41.918549 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 14:05:41.918560 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 14:05:41.918570 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 14:05:41.918595 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 14:05:41.918605 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 14:05:41.918616 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 14:05:41.918627 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 14:05:41.918637 kernel: Freeing SMP alternatives memory: 32K
Jan 30 14:05:41.918647 kernel: pid_max: default: 32768 minimum: 301
Jan 30 14:05:41.918657 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 14:05:41.918668 kernel: landlock: Up and running.
Jan 30 14:05:41.918678 kernel: SELinux: Initializing.
Jan 30 14:05:41.918692 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:05:41.918702 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 14:05:41.918713 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 14:05:41.918724 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 14:05:41.918734 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 14:05:41.918745 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 14:05:41.918755 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 14:05:41.918765 kernel: ... version: 0
Jan 30 14:05:41.918776 kernel: ... bit width: 48
Jan 30 14:05:41.918791 kernel: ... generic registers: 6
Jan 30 14:05:41.918801 kernel: ... value mask: 0000ffffffffffff
Jan 30 14:05:41.918812 kernel: ... max period: 00007fffffffffff
Jan 30 14:05:41.918822 kernel: ... fixed-purpose events: 0
Jan 30 14:05:41.918833 kernel: ... event mask: 000000000000003f
Jan 30 14:05:41.918843 kernel: signal: max sigframe size: 1776
Jan 30 14:05:41.918854 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 14:05:41.918865 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 14:05:41.918876 kernel: smp: Bringing up secondary CPUs ...
Jan 30 14:05:41.918890 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 14:05:41.918901 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 14:05:41.918911 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 14:05:41.918922 kernel: smpboot: Max logical packages: 1
Jan 30 14:05:41.918933 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 14:05:41.918944 kernel: devtmpfs: initialized
Jan 30 14:05:41.918954 kernel: x86/mm: Memory block size: 128MB
Jan 30 14:05:41.918965 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Jan 30 14:05:41.918976 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Jan 30 14:05:41.918991 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Jan 30 14:05:41.919002 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Jan 30 14:05:41.919013 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Jan 30 14:05:41.919024 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Jan 30 14:05:41.919034 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 14:05:41.919045 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 14:05:41.919055 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 14:05:41.919066 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 14:05:41.919076 kernel: audit: initializing netlink subsys (disabled)
Jan 30 14:05:41.919090 kernel: audit: type=2000 audit(1738245940.985:1): state=initialized audit_enabled=0 res=1
Jan 30 14:05:41.919100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 14:05:41.919112 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 14:05:41.919123 kernel: cpuidle: using governor menu
Jan 30 14:05:41.919133 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 14:05:41.919143 kernel: dca service started, version 1.12.1
Jan 30 14:05:41.919154 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Jan 30 14:05:41.919164 kernel: PCI: Using configuration type 1 for base access
Jan 30 14:05:41.919174 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 14:05:41.919198 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 14:05:41.919217 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 14:05:41.919236 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 14:05:41.919247 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 14:05:41.919257 kernel: ACPI: Added _OSI(Module Device)
Jan 30 14:05:41.919267 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 14:05:41.919277 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 14:05:41.919287 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 14:05:41.919302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 14:05:41.919315 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 14:05:41.919325 kernel: ACPI: Interpreter enabled
Jan 30 14:05:41.919335 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 14:05:41.919346 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 14:05:41.919356 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 14:05:41.919366 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 14:05:41.919377 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 14:05:41.919387 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 14:05:41.919653 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 14:05:41.919823 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 14:05:41.919947 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 14:05:41.919957 kernel: PCI host bridge to bus 0000:00
Jan 30 14:05:41.920093 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 14:05:41.920207 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 14:05:41.920319 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 14:05:41.920435 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jan 30 14:05:41.920581 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 30 14:05:41.920703 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jan 30 14:05:41.920825 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 14:05:41.920979 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 14:05:41.921131 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 14:05:41.921254 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Jan 30 14:05:41.921383 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Jan 30 14:05:41.921586 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Jan 30 14:05:41.921725 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Jan 30 14:05:41.921869 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 14:05:41.922004 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 14:05:41.922126 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Jan 30 14:05:41.922252 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Jan 30 14:05:41.922372 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Jan 30 14:05:41.922501 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 14:05:41.922678 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Jan 30 14:05:41.922820 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Jan 30 14:05:41.922942 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Jan 30 14:05:41.923069 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 14:05:41.923197 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Jan 30 14:05:41.923317 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Jan 30 14:05:41.923439 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Jan 30 14:05:41.923603 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Jan 30 14:05:41.923780 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 14:05:41.923940 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 14:05:41.924101 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 14:05:41.924261 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Jan 30 14:05:41.924407 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Jan 30 14:05:41.924590 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 14:05:41.924744 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Jan 30 14:05:41.924759 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 14:05:41.924770 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 14:05:41.924780 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 14:05:41.924795 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 14:05:41.924805 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 14:05:41.924815 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 14:05:41.924826 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 14:05:41.924836 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 14:05:41.924846 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 14:05:41.924857 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 14:05:41.924868 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 14:05:41.924878 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 14:05:41.924892 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 14:05:41.924903 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 14:05:41.924913 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 14:05:41.924924 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 14:05:41.924935 kernel: iommu: Default domain type: Translated
Jan 30 14:05:41.924945 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 14:05:41.924955 kernel: efivars: Registered efivars operations
Jan 30 14:05:41.924966 kernel: PCI: Using ACPI for IRQ routing
Jan 30 14:05:41.924976 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 14:05:41.924987 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Jan 30 14:05:41.925001 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Jan 30 14:05:41.925011 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Jan 30 14:05:41.925022 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Jan 30 14:05:41.925033 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Jan 30 14:05:41.925043 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Jan 30 14:05:41.925054 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Jan 30 14:05:41.925064 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Jan 30 14:05:41.925226 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 14:05:41.925389 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 14:05:41.925568 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 14:05:41.925597 kernel: vgaarb: loaded
Jan 30 14:05:41.925609 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 14:05:41.925620 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 14:05:41.925631 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 14:05:41.925641 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 14:05:41.925652 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 14:05:41.925663 kernel: pnp: PnP ACPI init
Jan 30 14:05:41.925838 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 30 14:05:41.925856 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 14:05:41.925867 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 14:05:41.925878 kernel: NET: Registered PF_INET protocol family
Jan 30 14:05:41.925914 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 14:05:41.925928 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 14:05:41.925939 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 14:05:41.925950 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 14:05:41.925965 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 14:05:41.925976 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 14:05:41.925987 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:05:41.925998 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 14:05:41.926009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 14:05:41.926020 kernel: NET: Registered PF_XDP protocol family
Jan 30 14:05:41.926182 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Jan 30 14:05:41.926344 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Jan 30 14:05:41.926500 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 14:05:41.926677 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 14:05:41.926818 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 14:05:41.926957 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jan 30 14:05:41.927096 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 30 14:05:41.927239 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jan 30 14:05:41.927255 kernel: PCI: CLS 0 bytes, default 64
Jan 30 14:05:41.927267 kernel: Initialise system trusted keyrings
Jan 30 14:05:41.927283 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 14:05:41.927294 kernel: Key type asymmetric registered
Jan 30 14:05:41.927305 kernel: Asymmetric key parser 'x509' registered
Jan 30 14:05:41.927316 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 14:05:41.927327 kernel: io scheduler mq-deadline registered
Jan 30 14:05:41.927338 kernel: io scheduler kyber registered
Jan 30 14:05:41.927349 kernel: io scheduler bfq registered
Jan 30 14:05:41.927360 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 14:05:41.927371 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 14:05:41.927386 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 14:05:41.927400 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 14:05:41.927411 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 14:05:41.927422 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 14:05:41.927433 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 14:05:41.927445 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 14:05:41.927459 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 14:05:41.927650 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 14:05:41.927668 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 14:05:41.927812 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 14:05:41.927959 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T14:05:41 UTC (1738245941)
Jan 30 14:05:41.928102 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 30 14:05:41.928118 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 14:05:41.928129 kernel: efifb: probing for efifb
Jan 30 14:05:41.928146 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jan 30 14:05:41.928157 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 30 14:05:41.928168 kernel: efifb: scrolling: redraw
Jan 30 14:05:41.928179 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 14:05:41.928190 kernel: Console: switching to colour frame buffer device 160x50
Jan 30 14:05:41.928201 kernel: fb0: EFI VGA frame buffer device
Jan 30 14:05:41.928211 kernel: pstore: Using crash dump compression: deflate
Jan 30 14:05:41.928223 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 14:05:41.928234 kernel: NET: Registered PF_INET6 protocol family
Jan 30 14:05:41.928247 kernel: Segment Routing with IPv6
Jan 30 14:05:41.928258 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 14:05:41.928269 kernel: NET: Registered PF_PACKET protocol family
Jan 30 14:05:41.928280 kernel: Key type dns_resolver registered
Jan 30 14:05:41.928291 kernel: IPI shorthand broadcast: enabled
Jan 30 14:05:41.928305 kernel: sched_clock: Marking stable (665003721, 173831308)->(881335829, -42500800)
Jan 30 14:05:41.928316 kernel: registered taskstats version 1
Jan 30 14:05:41.928327 kernel: Loading compiled-in X.509 certificates
Jan 30 14:05:41.928338 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 7f0738935740330d55027faa5877e7155d5f24f4'
Jan 30 14:05:41.928352 kernel: Key type .fscrypt registered
Jan 30 14:05:41.928363 kernel: Key type fscrypt-provisioning registered
Jan 30 14:05:41.928374 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 14:05:41.928385 kernel: ima: Allocated hash algorithm: sha1 Jan 30 14:05:41.928395 kernel: ima: No architecture policies found Jan 30 14:05:41.928406 kernel: clk: Disabling unused clocks Jan 30 14:05:41.928417 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 30 14:05:41.928428 kernel: Write protecting the kernel read-only data: 38912k Jan 30 14:05:41.928439 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 30 14:05:41.928453 kernel: Run /init as init process Jan 30 14:05:41.928466 kernel: with arguments: Jan 30 14:05:41.928478 kernel: /init Jan 30 14:05:41.928490 kernel: with environment: Jan 30 14:05:41.928500 kernel: HOME=/ Jan 30 14:05:41.928786 kernel: TERM=linux Jan 30 14:05:41.928800 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 14:05:41.928814 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 14:05:41.928833 systemd[1]: Detected virtualization kvm. Jan 30 14:05:41.928845 systemd[1]: Detected architecture x86-64. Jan 30 14:05:41.928856 systemd[1]: Running in initrd. Jan 30 14:05:41.928867 systemd[1]: No hostname configured, using default hostname. Jan 30 14:05:41.928879 systemd[1]: Hostname set to . Jan 30 14:05:41.928892 systemd[1]: Initializing machine ID from VM UUID. Jan 30 14:05:41.928905 systemd[1]: Queued start job for default target initrd.target. Jan 30 14:05:41.928918 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 14:05:41.928933 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 30 14:05:41.928946 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 14:05:41.928958 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 14:05:41.928970 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 14:05:41.928982 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 14:05:41.928997 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 14:05:41.929012 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 14:05:41.929024 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 14:05:41.929035 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:05:41.929047 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:05:41.929058 systemd[1]: Reached target slices.target - Slice Units. Jan 30 14:05:41.929070 systemd[1]: Reached target swap.target - Swaps. Jan 30 14:05:41.929082 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:05:41.929093 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 14:05:41.929105 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 14:05:41.929121 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 14:05:41.929132 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 14:05:41.929144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 14:05:41.929156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 14:05:41.929168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 14:05:41.929180 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:05:41.929191 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 14:05:41.929203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 14:05:41.929215 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 14:05:41.929230 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 14:05:41.929242 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 14:05:41.929254 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 14:05:41.929265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:41.929277 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 14:05:41.929288 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 14:05:41.929300 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 14:05:41.929343 systemd-journald[192]: Collecting audit messages is disabled. Jan 30 14:05:41.929377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 14:05:41.929389 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:41.929402 systemd-journald[192]: Journal started Jan 30 14:05:41.929432 systemd-journald[192]: Runtime Journal (/run/log/journal/65e2a9eba7bf4ef28149ce24e91573d0) is 6.0M, max 48.2M, 42.2M free. Jan 30 14:05:41.917162 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 14:05:41.952591 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 14:05:41.956750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 14:05:41.956828 kernel: Bridge firewalling registered Jan 30 14:05:41.956870 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 14:05:41.960058 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 14:05:41.960687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 14:05:41.963542 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 14:05:41.968887 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:05:41.982681 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 14:05:41.984471 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:05:41.986402 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:05:41.990417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:05:42.002092 dracut-cmdline[217]: dracut-dracut-053 Jan 30 14:05:42.002373 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:05:42.002745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:05:42.009070 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fe60919b0c6f6abb7495678f87f7024e97a038fc343fa31a123a43ef5f489466 Jan 30 14:05:42.005739 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:05:42.021818 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 30 14:05:42.058358 systemd-resolved[245]: Positive Trust Anchors: Jan 30 14:05:42.058378 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:05:42.058416 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:05:42.061116 systemd-resolved[245]: Defaulting to hostname 'linux'. Jan 30 14:05:42.062272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 14:05:42.068368 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:05:42.091540 kernel: SCSI subsystem initialized Jan 30 14:05:42.101531 kernel: Loading iSCSI transport class v2.0-870. Jan 30 14:05:42.111545 kernel: iscsi: registered transport (tcp) Jan 30 14:05:42.131926 kernel: iscsi: registered transport (qla4xxx) Jan 30 14:05:42.131975 kernel: QLogic iSCSI HBA Driver Jan 30 14:05:42.184160 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 14:05:42.196632 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 14:05:42.221396 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 30 14:05:42.221466 kernel: device-mapper: uevent: version 1.0.3 Jan 30 14:05:42.221479 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 14:05:42.262547 kernel: raid6: avx2x4 gen() 29887 MB/s Jan 30 14:05:42.279532 kernel: raid6: avx2x2 gen() 31155 MB/s Jan 30 14:05:42.296618 kernel: raid6: avx2x1 gen() 26014 MB/s Jan 30 14:05:42.296688 kernel: raid6: using algorithm avx2x2 gen() 31155 MB/s Jan 30 14:05:42.314652 kernel: raid6: .... xor() 20009 MB/s, rmw enabled Jan 30 14:05:42.314704 kernel: raid6: using avx2x2 recovery algorithm Jan 30 14:05:42.335540 kernel: xor: automatically using best checksumming function avx Jan 30 14:05:42.483549 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 14:05:42.496250 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 14:05:42.516696 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:05:42.527977 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 30 14:05:42.532582 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:05:42.543665 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 14:05:42.559213 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation Jan 30 14:05:42.591698 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 14:05:42.604646 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 14:05:42.667601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:05:42.677739 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 14:05:42.691025 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
Jan 30 14:05:42.706643 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 14:05:42.714636 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 14:05:42.714778 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 14:05:42.714791 kernel: GPT:9289727 != 19775487 Jan 30 14:05:42.714801 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 14:05:42.714811 kernel: GPT:9289727 != 19775487 Jan 30 14:05:42.714821 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 14:05:42.714831 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:05:42.691801 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 14:05:42.692300 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 14:05:42.718369 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 14:05:42.692931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 14:05:42.701681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 14:05:42.731789 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 14:05:42.731913 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 14:05:42.736867 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:05:42.741245 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:05:42.748235 kernel: libata version 3.00 loaded. Jan 30 14:05:42.741398 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:42.744671 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:42.754732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:42.757047 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
Jan 30 14:05:42.761609 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Jan 30 14:05:42.762986 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 14:05:42.766528 kernel: AES CTR mode by8 optimization enabled Jan 30 14:05:42.771524 kernel: BTRFS: device fsid f8084233-4a6f-4e67-af0b-519e43b19e58 devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (467) Jan 30 14:05:42.775547 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 14:05:42.796227 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 14:05:42.796242 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 14:05:42.796401 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 14:05:42.796578 kernel: scsi host0: ahci Jan 30 14:05:42.796734 kernel: scsi host1: ahci Jan 30 14:05:42.796882 kernel: scsi host2: ahci Jan 30 14:05:42.797026 kernel: scsi host3: ahci Jan 30 14:05:42.797173 kernel: scsi host4: ahci Jan 30 14:05:42.797312 kernel: scsi host5: ahci Jan 30 14:05:42.797456 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 30 14:05:42.797468 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 30 14:05:42.797478 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 30 14:05:42.797489 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 30 14:05:42.797499 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 30 14:05:42.797585 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 30 14:05:42.786063 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 14:05:42.787778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:42.802543 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 30 14:05:42.810204 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 14:05:42.816383 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 14:05:42.836602 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 14:05:42.849667 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 14:05:42.861574 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:05:42.861635 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:42.864787 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:42.867601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:42.881360 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:42.884605 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 14:05:42.908678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 30 14:05:43.106831 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 14:05:43.106903 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 14:05:43.106915 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 14:05:43.108835 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 14:05:43.108875 kernel: ata3.00: applying bridge limits Jan 30 14:05:43.109531 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 14:05:43.110547 kernel: ata3.00: configured for UDMA/100 Jan 30 14:05:43.110569 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 14:05:43.113529 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 14:05:43.113558 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 14:05:43.127617 disk-uuid[558]: Primary Header is updated. Jan 30 14:05:43.127617 disk-uuid[558]: Secondary Entries is updated. Jan 30 14:05:43.127617 disk-uuid[558]: Secondary Header is updated. Jan 30 14:05:43.131548 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:05:43.137566 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:05:43.166951 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 14:05:43.183630 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 14:05:43.183648 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 14:05:44.153545 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 14:05:44.153825 disk-uuid[574]: The operation has completed successfully. Jan 30 14:05:44.180716 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 14:05:44.180838 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 14:05:44.203713 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Jan 30 14:05:44.207379 sh[600]: Success Jan 30 14:05:44.221600 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 14:05:44.257644 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 14:05:44.270957 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 14:05:44.275644 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 14:05:44.286852 kernel: BTRFS info (device dm-0): first mount of filesystem f8084233-4a6f-4e67-af0b-519e43b19e58 Jan 30 14:05:44.286894 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:05:44.286908 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 14:05:44.287862 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 14:05:44.288599 kernel: BTRFS info (device dm-0): using free space tree Jan 30 14:05:44.292943 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 14:05:44.293654 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 14:05:44.303631 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 14:05:44.304498 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 14:05:44.314392 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:05:44.314429 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:05:44.314444 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:05:44.317560 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:05:44.326152 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 30 14:05:44.351930 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:05:44.412451 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 14:05:44.449715 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:05:44.475089 systemd-networkd[778]: lo: Link UP Jan 30 14:05:44.475101 systemd-networkd[778]: lo: Gained carrier Jan 30 14:05:44.477077 systemd-networkd[778]: Enumeration completed Jan 30 14:05:44.477157 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 14:05:44.477569 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:05:44.477573 systemd-networkd[778]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:05:44.478685 systemd-networkd[778]: eth0: Link UP Jan 30 14:05:44.478689 systemd-networkd[778]: eth0: Gained carrier Jan 30 14:05:44.478697 systemd-networkd[778]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:05:44.479318 systemd[1]: Reached target network.target - Network. Jan 30 14:05:44.528557 systemd-networkd[778]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:05:44.635027 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 14:05:44.645776 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 14:05:44.696457 ignition[783]: Ignition 2.20.0 Jan 30 14:05:44.696471 ignition[783]: Stage: fetch-offline Jan 30 14:05:44.696546 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:05:44.696560 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 14:05:44.696683 ignition[783]: parsed url from cmdline: "" Jan 30 14:05:44.696689 ignition[783]: no config URL provided Jan 30 14:05:44.696695 ignition[783]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 14:05:44.696705 ignition[783]: no config at "/usr/lib/ignition/user.ign" Jan 30 14:05:44.696740 ignition[783]: op(1): [started] loading QEMU firmware config module Jan 30 14:05:44.696746 ignition[783]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 14:05:44.702953 ignition[783]: op(1): [finished] loading QEMU firmware config module Jan 30 14:05:44.722242 ignition[783]: parsing config with SHA512: b3f263d46a3cdb6f9b5fb04084aa0ab127a67619b7de962a630dce30dda81548ccfd9cd1cc742e61baf49bc1e53b1adf87af5a31846891d9ebf3ef549b4c416f Jan 30 14:05:44.726894 unknown[783]: fetched base config from "system" Jan 30 14:05:44.726906 unknown[783]: fetched user config from "qemu" Jan 30 14:05:44.728307 ignition[783]: fetch-offline: fetch-offline passed Jan 30 14:05:44.728398 ignition[783]: Ignition finished successfully Jan 30 14:05:44.730454 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 14:05:44.733884 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 14:05:44.741771 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 30 14:05:44.755938 ignition[793]: Ignition 2.20.0 Jan 30 14:05:44.755950 ignition[793]: Stage: kargs Jan 30 14:05:44.756103 ignition[793]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:05:44.756113 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 14:05:44.756871 ignition[793]: kargs: kargs passed Jan 30 14:05:44.756914 ignition[793]: Ignition finished successfully Jan 30 14:05:44.760907 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 14:05:44.772675 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 14:05:44.785128 ignition[802]: Ignition 2.20.0 Jan 30 14:05:44.785139 ignition[802]: Stage: disks Jan 30 14:05:44.785290 ignition[802]: no configs at "/usr/lib/ignition/base.d" Jan 30 14:05:44.785302 ignition[802]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 14:05:44.788951 ignition[802]: disks: disks passed Jan 30 14:05:44.788996 ignition[802]: Ignition finished successfully Jan 30 14:05:44.792429 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 14:05:44.793677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 14:05:44.795709 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 14:05:44.795930 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 14:05:44.796262 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:05:44.796766 systemd[1]: Reached target basic.target - Basic System. Jan 30 14:05:44.816841 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 14:05:44.842416 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 14:05:45.189605 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 14:05:45.212658 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 30 14:05:45.332534 kernel: EXT4-fs (vda9): mounted filesystem cdc615db-d057-439f-af25-aa57b1c399e2 r/w with ordered data mode. Quota mode: none. Jan 30 14:05:45.332879 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 14:05:45.333555 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 14:05:45.351619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 14:05:45.354297 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 14:05:45.354628 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 14:05:45.354670 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 14:05:45.354690 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 14:05:45.372566 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 14:05:45.375366 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 14:05:45.384530 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (820) Jan 30 14:05:45.387109 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:05:45.387132 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 14:05:45.387142 kernel: BTRFS info (device vda6): using free space tree Jan 30 14:05:45.390541 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 14:05:45.391842 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 14:05:45.412790 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 14:05:45.417555 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory Jan 30 14:05:45.422108 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 14:05:45.426518 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 14:05:45.530351 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 14:05:45.556585 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 14:05:45.558200 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 14:05:45.563240 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 14:05:45.564587 kernel: BTRFS info (device vda6): last unmount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c Jan 30 14:05:45.654889 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 14:05:45.744252 ignition[937]: INFO : Ignition 2.20.0 Jan 30 14:05:45.744252 ignition[937]: INFO : Stage: mount Jan 30 14:05:45.751662 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 14:05:45.751662 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 14:05:45.751662 ignition[937]: INFO : mount: mount passed Jan 30 14:05:45.751662 ignition[937]: INFO : Ignition finished successfully Jan 30 14:05:45.747153 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 14:05:45.766740 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 14:05:46.303702 systemd-networkd[778]: eth0: Gained IPv6LL Jan 30 14:05:46.350839 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 30 14:05:46.357532 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (947)
Jan 30 14:05:46.359582 kernel: BTRFS info (device vda6): first mount of filesystem 8f723f8b-dc93-4eaf-8b2c-0038aa5af52c
Jan 30 14:05:46.359601 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 14:05:46.359612 kernel: BTRFS info (device vda6): using free space tree
Jan 30 14:05:46.362546 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 30 14:05:46.364983 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 14:05:46.396946 ignition[964]: INFO : Ignition 2.20.0
Jan 30 14:05:46.396946 ignition[964]: INFO : Stage: files
Jan 30 14:05:46.399037 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:05:46.399037 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 14:05:46.399037 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 14:05:46.399037 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 14:05:46.399037 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 14:05:46.406908 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 14:05:46.406908 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 14:05:46.406908 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 14:05:46.406908 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 14:05:46.406908 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 30 14:05:46.402111 unknown[964]: wrote ssh authorized keys file for user: core
Jan 30 14:05:46.446569 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 30 14:05:46.545940 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 30 14:05:46.545940 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 14:05:46.549699 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 14:05:46.549699 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:05:46.553118 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 14:05:46.554814 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:05:46.556558 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 14:05:46.558271 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:05:46.560034 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 14:05:46.562129 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:05:46.563949 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 14:05:46.565703 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:05:46.568205 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:05:46.570637 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:05:46.572736 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Jan 30 14:05:47.086794 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 30 14:05:47.366200 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Jan 30 14:05:47.366200 ignition[964]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 30 14:05:47.370819 ignition[964]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jan 30 14:05:47.393691 ignition[964]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 14:05:47.398578 ignition[964]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 14:05:47.400210 ignition[964]: INFO : files: files passed
Jan 30 14:05:47.400210 ignition[964]: INFO : Ignition finished successfully
Jan 30 14:05:47.401707 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 14:05:47.411779 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 14:05:47.416232 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 14:05:47.418944 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 14:05:47.419957 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 14:05:47.427221 initrd-setup-root-after-ignition[993]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 30 14:05:47.431879 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:05:47.431879 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:05:47.435371 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 14:05:47.436749 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:05:47.438118 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 14:05:47.446672 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 14:05:47.472204 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 14:05:47.472332 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 14:05:47.474909 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 14:05:47.477022 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 14:05:47.479041 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 14:05:47.479869 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 14:05:47.499445 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:05:47.511670 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 14:05:47.524901 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 14:05:47.527545 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:05:47.527746 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 14:05:47.528048 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 14:05:47.528185 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 14:05:47.535034 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 14:05:47.537359 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 14:05:47.537577 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 14:05:47.539379 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 14:05:47.539943 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 14:05:47.540326 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 14:05:47.540895 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 14:05:47.541291 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 14:05:47.541865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 14:05:47.542234 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 14:05:47.542777 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 14:05:47.542936 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 14:05:47.558688 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 14:05:47.558864 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:05:47.562141 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 14:05:47.562268 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:05:47.565581 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 14:05:47.565755 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 14:05:47.570205 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 14:05:47.570346 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 14:05:47.571561 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 14:05:47.573689 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 14:05:47.577594 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:05:47.580391 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 14:05:47.582287 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 14:05:47.584299 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 14:05:47.585230 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 14:05:47.587256 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 14:05:47.588217 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 14:05:47.590622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 14:05:47.591912 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 14:05:47.594535 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 14:05:47.595572 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 14:05:47.609707 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 14:05:47.611657 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 14:05:47.612753 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:05:47.616090 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 14:05:47.618260 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 14:05:47.619586 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 14:05:47.622503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 14:05:47.623802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 14:05:47.627906 ignition[1019]: INFO : Ignition 2.20.0
Jan 30 14:05:47.627906 ignition[1019]: INFO : Stage: umount
Jan 30 14:05:47.627906 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 14:05:47.627906 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 14:05:47.627906 ignition[1019]: INFO : umount: umount passed
Jan 30 14:05:47.627906 ignition[1019]: INFO : Ignition finished successfully
Jan 30 14:05:47.634463 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 14:05:47.635546 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 14:05:47.639452 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 14:05:47.640590 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 14:05:47.644675 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 14:05:47.646134 systemd[1]: Stopped target network.target - Network.
Jan 30 14:05:47.648137 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 14:05:47.649135 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 14:05:47.651505 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 14:05:47.652738 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 14:05:47.655080 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 14:05:47.656010 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 14:05:47.658009 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 14:05:47.658987 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 14:05:47.661255 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 14:05:47.663737 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 14:05:47.667576 systemd-networkd[778]: eth0: DHCPv6 lease lost
Jan 30 14:05:47.669541 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 14:05:47.670882 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 14:05:47.674845 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 14:05:47.675886 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 14:05:47.679786 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 14:05:47.679837 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:05:47.693618 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 14:05:47.694555 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 14:05:47.694610 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 14:05:47.696850 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 14:05:47.696899 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 14:05:47.698945 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 14:05:47.698995 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:05:47.701237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 14:05:47.701285 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 14:05:47.702228 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 14:05:47.713323 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 14:05:47.713466 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 14:05:47.732184 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 14:05:47.732355 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 14:05:47.733402 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 14:05:47.733459 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:05:47.735606 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 14:05:47.735644 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:05:47.737544 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 14:05:47.737590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 14:05:47.739972 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 14:05:47.740018 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 14:05:47.759140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 14:05:47.759186 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 14:05:47.775693 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 14:05:47.776949 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 14:05:47.777003 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 14:05:47.778259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 14:05:47.778305 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 14:05:47.782837 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 14:05:47.782947 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 14:05:47.836845 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 14:05:47.836977 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 14:05:47.839433 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 14:05:47.840923 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 14:05:47.840985 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 14:05:47.861726 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 14:05:47.868977 systemd[1]: Switching root.
Jan 30 14:05:47.904548 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jan 30 14:05:47.904621 systemd-journald[192]: Journal stopped
Jan 30 14:05:49.204582 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 14:05:49.204667 kernel: SELinux: policy capability open_perms=1
Jan 30 14:05:49.204688 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 14:05:49.204702 kernel: SELinux: policy capability always_check_network=0
Jan 30 14:05:49.204721 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 14:05:49.204736 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 14:05:49.204752 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 14:05:49.204767 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 14:05:49.204782 kernel: audit: type=1403 audit(1738245948.364:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 14:05:49.204811 systemd[1]: Successfully loaded SELinux policy in 45.382ms.
Jan 30 14:05:49.204844 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.296ms.
Jan 30 14:05:49.204863 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 14:05:49.204880 systemd[1]: Detected virtualization kvm.
Jan 30 14:05:49.204899 systemd[1]: Detected architecture x86-64.
Jan 30 14:05:49.204917 systemd[1]: Detected first boot.
Jan 30 14:05:49.204933 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 14:05:49.204950 zram_generator::config[1064]: No configuration found.
Jan 30 14:05:49.204968 systemd[1]: Populated /etc with preset unit settings.
Jan 30 14:05:49.204984 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 14:05:49.205000 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 14:05:49.205016 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 14:05:49.205037 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 14:05:49.205054 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 14:05:49.205070 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 14:05:49.205087 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 14:05:49.205109 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 14:05:49.205127 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 14:05:49.205144 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 14:05:49.205168 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 14:05:49.205185 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 14:05:49.205205 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 14:05:49.205222 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 14:05:49.205238 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 14:05:49.205255 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 14:05:49.205272 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 14:05:49.205291 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 14:05:49.205308 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 14:05:49.205324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 14:05:49.205341 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 14:05:49.205361 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 14:05:49.205391 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 14:05:49.205407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 14:05:49.205424 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 14:05:49.205440 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 14:05:49.205457 systemd[1]: Reached target swap.target - Swaps.
Jan 30 14:05:49.205472 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 14:05:49.205489 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 14:05:49.205523 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 14:05:49.205541 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 14:05:49.205558 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 14:05:49.205574 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 14:05:49.205590 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 14:05:49.205606 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 14:05:49.205622 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 14:05:49.205639 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:05:49.205656 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 14:05:49.205678 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 14:05:49.205695 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 14:05:49.205711 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 14:05:49.205728 systemd[1]: Reached target machines.target - Containers.
Jan 30 14:05:49.205745 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 14:05:49.205761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 14:05:49.205778 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 14:05:49.205795 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 14:05:49.205814 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 14:05:49.205831 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 14:05:49.205847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 14:05:49.205864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 14:05:49.205880 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 14:05:49.205898 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 14:05:49.205914 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 14:05:49.205930 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 14:05:49.205950 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 14:05:49.205966 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 14:05:49.205982 kernel: fuse: init (API version 7.39)
Jan 30 14:05:49.205997 kernel: loop: module loaded
Jan 30 14:05:49.206013 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 14:05:49.206031 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 14:05:49.206048 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 14:05:49.206064 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 14:05:49.206080 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 14:05:49.206096 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 14:05:49.206121 systemd[1]: Stopped verity-setup.service.
Jan 30 14:05:49.206138 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 14:05:49.206154 kernel: ACPI: bus type drm_connector registered
Jan 30 14:05:49.206169 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 14:05:49.206208 systemd-journald[1134]: Collecting audit messages is disabled.
Jan 30 14:05:49.206240 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 14:05:49.206261 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 14:05:49.206278 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 14:05:49.206294 systemd-journald[1134]: Journal started
Jan 30 14:05:49.206323 systemd-journald[1134]: Runtime Journal (/run/log/journal/65e2a9eba7bf4ef28149ce24e91573d0) is 6.0M, max 48.2M, 42.2M free.
Jan 30 14:05:48.966201 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 14:05:48.986678 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 14:05:48.987142 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 14:05:49.209769 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 14:05:49.210277 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 14:05:49.211613 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 14:05:49.212930 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 14:05:49.214380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 14:05:49.215905 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 14:05:49.216072 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 14:05:49.217551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 14:05:49.217717 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 14:05:49.219187 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 14:05:49.219406 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 14:05:49.220863 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 14:05:49.221067 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 14:05:49.222747 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 14:05:49.222944 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 14:05:49.224487 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 14:05:49.224849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 14:05:49.226392 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 14:05:49.227975 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 14:05:49.229706 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 14:05:49.242916 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 14:05:49.254601 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 14:05:49.257055 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 14:05:49.258335 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 14:05:49.258384 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 14:05:49.260879 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 14:05:49.263593 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 14:05:49.268103 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 14:05:49.269431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 14:05:49.273108 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 14:05:49.275454 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 14:05:49.276850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 14:05:49.277847 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 14:05:49.279250 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 14:05:49.280575 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 14:05:49.284375 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 14:05:49.288613 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 14:05:49.292350 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 14:05:49.293996 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 14:05:49.302141 systemd-journald[1134]: Time spent on flushing to /var/log/journal/65e2a9eba7bf4ef28149ce24e91573d0 is 25.768ms for 1045 entries. Jan 30 14:05:49.302141 systemd-journald[1134]: System Journal (/var/log/journal/65e2a9eba7bf4ef28149ce24e91573d0) is 8.0M, max 195.6M, 187.6M free. Jan 30 14:05:49.337607 systemd-journald[1134]: Received client request to flush runtime journal. Jan 30 14:05:49.337746 kernel: loop0: detected capacity change from 0 to 138184 Jan 30 14:05:49.298486 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 14:05:49.300760 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 14:05:49.307272 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 14:05:49.324889 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 14:05:49.329969 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 14:05:49.339707 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 14:05:49.343329 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 14:05:49.357255 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 14:05:49.359757 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 30 14:05:49.370410 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 14:05:49.375889 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 14:05:49.373139 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 14:05:49.374120 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 14:05:49.379328 udevadm[1192]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 14:05:49.393458 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 14:05:49.393481 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. Jan 30 14:05:49.400312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 14:05:49.408541 kernel: loop1: detected capacity change from 0 to 218376 Jan 30 14:05:49.442542 kernel: loop2: detected capacity change from 0 to 141000 Jan 30 14:05:49.475753 kernel: loop3: detected capacity change from 0 to 138184 Jan 30 14:05:49.488569 kernel: loop4: detected capacity change from 0 to 218376 Jan 30 14:05:49.501546 kernel: loop5: detected capacity change from 0 to 141000 Jan 30 14:05:49.514816 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 14:05:49.515474 (sd-merge)[1202]: Merged extensions into '/usr'. Jan 30 14:05:49.520031 systemd[1]: Reloading requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 14:05:49.520152 systemd[1]: Reloading... Jan 30 14:05:49.581543 zram_generator::config[1227]: No configuration found. Jan 30 14:05:49.626311 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 30 14:05:49.706336 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:05:49.755533 systemd[1]: Reloading finished in 232 ms. Jan 30 14:05:49.797726 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 14:05:49.799366 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 14:05:49.815747 systemd[1]: Starting ensure-sysext.service... Jan 30 14:05:49.818722 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 14:05:49.825066 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)... Jan 30 14:05:49.825082 systemd[1]: Reloading... Jan 30 14:05:49.843939 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 14:05:49.844316 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 14:05:49.845725 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 14:05:49.846072 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 30 14:05:49.846171 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Jan 30 14:05:49.850931 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:05:49.850946 systemd-tmpfiles[1266]: Skipping /boot Jan 30 14:05:49.871272 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 14:05:49.871290 systemd-tmpfiles[1266]: Skipping /boot Jan 30 14:05:49.898549 zram_generator::config[1296]: No configuration found. 
Jan 30 14:05:50.023072 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:05:50.086963 systemd[1]: Reloading finished in 261 ms. Jan 30 14:05:50.107669 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 14:05:50.120153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 14:05:50.130771 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:05:50.133609 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 14:05:50.136404 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 14:05:50.142450 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 14:05:50.145967 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 14:05:50.150617 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 14:05:50.155024 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.155248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:05:50.159196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:05:50.163163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:05:50.166796 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:05:50.168589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:05:50.171332 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 30 14:05:50.173013 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.174293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:05:50.174909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:05:50.181201 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:05:50.181432 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:05:50.184483 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:05:50.184747 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:05:50.192010 systemd-udevd[1337]: Using default interface naming scheme 'v255'. Jan 30 14:05:50.194434 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 14:05:50.198104 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 14:05:50.204069 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.204587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:05:50.214064 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 14:05:50.217673 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:05:50.218231 augenrules[1367]: No rules Jan 30 14:05:50.221731 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:05:50.223160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:05:50.227423 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 30 14:05:50.228670 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.230078 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:05:50.230354 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:05:50.232229 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 14:05:50.233944 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 14:05:50.237012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:05:50.238803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:05:50.240940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:05:50.241138 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 14:05:50.243122 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:05:50.243349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:05:50.260935 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 14:05:50.266885 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 14:05:50.278417 systemd[1]: Finished ensure-sysext.service. Jan 30 14:05:50.285787 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 14:05:50.286677 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.297792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 30 14:05:50.299055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 14:05:50.301756 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jan 30 14:05:50.313730 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 14:05:50.315431 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 14:05:50.320683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 14:05:50.321858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 14:05:50.325187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 14:05:50.328841 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 14:05:50.329994 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 14:05:50.330041 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 14:05:50.330819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 14:05:50.331068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 14:05:50.333016 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 14:05:50.333286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 14:05:50.334789 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 14:05:50.335053 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 30 14:05:50.346560 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1397) Jan 30 14:05:50.350984 augenrules[1406]: /sbin/augenrules: No change Jan 30 14:05:50.361280 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 14:05:50.368094 augenrules[1438]: No rules Jan 30 14:05:50.368939 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 14:05:50.369211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 14:05:50.371079 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 14:05:50.372724 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 30 14:05:50.383100 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 14:05:50.390363 systemd-resolved[1335]: Positive Trust Anchors: Jan 30 14:05:50.390875 systemd-resolved[1335]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 14:05:50.390920 systemd-resolved[1335]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 14:05:50.396841 systemd-resolved[1335]: Defaulting to hostname 'linux'. Jan 30 14:05:50.401534 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 14:05:50.401579 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 30 14:05:50.406056 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 14:05:50.408650 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 14:05:50.416067 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 30 14:05:50.416425 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 14:05:50.416642 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 14:05:50.416865 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 14:05:50.430697 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 14:05:50.452543 kernel: ACPI: button: Power Button [PWRF] Jan 30 14:05:50.459914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 14:05:50.463608 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 14:05:50.485275 systemd-networkd[1421]: lo: Link UP Jan 30 14:05:50.486280 systemd-networkd[1421]: lo: Gained carrier Jan 30 14:05:50.489979 systemd-networkd[1421]: Enumeration completed Jan 30 14:05:50.490375 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:05:50.490379 systemd-networkd[1421]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 14:05:50.491578 systemd-networkd[1421]: eth0: Link UP Jan 30 14:05:50.491629 systemd-networkd[1421]: eth0: Gained carrier Jan 30 14:05:50.491688 systemd-networkd[1421]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 14:05:50.530109 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:50.532317 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 14:05:50.532546 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 14:05:50.533537 systemd-networkd[1421]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 14:05:50.533853 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 14:05:50.536914 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection. Jan 30 14:05:50.538087 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 14:05:50.538135 systemd-timesyncd[1422]: Initial clock synchronization to Thu 2025-01-30 14:05:50.763721 UTC. Jan 30 14:05:50.539230 systemd[1]: Reached target network.target - Network. Jan 30 14:05:50.541804 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 14:05:50.552727 kernel: kvm_amd: TSC scaling supported Jan 30 14:05:50.552782 kernel: kvm_amd: Nested Virtualization enabled Jan 30 14:05:50.552796 kernel: kvm_amd: Nested Paging enabled Jan 30 14:05:50.554033 kernel: kvm_amd: LBR virtualization supported Jan 30 14:05:50.554070 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 14:05:50.554716 kernel: kvm_amd: Virtual GIF supported Jan 30 14:05:50.568839 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 14:05:50.570913 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 14:05:50.572569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 14:05:50.579562 kernel: EDAC MC: Ver: 3.0.0 Jan 30 14:05:50.583856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 14:05:50.611132 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 14:05:50.621743 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 14:05:50.623544 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 30 14:05:50.631403 lvm[1467]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:05:50.664758 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 14:05:50.666547 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 14:05:50.667905 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 14:05:50.669225 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 14:05:50.670681 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 14:05:50.672357 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 14:05:50.673746 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 14:05:50.675234 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 14:05:50.676712 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 14:05:50.676739 systemd[1]: Reached target paths.target - Path Units. Jan 30 14:05:50.677780 systemd[1]: Reached target timers.target - Timer Units. Jan 30 14:05:50.679655 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 14:05:50.682502 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 14:05:50.690957 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 14:05:50.693550 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 14:05:50.695402 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 14:05:50.696746 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 14:05:50.697859 systemd[1]: Reached target basic.target - Basic System. 
Jan 30 14:05:50.698968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:05:50.698995 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 14:05:50.700014 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 14:05:50.702692 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 14:05:50.706794 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 14:05:50.709507 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 14:05:50.711043 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 14:05:50.713156 jq[1476]: false Jan 30 14:05:50.713741 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 14:05:50.714141 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 14:05:50.717036 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 14:05:50.722740 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 14:05:50.745695 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 14:05:50.751967 extend-filesystems[1477]: Found loop3 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found loop4 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found loop5 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found sr0 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda1 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda2 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda3 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found usr Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda4 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda6 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda7 Jan 30 14:05:50.753165 extend-filesystems[1477]: Found vda9 Jan 30 14:05:50.753165 extend-filesystems[1477]: Checking size of /dev/vda9 Jan 30 14:05:50.754922 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 14:05:50.763009 dbus-daemon[1475]: [system] SELinux support is enabled Jan 30 14:05:50.765496 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 14:05:50.766204 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 14:05:50.767122 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 14:05:50.769676 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 14:05:50.773179 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 14:05:50.777666 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 14:05:50.780236 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 14:05:50.780541 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 30 14:05:50.780964 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 14:05:50.781225 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 14:05:50.783694 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 14:05:50.784616 update_engine[1494]: I20250130 14:05:50.783826 1494 main.cc:92] Flatcar Update Engine starting Jan 30 14:05:50.783975 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 14:05:50.785096 update_engine[1494]: I20250130 14:05:50.785060 1494 update_check_scheduler.cc:74] Next update check in 11m19s Jan 30 14:05:50.793362 jq[1495]: true Jan 30 14:05:50.799342 (ntainerd)[1502]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 14:05:50.803943 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 14:05:50.803996 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 14:05:50.808550 extend-filesystems[1477]: Resized partition /dev/vda9 Jan 30 14:05:50.809760 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 14:05:50.809794 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 14:05:50.812138 systemd[1]: Started update-engine.service - Update Engine. 
Jan 30 14:05:50.814390 extend-filesystems[1510]: resize2fs 1.47.1 (20-May-2024) Jan 30 14:05:50.865298 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1393) Jan 30 14:05:50.865338 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 14:05:50.817214 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 14:05:50.865429 tar[1497]: linux-amd64/LICENSE Jan 30 14:05:50.865429 tar[1497]: linux-amd64/helm Jan 30 14:05:50.843054 systemd-logind[1488]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 14:05:50.865919 jq[1503]: true Jan 30 14:05:50.843079 systemd-logind[1488]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 14:05:50.845756 systemd-logind[1488]: New seat seat0. Jan 30 14:05:50.847015 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 14:05:50.896125 sshd_keygen[1493]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 14:05:50.920889 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 14:05:50.927832 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 14:05:50.935948 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 14:05:50.936241 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 14:05:50.947625 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 14:05:50.988659 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 14:05:50.999133 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 14:05:51.000867 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 14:05:51.020705 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 14:05:51.022629 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 30 14:05:51.030357 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 14:05:51.033633 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:35644.service - OpenSSH per-connection server daemon (10.0.0.1:35644). Jan 30 14:05:51.125573 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 14:05:51.158531 systemd[1]: sshd@0-10.0.0.12:22-10.0.0.1:35644.service: Deactivated successfully. Jan 30 14:05:51.594028 sshd[1557]: Connection closed by authenticating user core 10.0.0.1 port 35644 [preauth] Jan 30 14:05:51.555301 systemd-networkd[1421]: eth0: Gained IPv6LL Jan 30 14:05:51.594443 containerd[1502]: time="2025-01-30T14:05:51.594221135Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 30 14:05:51.558555 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 14:05:51.560647 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 14:05:51.574772 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 14:05:51.595352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:05:51.596695 extend-filesystems[1510]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 14:05:51.596695 extend-filesystems[1510]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 14:05:51.596695 extend-filesystems[1510]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 14:05:51.603196 extend-filesystems[1477]: Resized filesystem in /dev/vda9 Jan 30 14:05:51.599977 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 14:05:51.606465 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 14:05:51.606763 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 14:05:51.629125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 30 14:05:51.631546 containerd[1502]: time="2025-01-30T14:05:51.631296010Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.632965 containerd[1502]: time="2025-01-30T14:05:51.632937273Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:05:51.632998 containerd[1502]: time="2025-01-30T14:05:51.632963861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 14:05:51.632998 containerd[1502]: time="2025-01-30T14:05:51.632978253Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 14:05:51.633152 containerd[1502]: time="2025-01-30T14:05:51.633132610Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 14:05:51.633188 containerd[1502]: time="2025-01-30T14:05:51.633153295Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633364 containerd[1502]: time="2025-01-30T14:05:51.633220440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633364 containerd[1502]: time="2025-01-30T14:05:51.633234204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633433 containerd[1502]: time="2025-01-30T14:05:51.633411832Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633433 containerd[1502]: time="2025-01-30T14:05:51.633429684Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633481 containerd[1502]: time="2025-01-30T14:05:51.633442118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633481 containerd[1502]: time="2025-01-30T14:05:51.633451616Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633808 containerd[1502]: time="2025-01-30T14:05:51.633554579Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633808 containerd[1502]: time="2025-01-30T14:05:51.633787722Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633921 containerd[1502]: time="2025-01-30T14:05:51.633899793Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 14:05:51.633921 containerd[1502]: time="2025-01-30T14:05:51.633917603Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 14:05:51.634026 containerd[1502]: time="2025-01-30T14:05:51.634009348Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 14:05:51.634089 containerd[1502]: time="2025-01-30T14:05:51.634072280Z" level=info msg="metadata content store policy set" policy=shared Jan 30 14:05:51.649122 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 14:05:51.649417 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 14:05:51.670007 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 14:05:51.790850 tar[1497]: linux-amd64/README.md Jan 30 14:05:51.804199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 14:05:51.931592 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Jan 30 14:05:51.933777 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 14:05:51.936038 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 14:05:51.960227 containerd[1502]: time="2025-01-30T14:05:51.960061135Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 14:05:51.960227 containerd[1502]: time="2025-01-30T14:05:51.960143661Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 14:05:51.960227 containerd[1502]: time="2025-01-30T14:05:51.960161771Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 14:05:51.960227 containerd[1502]: time="2025-01-30T14:05:51.960180725Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 14:05:51.960227 containerd[1502]: time="2025-01-30T14:05:51.960197022Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 30 14:05:51.960486 containerd[1502]: time="2025-01-30T14:05:51.960403937Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 14:05:51.960760 containerd[1502]: time="2025-01-30T14:05:51.960706863Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 14:05:51.960911 containerd[1502]: time="2025-01-30T14:05:51.960855575Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 14:05:51.960911 containerd[1502]: time="2025-01-30T14:05:51.960881916Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 14:05:51.960911 containerd[1502]: time="2025-01-30T14:05:51.960904621Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 14:05:51.960911 containerd[1502]: time="2025-01-30T14:05:51.960921484Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.960937503Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.960951307Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.960966316Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.960981150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.960995325Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961008893Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961023665Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961045215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961059514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961073225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961086421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961101 containerd[1502]: time="2025-01-30T14:05:51.961101678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961117840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961130975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961144398Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961159273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961175045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961187613Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961200314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961213078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961227788Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961249442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961262597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961273815Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961315639Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 14:05:51.961361 containerd[1502]: time="2025-01-30T14:05:51.961336459Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961347945Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961360935Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961370453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961393539Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961404541Z" level=info msg="NRI interface is disabled by configuration." Jan 30 14:05:51.961696 containerd[1502]: time="2025-01-30T14:05:51.961415111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 14:05:51.961832 containerd[1502]: time="2025-01-30T14:05:51.961767544Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 14:05:51.961832 containerd[1502]: time="2025-01-30T14:05:51.961825831Z" level=info msg="Connect containerd service" Jan 30 14:05:51.962007 containerd[1502]: time="2025-01-30T14:05:51.961858826Z" level=info msg="using legacy CRI server" Jan 30 14:05:51.962007 containerd[1502]: time="2025-01-30T14:05:51.961868324Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 14:05:51.962077 containerd[1502]: time="2025-01-30T14:05:51.962056244Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 14:05:51.962900 containerd[1502]: time="2025-01-30T14:05:51.962839959Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963019072Z" level=info msg="Start subscribing containerd event" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963095231Z" level=info msg="Start recovering state" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963230931Z" level=info msg="Start event monitor" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963236258Z" level=info 
msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963251309Z" level=info msg="Start snapshots syncer" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963260878Z" level=info msg="Start cni network conf syncer for default" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963268914Z" level=info msg="Start streaming server" Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963297582Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 14:05:51.963089 containerd[1502]: time="2025-01-30T14:05:51.963368591Z" level=info msg="containerd successfully booted in 0.684590s" Jan 30 14:05:51.963739 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 14:05:52.537090 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:05:52.539146 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 14:05:52.540410 systemd[1]: Startup finished in 808ms (kernel) + 6.645s (initrd) + 4.219s (userspace) = 11.672s. 
Jan 30 14:05:52.545502 (kubelet)[1592]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:05:52.553759 agetty[1544]: failed to open credentials directory Jan 30 14:05:52.553789 agetty[1555]: failed to open credentials directory Jan 30 14:05:52.976011 kubelet[1592]: E0130 14:05:52.975832 1592 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:05:52.980185 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:05:52.980461 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:06:01.332927 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:46386.service - OpenSSH per-connection server daemon (10.0.0.1:46386). Jan 30 14:06:01.371403 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 46386 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:06:01.373105 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:06:01.382166 systemd-logind[1488]: New session 1 of user core. Jan 30 14:06:01.383440 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 14:06:01.389764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 14:06:01.401533 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 14:06:01.404253 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 14:06:01.411998 (systemd)[1610]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 14:06:01.520909 systemd[1610]: Queued start job for default target default.target. 
Jan 30 14:06:01.531869 systemd[1610]: Created slice app.slice - User Application Slice. Jan 30 14:06:01.531894 systemd[1610]: Reached target paths.target - Paths. Jan 30 14:06:01.531908 systemd[1610]: Reached target timers.target - Timers. Jan 30 14:06:01.533394 systemd[1610]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 14:06:01.545004 systemd[1610]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 14:06:01.545151 systemd[1610]: Reached target sockets.target - Sockets. Jan 30 14:06:01.545174 systemd[1610]: Reached target basic.target - Basic System. Jan 30 14:06:01.545217 systemd[1610]: Reached target default.target - Main User Target. Jan 30 14:06:01.545255 systemd[1610]: Startup finished in 126ms. Jan 30 14:06:01.545702 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 14:06:01.547404 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 14:06:01.608284 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:46398.service - OpenSSH per-connection server daemon (10.0.0.1:46398). Jan 30 14:06:01.650639 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 46398 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:06:01.652080 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:06:01.656299 systemd-logind[1488]: New session 2 of user core. Jan 30 14:06:01.665653 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 14:06:01.719763 sshd[1623]: Connection closed by 10.0.0.1 port 46398 Jan 30 14:06:01.720158 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Jan 30 14:06:01.729380 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:46398.service: Deactivated successfully. Jan 30 14:06:01.731071 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 14:06:01.732862 systemd-logind[1488]: Session 2 logged out. Waiting for processes to exit. 
Jan 30 14:06:01.734181 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:46414.service - OpenSSH per-connection server daemon (10.0.0.1:46414). Jan 30 14:06:01.735003 systemd-logind[1488]: Removed session 2. Jan 30 14:06:01.782965 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 46414 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:06:01.784301 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:06:01.788144 systemd-logind[1488]: New session 3 of user core. Jan 30 14:06:01.797629 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 14:06:01.847654 sshd[1630]: Connection closed by 10.0.0.1 port 46414 Jan 30 14:06:01.848064 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jan 30 14:06:01.861100 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:46414.service: Deactivated successfully. Jan 30 14:06:01.862706 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 14:06:01.864319 systemd-logind[1488]: Session 3 logged out. Waiting for processes to exit. Jan 30 14:06:01.865491 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:46428.service - OpenSSH per-connection server daemon (10.0.0.1:46428). Jan 30 14:06:01.866270 systemd-logind[1488]: Removed session 3. Jan 30 14:06:01.902327 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 46428 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:06:01.904009 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:06:01.907753 systemd-logind[1488]: New session 4 of user core. Jan 30 14:06:01.917691 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 14:06:01.972784 sshd[1637]: Connection closed by 10.0.0.1 port 46428 Jan 30 14:06:01.973120 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Jan 30 14:06:01.982454 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:46428.service: Deactivated successfully. 
Jan 30 14:06:01.984031 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 14:06:01.985384 systemd-logind[1488]: Session 4 logged out. Waiting for processes to exit. Jan 30 14:06:01.994760 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:46436.service - OpenSSH per-connection server daemon (10.0.0.1:46436). Jan 30 14:06:01.995578 systemd-logind[1488]: Removed session 4. Jan 30 14:06:02.028499 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 46436 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:06:02.030447 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:06:02.034412 systemd-logind[1488]: New session 5 of user core. Jan 30 14:06:02.042706 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 14:06:02.099992 sudo[1645]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 14:06:02.100336 sudo[1645]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 14:06:02.366773 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 14:06:02.366889 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 14:06:02.601711 dockerd[1665]: time="2025-01-30T14:06:02.601644457Z" level=info msg="Starting up" Jan 30 14:06:02.696596 dockerd[1665]: time="2025-01-30T14:06:02.696454258Z" level=info msg="Loading containers: start." Jan 30 14:06:02.875538 kernel: Initializing XFRM netlink socket Jan 30 14:06:02.954267 systemd-networkd[1421]: docker0: Link UP Jan 30 14:06:02.988914 dockerd[1665]: time="2025-01-30T14:06:02.988876011Z" level=info msg="Loading containers: done." Jan 30 14:06:03.002796 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2517642895-merged.mount: Deactivated successfully. 
Jan 30 14:06:03.003881 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 14:06:03.006037 dockerd[1665]: time="2025-01-30T14:06:03.005993787Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 14:06:03.006100 dockerd[1665]: time="2025-01-30T14:06:03.006063885Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jan 30 14:06:03.006206 dockerd[1665]: time="2025-01-30T14:06:03.006178909Z" level=info msg="Daemon has completed initialization" Jan 30 14:06:03.015745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:03.183947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:06:03.188442 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:06:03.268429 kubelet[1855]: E0130 14:06:03.268291 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:06:03.274694 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:06:03.274892 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:06:03.294279 dockerd[1665]: time="2025-01-30T14:06:03.294213932Z" level=info msg="API listen on /run/docker.sock" Jan 30 14:06:03.294360 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 30 14:06:03.847383 containerd[1502]: time="2025-01-30T14:06:03.847337892Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 14:06:04.914230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065664545.mount: Deactivated successfully. Jan 30 14:06:05.826805 containerd[1502]: time="2025-01-30T14:06:05.826749275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:05.827571 containerd[1502]: time="2025-01-30T14:06:05.827531746Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 14:06:05.828854 containerd[1502]: time="2025-01-30T14:06:05.828807042Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:05.831707 containerd[1502]: time="2025-01-30T14:06:05.831638788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:05.832581 containerd[1502]: time="2025-01-30T14:06:05.832542420Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.985158051s" Jan 30 14:06:05.832581 containerd[1502]: time="2025-01-30T14:06:05.832577548Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 14:06:05.833183 containerd[1502]: 
time="2025-01-30T14:06:05.833156961Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 14:06:06.989668 containerd[1502]: time="2025-01-30T14:06:06.989616175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:06.990711 containerd[1502]: time="2025-01-30T14:06:06.990679464Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 14:06:06.992100 containerd[1502]: time="2025-01-30T14:06:06.992071907Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:06.995289 containerd[1502]: time="2025-01-30T14:06:06.995254212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:06.996877 containerd[1502]: time="2025-01-30T14:06:06.996824090Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.163613243s" Jan 30 14:06:06.996877 containerd[1502]: time="2025-01-30T14:06:06.996875138Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 14:06:06.997545 containerd[1502]: time="2025-01-30T14:06:06.997469835Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 
14:06:08.823551 containerd[1502]: time="2025-01-30T14:06:08.823473539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:08.824492 containerd[1502]: time="2025-01-30T14:06:08.824448104Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 14:06:08.825860 containerd[1502]: time="2025-01-30T14:06:08.825793088Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:08.828762 containerd[1502]: time="2025-01-30T14:06:08.828741672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:08.830046 containerd[1502]: time="2025-01-30T14:06:08.830014692Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.832514846s" Jan 30 14:06:08.830131 containerd[1502]: time="2025-01-30T14:06:08.830055578Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 14:06:08.830629 containerd[1502]: time="2025-01-30T14:06:08.830599546Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 14:06:09.854263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3760363173.mount: Deactivated successfully. 
Jan 30 14:06:10.583658 containerd[1502]: time="2025-01-30T14:06:10.583576293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:10.584455 containerd[1502]: time="2025-01-30T14:06:10.584378993Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 14:06:10.585447 containerd[1502]: time="2025-01-30T14:06:10.585400840Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:10.588295 containerd[1502]: time="2025-01-30T14:06:10.588242736Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:10.588932 containerd[1502]: time="2025-01-30T14:06:10.588873371Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.758234746s" Jan 30 14:06:10.588932 containerd[1502]: time="2025-01-30T14:06:10.588917481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 14:06:10.589426 containerd[1502]: time="2025-01-30T14:06:10.589385569Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 14:06:11.066656 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount896495326.mount: Deactivated successfully. 
Jan 30 14:06:11.882351 containerd[1502]: time="2025-01-30T14:06:11.882295485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:11.883191 containerd[1502]: time="2025-01-30T14:06:11.883160580Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 14:06:11.884376 containerd[1502]: time="2025-01-30T14:06:11.884344955Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:11.887254 containerd[1502]: time="2025-01-30T14:06:11.887210133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:11.888583 containerd[1502]: time="2025-01-30T14:06:11.888536822Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.299086255s" Jan 30 14:06:11.888623 containerd[1502]: time="2025-01-30T14:06:11.888585146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 14:06:11.889166 containerd[1502]: time="2025-01-30T14:06:11.889008132Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 14:06:12.656158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2653257013.mount: Deactivated successfully. 
Jan 30 14:06:12.663231 containerd[1502]: time="2025-01-30T14:06:12.663187638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:12.664016 containerd[1502]: time="2025-01-30T14:06:12.663945830Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 14:06:12.665177 containerd[1502]: time="2025-01-30T14:06:12.665141479Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:12.667302 containerd[1502]: time="2025-01-30T14:06:12.667267877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:12.667952 containerd[1502]: time="2025-01-30T14:06:12.667921305Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 778.887808ms" Jan 30 14:06:12.667998 containerd[1502]: time="2025-01-30T14:06:12.667949184Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 14:06:12.668437 containerd[1502]: time="2025-01-30T14:06:12.668402707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 14:06:13.142289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611468176.mount: Deactivated successfully. Jan 30 14:06:13.525601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 30 14:06:13.534667 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:13.924482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:06:13.928930 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 14:06:14.451433 kubelet[2026]: E0130 14:06:14.451377 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 14:06:14.455919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 14:06:14.456125 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 14:06:16.211326 containerd[1502]: time="2025-01-30T14:06:16.211245238Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:16.214280 containerd[1502]: time="2025-01-30T14:06:16.214194565Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 14:06:16.215899 containerd[1502]: time="2025-01-30T14:06:16.215851347Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:16.219004 containerd[1502]: time="2025-01-30T14:06:16.218975886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:16.220073 containerd[1502]: time="2025-01-30T14:06:16.220038390Z" level=info msg="Pulled image 
\"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.551603154s" Jan 30 14:06:16.220135 containerd[1502]: time="2025-01-30T14:06:16.220070602Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 14:06:18.529991 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:06:18.539736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:18.564297 systemd[1]: Reloading requested from client PID 2107 ('systemctl') (unit session-5.scope)... Jan 30 14:06:18.564312 systemd[1]: Reloading... Jan 30 14:06:18.642617 zram_generator::config[2146]: No configuration found. Jan 30 14:06:18.832028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:06:18.908488 systemd[1]: Reloading finished in 343 ms. Jan 30 14:06:18.957799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:18.961075 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:06:18.961300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:06:18.962942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:19.116208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:06:19.120574 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:06:19.154561 kubelet[2196]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:06:19.154561 kubelet[2196]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 14:06:19.154561 kubelet[2196]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:06:19.154947 kubelet[2196]: I0130 14:06:19.154642 2196 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:06:19.440719 kubelet[2196]: I0130 14:06:19.440627 2196 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:06:19.440719 kubelet[2196]: I0130 14:06:19.440651 2196 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:06:19.440928 kubelet[2196]: I0130 14:06:19.440865 2196 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:06:19.462942 kubelet[2196]: I0130 14:06:19.462912 2196 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:06:19.463259 kubelet[2196]: E0130 14:06:19.463222 2196 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:19.471392 kubelet[2196]: E0130 14:06:19.471348 2196 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:06:19.471392 kubelet[2196]: I0130 14:06:19.471380 2196 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:06:19.476399 kubelet[2196]: I0130 14:06:19.476359 2196 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:06:19.477444 kubelet[2196]: I0130 14:06:19.477398 2196 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:06:19.477621 kubelet[2196]: I0130 14:06:19.477427 2196 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:06:19.477621 kubelet[2196]: I0130 14:06:19.477618 2196 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:06:19.477758 kubelet[2196]: I0130 14:06:19.477630 2196 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:06:19.477799 kubelet[2196]: I0130 14:06:19.477772 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:06:19.480193 kubelet[2196]: I0130 14:06:19.480164 2196 kubelet.go:446] "Attempting 
to sync node with API server" Jan 30 14:06:19.480193 kubelet[2196]: I0130 14:06:19.480183 2196 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:06:19.480273 kubelet[2196]: I0130 14:06:19.480201 2196 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:06:19.480273 kubelet[2196]: I0130 14:06:19.480213 2196 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:06:19.483612 kubelet[2196]: I0130 14:06:19.483166 2196 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 14:06:19.483612 kubelet[2196]: W0130 14:06:19.483565 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:19.483612 kubelet[2196]: I0130 14:06:19.483601 2196 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:06:19.483759 kubelet[2196]: E0130 14:06:19.483613 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:19.483759 kubelet[2196]: W0130 14:06:19.483566 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:19.483759 kubelet[2196]: E0130 14:06:19.483643 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:19.485942 kubelet[2196]: W0130 14:06:19.485917 2196 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 30 14:06:19.487992 kubelet[2196]: I0130 14:06:19.487967 2196 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:06:19.488030 kubelet[2196]: I0130 14:06:19.488003 2196 server.go:1287] "Started kubelet" Jan 30 14:06:19.488798 kubelet[2196]: I0130 14:06:19.488305 2196 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:06:19.488798 kubelet[2196]: I0130 14:06:19.488639 2196 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:06:19.488798 kubelet[2196]: I0130 14:06:19.488691 2196 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:06:19.489450 kubelet[2196]: I0130 14:06:19.489312 2196 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:06:19.490246 kubelet[2196]: I0130 14:06:19.489596 2196 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:06:19.490246 kubelet[2196]: I0130 14:06:19.489884 2196 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:06:19.490549 kubelet[2196]: E0130 14:06:19.490504 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:19.490591 kubelet[2196]: I0130 14:06:19.490558 2196 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:06:19.491170 kubelet[2196]: I0130 14:06:19.490707 2196 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 
14:06:19.491170 kubelet[2196]: I0130 14:06:19.490754 2196 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:06:19.491170 kubelet[2196]: W0130 14:06:19.491092 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:19.491170 kubelet[2196]: E0130 14:06:19.491134 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:19.491554 kubelet[2196]: E0130 14:06:19.491526 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="200ms" Jan 30 14:06:19.492355 kubelet[2196]: E0130 14:06:19.492258 2196 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:06:19.493065 kubelet[2196]: I0130 14:06:19.492735 2196 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:06:19.493065 kubelet[2196]: I0130 14:06:19.492818 2196 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:06:19.494498 kubelet[2196]: I0130 14:06:19.494300 2196 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:06:19.497831 kubelet[2196]: E0130 14:06:19.492229 2196 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.12:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.12:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7d823ba1d482 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 14:06:19.487982722 +0000 UTC m=+0.363488433,LastTimestamp:2025-01-30 14:06:19.487982722 +0000 UTC m=+0.363488433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 14:06:19.505234 kubelet[2196]: I0130 14:06:19.505213 2196 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:06:19.505374 kubelet[2196]: I0130 14:06:19.505363 2196 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:06:19.505639 kubelet[2196]: I0130 14:06:19.505430 2196 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:06:19.509026 kubelet[2196]: I0130 14:06:19.508993 2196 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jan 30 14:06:19.510411 kubelet[2196]: I0130 14:06:19.510395 2196 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:06:19.510456 kubelet[2196]: I0130 14:06:19.510416 2196 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:06:19.510456 kubelet[2196]: I0130 14:06:19.510431 2196 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 14:06:19.510456 kubelet[2196]: I0130 14:06:19.510438 2196 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:06:19.510548 kubelet[2196]: E0130 14:06:19.510477 2196 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:06:19.591504 kubelet[2196]: E0130 14:06:19.591434 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:19.610695 kubelet[2196]: E0130 14:06:19.610663 2196 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 14:06:19.692667 kubelet[2196]: E0130 14:06:19.692534 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:19.692946 kubelet[2196]: E0130 14:06:19.692905 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="400ms" Jan 30 14:06:19.793359 kubelet[2196]: E0130 14:06:19.793306 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:19.811552 kubelet[2196]: E0130 14:06:19.811493 2196 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not 
have completed yet" Jan 30 14:06:19.811937 kubelet[2196]: W0130 14:06:19.811878 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:19.811978 kubelet[2196]: E0130 14:06:19.811941 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:19.815262 kubelet[2196]: I0130 14:06:19.815203 2196 policy_none.go:49] "None policy: Start" Jan 30 14:06:19.815262 kubelet[2196]: I0130 14:06:19.815225 2196 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:06:19.815262 kubelet[2196]: I0130 14:06:19.815237 2196 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:06:19.822609 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 14:06:19.836778 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 14:06:19.839743 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 30 14:06:19.849399 kubelet[2196]: I0130 14:06:19.849368 2196 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:06:19.849603 kubelet[2196]: I0130 14:06:19.849582 2196 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:06:19.849603 kubelet[2196]: I0130 14:06:19.849594 2196 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:06:19.850135 kubelet[2196]: I0130 14:06:19.849800 2196 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:06:19.850377 kubelet[2196]: E0130 14:06:19.850357 2196 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 14:06:19.850422 kubelet[2196]: E0130 14:06:19.850398 2196 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 14:06:19.950995 kubelet[2196]: I0130 14:06:19.950877 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:19.951216 kubelet[2196]: E0130 14:06:19.951190 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 30 14:06:20.093855 kubelet[2196]: E0130 14:06:20.093805 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.12:6443: connect: connection refused" interval="800ms" Jan 30 14:06:20.153058 kubelet[2196]: I0130 14:06:20.153029 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:20.153356 kubelet[2196]: E0130 14:06:20.153326 2196 kubelet_node_status.go:108] "Unable to 
register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 30 14:06:20.219406 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 30 14:06:20.231288 kubelet[2196]: E0130 14:06:20.231247 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:20.233861 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 30 14:06:20.235620 kubelet[2196]: E0130 14:06:20.235589 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:20.237654 systemd[1]: Created slice kubepods-burstable-pod51a8bfece0cffe121c2371f82888940c.slice - libcontainer container kubepods-burstable-pod51a8bfece0cffe121c2371f82888940c.slice. 
Jan 30 14:06:20.239113 kubelet[2196]: E0130 14:06:20.239089 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:20.296518 kubelet[2196]: I0130 14:06:20.296485 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:20.296594 kubelet[2196]: I0130 14:06:20.296540 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:20.296594 kubelet[2196]: I0130 14:06:20.296561 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:20.296594 kubelet[2196]: I0130 14:06:20.296582 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:20.296668 kubelet[2196]: I0130 14:06:20.296609 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:20.296668 kubelet[2196]: I0130 14:06:20.296627 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:20.296668 kubelet[2196]: I0130 14:06:20.296645 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:20.296736 kubelet[2196]: I0130 14:06:20.296672 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:20.296736 kubelet[2196]: I0130 14:06:20.296705 2196 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:20.532030 kubelet[2196]: E0130 14:06:20.531903 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:20.532531 containerd[1502]: time="2025-01-30T14:06:20.532434849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 14:06:20.536829 kubelet[2196]: E0130 14:06:20.536790 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:20.537297 containerd[1502]: time="2025-01-30T14:06:20.537258189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 14:06:20.539628 kubelet[2196]: E0130 14:06:20.539595 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:20.539868 kubelet[2196]: W0130 14:06:20.539810 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:20.539944 kubelet[2196]: E0130 14:06:20.539870 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.12:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:20.540056 containerd[1502]: time="2025-01-30T14:06:20.540026272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51a8bfece0cffe121c2371f82888940c,Namespace:kube-system,Attempt:0,}" Jan 30 14:06:20.555356 kubelet[2196]: 
I0130 14:06:20.555314 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:20.555680 kubelet[2196]: E0130 14:06:20.555646 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 30 14:06:20.655531 kubelet[2196]: W0130 14:06:20.655443 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:20.655531 kubelet[2196]: E0130 14:06:20.655506 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:20.673112 kubelet[2196]: W0130 14:06:20.673045 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:20.673162 kubelet[2196]: E0130 14:06:20.673113 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.12:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:20.895722 kubelet[2196]: E0130 14:06:20.895498 2196 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.12:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.12:6443: connect: connection refused" interval="1.6s" Jan 30 14:06:21.029029 kubelet[2196]: W0130 14:06:21.028971 2196 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.12:6443: connect: connection refused Jan 30 14:06:21.029029 kubelet[2196]: E0130 14:06:21.029029 2196 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.12:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:21.075946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175503532.mount: Deactivated successfully. Jan 30 14:06:21.089626 containerd[1502]: time="2025-01-30T14:06:21.089489693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:06:21.095459 containerd[1502]: time="2025-01-30T14:06:21.095326831Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 14:06:21.101980 containerd[1502]: time="2025-01-30T14:06:21.101787078Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:06:21.103629 containerd[1502]: time="2025-01-30T14:06:21.103543743Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:06:21.104900 containerd[1502]: time="2025-01-30T14:06:21.104712383Z" 
level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:06:21.106483 containerd[1502]: time="2025-01-30T14:06:21.106290521Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:06:21.113543 containerd[1502]: time="2025-01-30T14:06:21.108747408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 14:06:21.121735 containerd[1502]: time="2025-01-30T14:06:21.121603847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 14:06:21.123941 containerd[1502]: time="2025-01-30T14:06:21.122623603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 582.50327ms" Jan 30 14:06:21.123941 containerd[1502]: time="2025-01-30T14:06:21.123723419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.382195ms" Jan 30 14:06:21.125238 containerd[1502]: time="2025-01-30T14:06:21.124963830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 592.365591ms" Jan 30 14:06:21.357671 kubelet[2196]: I0130 14:06:21.357367 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:21.359710 kubelet[2196]: E0130 14:06:21.359319 2196 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.12:6443/api/v1/nodes\": dial tcp 10.0.0.12:6443: connect: connection refused" node="localhost" Jan 30 14:06:21.365540 containerd[1502]: time="2025-01-30T14:06:21.364963468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:06:21.365540 containerd[1502]: time="2025-01-30T14:06:21.365049473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:06:21.365540 containerd[1502]: time="2025-01-30T14:06:21.365066214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.365540 containerd[1502]: time="2025-01-30T14:06:21.365190982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.368314 containerd[1502]: time="2025-01-30T14:06:21.360848883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:06:21.368314 containerd[1502]: time="2025-01-30T14:06:21.367912710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:06:21.368314 containerd[1502]: time="2025-01-30T14:06:21.367963111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.368314 containerd[1502]: time="2025-01-30T14:06:21.368118782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.375927 containerd[1502]: time="2025-01-30T14:06:21.372294934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:06:21.375927 containerd[1502]: time="2025-01-30T14:06:21.372358707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:06:21.375927 containerd[1502]: time="2025-01-30T14:06:21.372378624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.375927 containerd[1502]: time="2025-01-30T14:06:21.372485880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:21.413552 systemd[1]: Started cri-containerd-9b574f5fe559997808b610abf612778d9b9b269f67110c78e975c974b61b2481.scope - libcontainer container 9b574f5fe559997808b610abf612778d9b9b269f67110c78e975c974b61b2481. Jan 30 14:06:21.420847 systemd[1]: Started cri-containerd-4a49fae0e8d194364d71924ac57bdec4880a938ef82feedf034f453e99fa29a0.scope - libcontainer container 4a49fae0e8d194364d71924ac57bdec4880a938ef82feedf034f453e99fa29a0. Jan 30 14:06:21.424578 systemd[1]: Started cri-containerd-ebcf56b3e4f369a9afecc47279da5259595816d5fd04f0ae2eaeb558501ae1c5.scope - libcontainer container ebcf56b3e4f369a9afecc47279da5259595816d5fd04f0ae2eaeb558501ae1c5. 
Jan 30 14:06:21.474440 containerd[1502]: time="2025-01-30T14:06:21.474367550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51a8bfece0cffe121c2371f82888940c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b574f5fe559997808b610abf612778d9b9b269f67110c78e975c974b61b2481\"" Jan 30 14:06:21.476546 kubelet[2196]: E0130 14:06:21.476370 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:21.481003 containerd[1502]: time="2025-01-30T14:06:21.480954399Z" level=info msg="CreateContainer within sandbox \"9b574f5fe559997808b610abf612778d9b9b269f67110c78e975c974b61b2481\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 14:06:21.499592 containerd[1502]: time="2025-01-30T14:06:21.499482010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a49fae0e8d194364d71924ac57bdec4880a938ef82feedf034f453e99fa29a0\"" Jan 30 14:06:21.500729 kubelet[2196]: E0130 14:06:21.500701 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:21.507085 containerd[1502]: time="2025-01-30T14:06:21.505669905Z" level=info msg="CreateContainer within sandbox \"4a49fae0e8d194364d71924ac57bdec4880a938ef82feedf034f453e99fa29a0\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 14:06:21.513955 containerd[1502]: time="2025-01-30T14:06:21.512470695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebcf56b3e4f369a9afecc47279da5259595816d5fd04f0ae2eaeb558501ae1c5\"" Jan 30 
14:06:21.516663 kubelet[2196]: E0130 14:06:21.516485 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:21.520055 containerd[1502]: time="2025-01-30T14:06:21.519649186Z" level=info msg="CreateContainer within sandbox \"ebcf56b3e4f369a9afecc47279da5259595816d5fd04f0ae2eaeb558501ae1c5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 14:06:21.619806 kubelet[2196]: E0130 14:06:21.618946 2196 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.12:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.12:6443: connect: connection refused" logger="UnhandledError" Jan 30 14:06:21.676552 containerd[1502]: time="2025-01-30T14:06:21.676216588Z" level=info msg="CreateContainer within sandbox \"ebcf56b3e4f369a9afecc47279da5259595816d5fd04f0ae2eaeb558501ae1c5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b193c80ee26123bae3d11dd1377279a7761d73266c34b4283ea5943421ff04d1\"" Jan 30 14:06:21.677227 containerd[1502]: time="2025-01-30T14:06:21.677161294Z" level=info msg="StartContainer for \"b193c80ee26123bae3d11dd1377279a7761d73266c34b4283ea5943421ff04d1\"" Jan 30 14:06:21.682898 containerd[1502]: time="2025-01-30T14:06:21.682812899Z" level=info msg="CreateContainer within sandbox \"9b574f5fe559997808b610abf612778d9b9b269f67110c78e975c974b61b2481\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2291e3678b2e9dc24d67026bb21aa3b0763b1a88d433cb4a6d1f82a0a59bf47f\"" Jan 30 14:06:21.683548 containerd[1502]: time="2025-01-30T14:06:21.683477506Z" level=info msg="StartContainer for \"2291e3678b2e9dc24d67026bb21aa3b0763b1a88d433cb4a6d1f82a0a59bf47f\"" Jan 30 
14:06:21.701380 containerd[1502]: time="2025-01-30T14:06:21.698857672Z" level=info msg="CreateContainer within sandbox \"4a49fae0e8d194364d71924ac57bdec4880a938ef82feedf034f453e99fa29a0\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b301bff763507788b75b504cf11d48f075ae99cd306264ccba10825b0d0b24c\"" Jan 30 14:06:21.701380 containerd[1502]: time="2025-01-30T14:06:21.699648029Z" level=info msg="StartContainer for \"9b301bff763507788b75b504cf11d48f075ae99cd306264ccba10825b0d0b24c\"" Jan 30 14:06:21.735226 systemd[1]: Started cri-containerd-b193c80ee26123bae3d11dd1377279a7761d73266c34b4283ea5943421ff04d1.scope - libcontainer container b193c80ee26123bae3d11dd1377279a7761d73266c34b4283ea5943421ff04d1. Jan 30 14:06:21.741358 systemd[1]: Started cri-containerd-2291e3678b2e9dc24d67026bb21aa3b0763b1a88d433cb4a6d1f82a0a59bf47f.scope - libcontainer container 2291e3678b2e9dc24d67026bb21aa3b0763b1a88d433cb4a6d1f82a0a59bf47f. Jan 30 14:06:21.755866 systemd[1]: Started cri-containerd-9b301bff763507788b75b504cf11d48f075ae99cd306264ccba10825b0d0b24c.scope - libcontainer container 9b301bff763507788b75b504cf11d48f075ae99cd306264ccba10825b0d0b24c. 
Jan 30 14:06:21.814982 containerd[1502]: time="2025-01-30T14:06:21.814923843Z" level=info msg="StartContainer for \"b193c80ee26123bae3d11dd1377279a7761d73266c34b4283ea5943421ff04d1\" returns successfully" Jan 30 14:06:21.835399 containerd[1502]: time="2025-01-30T14:06:21.835310674Z" level=info msg="StartContainer for \"2291e3678b2e9dc24d67026bb21aa3b0763b1a88d433cb4a6d1f82a0a59bf47f\" returns successfully" Jan 30 14:06:21.835593 containerd[1502]: time="2025-01-30T14:06:21.835310744Z" level=info msg="StartContainer for \"9b301bff763507788b75b504cf11d48f075ae99cd306264ccba10825b0d0b24c\" returns successfully" Jan 30 14:06:22.549466 kubelet[2196]: E0130 14:06:22.549417 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:22.550002 kubelet[2196]: E0130 14:06:22.549605 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:22.554172 kubelet[2196]: E0130 14:06:22.554132 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:22.554311 kubelet[2196]: E0130 14:06:22.554285 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:22.557653 kubelet[2196]: E0130 14:06:22.557616 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:22.557839 kubelet[2196]: E0130 14:06:22.557811 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:22.964459 
kubelet[2196]: I0130 14:06:22.963450 2196 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:23.367251 kubelet[2196]: E0130 14:06:23.366883 2196 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 14:06:23.500341 kubelet[2196]: I0130 14:06:23.499967 2196 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 14:06:23.500341 kubelet[2196]: E0130 14:06:23.500025 2196 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 14:06:23.553542 kubelet[2196]: E0130 14:06:23.551338 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:23.569543 kubelet[2196]: E0130 14:06:23.566204 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:23.569543 kubelet[2196]: E0130 14:06:23.566368 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:23.569543 kubelet[2196]: E0130 14:06:23.566659 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:23.569543 kubelet[2196]: E0130 14:06:23.566752 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:23.577595 kubelet[2196]: E0130 14:06:23.577552 2196 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 14:06:23.577745 kubelet[2196]: E0130 
14:06:23.577715 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:23.654210 kubelet[2196]: E0130 14:06:23.652363 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:23.752956 kubelet[2196]: E0130 14:06:23.752795 2196 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:23.893245 kubelet[2196]: I0130 14:06:23.892337 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:23.907823 kubelet[2196]: E0130 14:06:23.906490 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:23.907823 kubelet[2196]: I0130 14:06:23.907732 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:23.910863 kubelet[2196]: E0130 14:06:23.910139 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:23.911006 kubelet[2196]: I0130 14:06:23.910921 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:23.914486 kubelet[2196]: E0130 14:06:23.914428 2196 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:24.487007 kubelet[2196]: I0130 14:06:24.486912 2196 apiserver.go:52] "Watching apiserver" Jan 30 14:06:24.492032 
kubelet[2196]: I0130 14:06:24.491340 2196 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:06:24.567229 kubelet[2196]: I0130 14:06:24.567179 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:24.567906 kubelet[2196]: I0130 14:06:24.567608 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:24.567940 kubelet[2196]: I0130 14:06:24.567881 2196 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:24.577840 kubelet[2196]: E0130 14:06:24.576549 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:24.580428 kubelet[2196]: E0130 14:06:24.580323 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:24.580428 kubelet[2196]: E0130 14:06:24.580326 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:25.563723 systemd[1]: Reloading requested from client PID 2473 ('systemctl') (unit session-5.scope)... Jan 30 14:06:25.563739 systemd[1]: Reloading... 
Jan 30 14:06:25.568135 kubelet[2196]: E0130 14:06:25.568095 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:25.568599 kubelet[2196]: E0130 14:06:25.568327 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:25.568667 kubelet[2196]: E0130 14:06:25.568641 2196 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:25.629550 zram_generator::config[2512]: No configuration found. Jan 30 14:06:25.760205 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 14:06:25.854983 systemd[1]: Reloading finished in 290 ms. Jan 30 14:06:25.897269 kubelet[2196]: I0130 14:06:25.897185 2196 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:06:25.897273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:25.909818 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 14:06:25.910173 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 14:06:25.924779 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 14:06:26.094754 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 14:06:26.102932 (kubelet)[2557]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 14:06:26.182280 kubelet[2557]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:06:26.182280 kubelet[2557]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 14:06:26.182280 kubelet[2557]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 14:06:26.182280 kubelet[2557]: I0130 14:06:26.182092 2557 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 14:06:26.196130 kubelet[2557]: I0130 14:06:26.196024 2557 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 14:06:26.196130 kubelet[2557]: I0130 14:06:26.196060 2557 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 14:06:26.196439 kubelet[2557]: I0130 14:06:26.196411 2557 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 14:06:26.198259 kubelet[2557]: I0130 14:06:26.198211 2557 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 14:06:26.201003 kubelet[2557]: I0130 14:06:26.200930 2557 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 14:06:26.216788 kubelet[2557]: E0130 14:06:26.214656 2557 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 14:06:26.216788 kubelet[2557]: I0130 14:06:26.214784 2557 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 14:06:26.223418 kubelet[2557]: I0130 14:06:26.222295 2557 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 14:06:26.223418 kubelet[2557]: I0130 14:06:26.222603 2557 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 14:06:26.223418 kubelet[2557]: I0130 14:06:26.222642 2557 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 14:06:26.223418 kubelet[2557]: I0130 14:06:26.222874 2557 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.222888 2557 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.222949 2557 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.223153 2557 kubelet.go:446] "Attempting 
to sync node with API server" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.223170 2557 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.223193 2557 kubelet.go:352] "Adding apiserver pod source" Jan 30 14:06:26.223776 kubelet[2557]: I0130 14:06:26.223207 2557 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 14:06:26.225608 kubelet[2557]: I0130 14:06:26.225494 2557 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 30 14:06:26.227059 kubelet[2557]: I0130 14:06:26.226157 2557 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 14:06:26.228132 kubelet[2557]: I0130 14:06:26.227850 2557 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 14:06:26.228132 kubelet[2557]: I0130 14:06:26.227888 2557 server.go:1287] "Started kubelet" Jan 30 14:06:26.230643 kubelet[2557]: I0130 14:06:26.230196 2557 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 14:06:26.230643 kubelet[2557]: I0130 14:06:26.230182 2557 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 14:06:26.232158 kubelet[2557]: I0130 14:06:26.231350 2557 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 14:06:26.232158 kubelet[2557]: I0130 14:06:26.231476 2557 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 14:06:26.236687 kubelet[2557]: I0130 14:06:26.234594 2557 server.go:490] "Adding debug handlers to kubelet server" Jan 30 14:06:26.240784 kubelet[2557]: I0130 14:06:26.240708 2557 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 14:06:26.244216 kubelet[2557]: I0130 14:06:26.244155 2557 
volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 14:06:26.245899 kubelet[2557]: E0130 14:06:26.245769 2557 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 14:06:26.248486 kubelet[2557]: I0130 14:06:26.248168 2557 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 14:06:26.253064 kubelet[2557]: I0130 14:06:26.249385 2557 reconciler.go:26] "Reconciler: start to sync state" Jan 30 14:06:26.253064 kubelet[2557]: I0130 14:06:26.251332 2557 factory.go:221] Registration of the containerd container factory successfully Jan 30 14:06:26.253064 kubelet[2557]: I0130 14:06:26.251347 2557 factory.go:221] Registration of the systemd container factory successfully Jan 30 14:06:26.253064 kubelet[2557]: I0130 14:06:26.251441 2557 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 14:06:26.253064 kubelet[2557]: E0130 14:06:26.252944 2557 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 14:06:26.269744 kubelet[2557]: I0130 14:06:26.269664 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 14:06:26.274641 kubelet[2557]: I0130 14:06:26.274575 2557 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 14:06:26.274641 kubelet[2557]: I0130 14:06:26.274642 2557 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 14:06:26.275099 kubelet[2557]: I0130 14:06:26.274668 2557 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 30 14:06:26.275099 kubelet[2557]: I0130 14:06:26.274680 2557 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 14:06:26.275099 kubelet[2557]: E0130 14:06:26.274745 2557 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 14:06:26.310537 kubelet[2557]: I0130 14:06:26.310483 2557 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 14:06:26.310724 kubelet[2557]: I0130 14:06:26.310698 2557 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 14:06:26.310765 kubelet[2557]: I0130 14:06:26.310747 2557 state_mem.go:36] "Initialized new in-memory state store" Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.310964 2557 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.311003 2557 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.311031 2557 policy_none.go:49] "None policy: Start" Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.311044 2557 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.311055 2557 state_mem.go:35] "Initializing new in-memory state store" Jan 30 14:06:26.312174 kubelet[2557]: I0130 14:06:26.311170 2557 state_mem.go:75] "Updated machine memory state" Jan 30 14:06:26.319532 kubelet[2557]: I0130 14:06:26.319431 2557 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 14:06:26.319730 kubelet[2557]: I0130 14:06:26.319705 2557 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 14:06:26.319827 kubelet[2557]: I0130 14:06:26.319729 2557 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 14:06:26.320066 kubelet[2557]: I0130 14:06:26.319996 2557 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 14:06:26.321903 kubelet[2557]: E0130 14:06:26.321878 2557 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 14:06:26.376282 kubelet[2557]: I0130 14:06:26.375956 2557 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:26.376282 kubelet[2557]: I0130 14:06:26.375961 2557 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:26.376282 kubelet[2557]: I0130 14:06:26.376057 2557 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.388378 kubelet[2557]: E0130 14:06:26.388325 2557 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.390589 kubelet[2557]: E0130 14:06:26.390503 2557 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:26.390729 kubelet[2557]: E0130 14:06:26.390698 2557 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:26.433143 kubelet[2557]: I0130 14:06:26.432978 2557 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 14:06:26.451258 kubelet[2557]: I0130 14:06:26.450744 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.451258 
kubelet[2557]: I0130 14:06:26.450802 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.451258 kubelet[2557]: I0130 14:06:26.450831 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 14:06:26.451258 kubelet[2557]: I0130 14:06:26.450853 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:26.451258 kubelet[2557]: I0130 14:06:26.450876 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.451590 kubelet[2557]: I0130 14:06:26.450892 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.451590 
kubelet[2557]: I0130 14:06:26.450927 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:26.451590 kubelet[2557]: I0130 14:06:26.450951 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51a8bfece0cffe121c2371f82888940c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51a8bfece0cffe121c2371f82888940c\") " pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:26.451590 kubelet[2557]: I0130 14:06:26.450971 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 14:06:26.472617 kubelet[2557]: I0130 14:06:26.472243 2557 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 14:06:26.472617 kubelet[2557]: I0130 14:06:26.472398 2557 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 14:06:26.689854 kubelet[2557]: E0130 14:06:26.689673 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:26.691083 kubelet[2557]: E0130 14:06:26.690965 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:26.691083 kubelet[2557]: E0130 14:06:26.691039 2557 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:27.225103 kubelet[2557]: I0130 14:06:27.225051 2557 apiserver.go:52] "Watching apiserver" Jan 30 14:06:27.249044 kubelet[2557]: I0130 14:06:27.249011 2557 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 14:06:27.295804 kubelet[2557]: E0130 14:06:27.295759 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:27.295931 kubelet[2557]: I0130 14:06:27.295913 2557 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:27.296365 kubelet[2557]: E0130 14:06:27.296321 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:27.389536 kubelet[2557]: I0130 14:06:27.389127 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.38911087 podStartE2EDuration="3.38911087s" podCreationTimestamp="2025-01-30 14:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:27.388856314 +0000 UTC m=+1.278869590" watchObservedRunningTime="2025-01-30 14:06:27.38911087 +0000 UTC m=+1.279124147" Jan 30 14:06:27.389536 kubelet[2557]: E0130 14:06:27.389306 2557 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 14:06:27.389536 kubelet[2557]: E0130 14:06:27.389453 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:27.495588 kubelet[2557]: I0130 14:06:27.495444 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.495424626 podStartE2EDuration="3.495424626s" podCreationTimestamp="2025-01-30 14:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:27.410266497 +0000 UTC m=+1.300279773" watchObservedRunningTime="2025-01-30 14:06:27.495424626 +0000 UTC m=+1.385437902" Jan 30 14:06:27.512985 kubelet[2557]: I0130 14:06:27.512895 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.512877981 podStartE2EDuration="3.512877981s" podCreationTimestamp="2025-01-30 14:06:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:27.495647231 +0000 UTC m=+1.385660507" watchObservedRunningTime="2025-01-30 14:06:27.512877981 +0000 UTC m=+1.402891257" Jan 30 14:06:27.801046 sudo[1645]: pam_unix(sudo:session): session closed for user root Jan 30 14:06:27.802452 sshd[1644]: Connection closed by 10.0.0.1 port 46436 Jan 30 14:06:27.802812 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Jan 30 14:06:27.805551 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:46436.service: Deactivated successfully. Jan 30 14:06:27.808110 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 14:06:27.808312 systemd[1]: session-5.scope: Consumed 3.676s CPU time, 151.2M memory peak, 0B memory swap peak. Jan 30 14:06:27.809000 systemd-logind[1488]: Session 5 logged out. Waiting for processes to exit. Jan 30 14:06:27.809841 systemd-logind[1488]: Removed session 5. 
Jan 30 14:06:28.296455 kubelet[2557]: E0130 14:06:28.296323 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:28.296855 kubelet[2557]: E0130 14:06:28.296557 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:29.297447 kubelet[2557]: E0130 14:06:29.297413 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:31.503033 kubelet[2557]: I0130 14:06:31.502991 2557 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 14:06:31.503738 kubelet[2557]: I0130 14:06:31.503415 2557 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 14:06:31.503774 containerd[1502]: time="2025-01-30T14:06:31.503261258Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 14:06:32.177854 systemd[1]: Created slice kubepods-besteffort-podfede4130_99a5_4b79_8789_779d3d27f265.slice - libcontainer container kubepods-besteffort-podfede4130_99a5_4b79_8789_779d3d27f265.slice. 
Jan 30 14:06:32.188427 kubelet[2557]: I0130 14:06:32.188393 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/551b8cb8-389f-4d5d-8cb0-291cf85a8125-flannel-cfg\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.188427 kubelet[2557]: I0130 14:06:32.188425 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/551b8cb8-389f-4d5d-8cb0-291cf85a8125-cni\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.188621 kubelet[2557]: I0130 14:06:32.188442 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmpz7\" (UniqueName: \"kubernetes.io/projected/551b8cb8-389f-4d5d-8cb0-291cf85a8125-kube-api-access-cmpz7\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.188621 kubelet[2557]: I0130 14:06:32.188457 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fede4130-99a5-4b79-8789-779d3d27f265-kube-proxy\") pod \"kube-proxy-p67nz\" (UID: \"fede4130-99a5-4b79-8789-779d3d27f265\") " pod="kube-system/kube-proxy-p67nz" Jan 30 14:06:32.188621 kubelet[2557]: I0130 14:06:32.188474 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5dk9\" (UniqueName: \"kubernetes.io/projected/fede4130-99a5-4b79-8789-779d3d27f265-kube-api-access-q5dk9\") pod \"kube-proxy-p67nz\" (UID: \"fede4130-99a5-4b79-8789-779d3d27f265\") " pod="kube-system/kube-proxy-p67nz" Jan 30 14:06:32.188621 kubelet[2557]: I0130 14:06:32.188487 
2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/551b8cb8-389f-4d5d-8cb0-291cf85a8125-run\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.188621 kubelet[2557]: I0130 14:06:32.188499 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/551b8cb8-389f-4d5d-8cb0-291cf85a8125-xtables-lock\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.188753 kubelet[2557]: I0130 14:06:32.188551 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fede4130-99a5-4b79-8789-779d3d27f265-xtables-lock\") pod \"kube-proxy-p67nz\" (UID: \"fede4130-99a5-4b79-8789-779d3d27f265\") " pod="kube-system/kube-proxy-p67nz" Jan 30 14:06:32.188753 kubelet[2557]: I0130 14:06:32.188565 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fede4130-99a5-4b79-8789-779d3d27f265-lib-modules\") pod \"kube-proxy-p67nz\" (UID: \"fede4130-99a5-4b79-8789-779d3d27f265\") " pod="kube-system/kube-proxy-p67nz" Jan 30 14:06:32.188753 kubelet[2557]: I0130 14:06:32.188579 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/551b8cb8-389f-4d5d-8cb0-291cf85a8125-cni-plugin\") pod \"kube-flannel-ds-shz68\" (UID: \"551b8cb8-389f-4d5d-8cb0-291cf85a8125\") " pod="kube-flannel/kube-flannel-ds-shz68" Jan 30 14:06:32.192070 systemd[1]: Created slice kubepods-burstable-pod551b8cb8_389f_4d5d_8cb0_291cf85a8125.slice - libcontainer container 
kubepods-burstable-pod551b8cb8_389f_4d5d_8cb0_291cf85a8125.slice. Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.292884 2557 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.292913 2557 projected.go:194] Error preparing data for projected volume kube-api-access-cmpz7 for pod kube-flannel/kube-flannel-ds-shz68: configmap "kube-root-ca.crt" not found Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.292960 2557 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/551b8cb8-389f-4d5d-8cb0-291cf85a8125-kube-api-access-cmpz7 podName:551b8cb8-389f-4d5d-8cb0-291cf85a8125 nodeName:}" failed. No retries permitted until 2025-01-30 14:06:32.792940998 +0000 UTC m=+6.682954274 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cmpz7" (UniqueName: "kubernetes.io/projected/551b8cb8-389f-4d5d-8cb0-291cf85a8125-kube-api-access-cmpz7") pod "kube-flannel-ds-shz68" (UID: "551b8cb8-389f-4d5d-8cb0-291cf85a8125") : configmap "kube-root-ca.crt" not found Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.292993 2557 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.293017 2557 projected.go:194] Error preparing data for projected volume kube-api-access-q5dk9 for pod kube-system/kube-proxy-p67nz: configmap "kube-root-ca.crt" not found Jan 30 14:06:32.293082 kubelet[2557]: E0130 14:06:32.293054 2557 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fede4130-99a5-4b79-8789-779d3d27f265-kube-api-access-q5dk9 podName:fede4130-99a5-4b79-8789-779d3d27f265 nodeName:}" failed. No retries permitted until 2025-01-30 14:06:32.793038951 +0000 UTC m=+6.683052227 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-q5dk9" (UniqueName: "kubernetes.io/projected/fede4130-99a5-4b79-8789-779d3d27f265-kube-api-access-q5dk9") pod "kube-proxy-p67nz" (UID: "fede4130-99a5-4b79-8789-779d3d27f265") : configmap "kube-root-ca.crt" not found Jan 30 14:06:33.089798 kubelet[2557]: E0130 14:06:33.089742 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.090445 containerd[1502]: time="2025-01-30T14:06:33.090406628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p67nz,Uid:fede4130-99a5-4b79-8789-779d3d27f265,Namespace:kube-system,Attempt:0,}" Jan 30 14:06:33.094160 kubelet[2557]: E0130 14:06:33.094132 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.094486 containerd[1502]: time="2025-01-30T14:06:33.094448766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-shz68,Uid:551b8cb8-389f-4d5d-8cb0-291cf85a8125,Namespace:kube-flannel,Attempt:0,}" Jan 30 14:06:33.117819 containerd[1502]: time="2025-01-30T14:06:33.117550999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:06:33.117819 containerd[1502]: time="2025-01-30T14:06:33.117612491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:06:33.117819 containerd[1502]: time="2025-01-30T14:06:33.117625470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:33.117819 containerd[1502]: time="2025-01-30T14:06:33.117711044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:33.128161 containerd[1502]: time="2025-01-30T14:06:33.128057077Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 14:06:33.128161 containerd[1502]: time="2025-01-30T14:06:33.128108768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 14:06:33.128161 containerd[1502]: time="2025-01-30T14:06:33.128123119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:33.128286 containerd[1502]: time="2025-01-30T14:06:33.128189422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 14:06:33.144740 systemd[1]: Started cri-containerd-391b91ec8e381c39ec2f4ca12fbfb7d2cf1fba3d681bd98eae84c293a89a416f.scope - libcontainer container 391b91ec8e381c39ec2f4ca12fbfb7d2cf1fba3d681bd98eae84c293a89a416f. Jan 30 14:06:33.147712 systemd[1]: Started cri-containerd-fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716.scope - libcontainer container fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716. 
Jan 30 14:06:33.171669 containerd[1502]: time="2025-01-30T14:06:33.171630709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p67nz,Uid:fede4130-99a5-4b79-8789-779d3d27f265,Namespace:kube-system,Attempt:0,} returns sandbox id \"391b91ec8e381c39ec2f4ca12fbfb7d2cf1fba3d681bd98eae84c293a89a416f\"" Jan 30 14:06:33.172470 kubelet[2557]: E0130 14:06:33.172444 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.175569 containerd[1502]: time="2025-01-30T14:06:33.174929043Z" level=info msg="CreateContainer within sandbox \"391b91ec8e381c39ec2f4ca12fbfb7d2cf1fba3d681bd98eae84c293a89a416f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 14:06:33.186882 containerd[1502]: time="2025-01-30T14:06:33.186828646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-shz68,Uid:551b8cb8-389f-4d5d-8cb0-291cf85a8125,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\"" Jan 30 14:06:33.187779 kubelet[2557]: E0130 14:06:33.187750 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.188765 containerd[1502]: time="2025-01-30T14:06:33.188667643Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 14:06:33.207116 containerd[1502]: time="2025-01-30T14:06:33.207048944Z" level=info msg="CreateContainer within sandbox \"391b91ec8e381c39ec2f4ca12fbfb7d2cf1fba3d681bd98eae84c293a89a416f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f54e5714d53bce5e3ef7934380e476952913dcec9f62f1616625d1d41d2b917\"" Jan 30 14:06:33.207720 containerd[1502]: time="2025-01-30T14:06:33.207684806Z" level=info msg="StartContainer for 
\"3f54e5714d53bce5e3ef7934380e476952913dcec9f62f1616625d1d41d2b917\"" Jan 30 14:06:33.236179 kubelet[2557]: E0130 14:06:33.236129 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.239820 systemd[1]: Started cri-containerd-3f54e5714d53bce5e3ef7934380e476952913dcec9f62f1616625d1d41d2b917.scope - libcontainer container 3f54e5714d53bce5e3ef7934380e476952913dcec9f62f1616625d1d41d2b917. Jan 30 14:06:33.274756 containerd[1502]: time="2025-01-30T14:06:33.274712270Z" level=info msg="StartContainer for \"3f54e5714d53bce5e3ef7934380e476952913dcec9f62f1616625d1d41d2b917\" returns successfully" Jan 30 14:06:33.305236 kubelet[2557]: E0130 14:06:33.305210 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.305795 kubelet[2557]: E0130 14:06:33.305763 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:33.315163 kubelet[2557]: I0130 14:06:33.314996 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p67nz" podStartSLOduration=1.314939666 podStartE2EDuration="1.314939666s" podCreationTimestamp="2025-01-30 14:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:33.314652907 +0000 UTC m=+7.204666183" watchObservedRunningTime="2025-01-30 14:06:33.314939666 +0000 UTC m=+7.204952942" Jan 30 14:06:34.306319 kubelet[2557]: E0130 14:06:34.306288 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 
30 14:06:34.864497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971301380.mount: Deactivated successfully. Jan 30 14:06:34.918415 containerd[1502]: time="2025-01-30T14:06:34.918360236Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:34.919237 containerd[1502]: time="2025-01-30T14:06:34.919195315Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 30 14:06:34.920305 containerd[1502]: time="2025-01-30T14:06:34.920267874Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:34.922473 containerd[1502]: time="2025-01-30T14:06:34.922439797Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:34.923137 containerd[1502]: time="2025-01-30T14:06:34.923108960Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.734412435s" Jan 30 14:06:34.923137 containerd[1502]: time="2025-01-30T14:06:34.923132330Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 14:06:34.924651 containerd[1502]: time="2025-01-30T14:06:34.924625870Z" level=info msg="CreateContainer within sandbox 
\"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 14:06:34.937431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount680649083.mount: Deactivated successfully. Jan 30 14:06:34.939167 containerd[1502]: time="2025-01-30T14:06:34.939132390Z" level=info msg="CreateContainer within sandbox \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec\"" Jan 30 14:06:34.939620 containerd[1502]: time="2025-01-30T14:06:34.939596424Z" level=info msg="StartContainer for \"941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec\"" Jan 30 14:06:34.977655 systemd[1]: Started cri-containerd-941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec.scope - libcontainer container 941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec. Jan 30 14:06:35.002295 systemd[1]: cri-containerd-941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec.scope: Deactivated successfully. 
Jan 30 14:06:35.003739 containerd[1502]: time="2025-01-30T14:06:35.003632065Z" level=info msg="StartContainer for \"941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec\" returns successfully" Jan 30 14:06:35.059546 containerd[1502]: time="2025-01-30T14:06:35.059306534Z" level=info msg="shim disconnected" id=941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec namespace=k8s.io Jan 30 14:06:35.059546 containerd[1502]: time="2025-01-30T14:06:35.059366031Z" level=warning msg="cleaning up after shim disconnected" id=941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec namespace=k8s.io Jan 30 14:06:35.059546 containerd[1502]: time="2025-01-30T14:06:35.059374258Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 14:06:35.309944 kubelet[2557]: E0130 14:06:35.309814 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:35.310933 containerd[1502]: time="2025-01-30T14:06:35.310892547Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 14:06:35.408745 kubelet[2557]: E0130 14:06:35.408699 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:35.868793 update_engine[1494]: I20250130 14:06:35.868712 1494 update_attempter.cc:509] Updating boot flags... Jan 30 14:06:35.908537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2940) Jan 30 14:06:35.936629 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-941a4a87af0b27bbe6913e8e2673f38bd5c512da986837de2b6770893e5508ec-rootfs.mount: Deactivated successfully. 
Jan 30 14:06:35.943586 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2942) Jan 30 14:06:35.978745 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2942) Jan 30 14:06:36.311142 kubelet[2557]: E0130 14:06:36.311046 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:37.159431 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2902227792.mount: Deactivated successfully. Jan 30 14:06:37.873729 kubelet[2557]: E0130 14:06:37.873433 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 14:06:38.107316 containerd[1502]: time="2025-01-30T14:06:38.107258986Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:38.108175 containerd[1502]: time="2025-01-30T14:06:38.108118276Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 30 14:06:38.109321 containerd[1502]: time="2025-01-30T14:06:38.109290470Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:38.112348 containerd[1502]: time="2025-01-30T14:06:38.112316426Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 14:06:38.115526 containerd[1502]: time="2025-01-30T14:06:38.113429337Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id 
\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.802493489s" Jan 30 14:06:38.115526 containerd[1502]: time="2025-01-30T14:06:38.113463028Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 14:06:38.118736 containerd[1502]: time="2025-01-30T14:06:38.118688189Z" level=info msg="CreateContainer within sandbox \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 14:06:38.133540 containerd[1502]: time="2025-01-30T14:06:38.133434268Z" level=info msg="CreateContainer within sandbox \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf\"" Jan 30 14:06:38.134112 containerd[1502]: time="2025-01-30T14:06:38.133945197Z" level=info msg="StartContainer for \"7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf\"" Jan 30 14:06:38.163654 systemd[1]: Started cri-containerd-7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf.scope - libcontainer container 7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf. Jan 30 14:06:38.190075 systemd[1]: cri-containerd-7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf.scope: Deactivated successfully. 
Jan 30 14:06:38.192359 containerd[1502]: time="2025-01-30T14:06:38.192319133Z" level=info msg="StartContainer for \"7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf\" returns successfully"
Jan 30 14:06:38.209688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf-rootfs.mount: Deactivated successfully.
Jan 30 14:06:38.248476 kubelet[2557]: I0130 14:06:38.248441 2557 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 30 14:06:38.315616 kubelet[2557]: E0130 14:06:38.315573 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:38.315759 kubelet[2557]: E0130 14:06:38.315706 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:38.347334 containerd[1502]: time="2025-01-30T14:06:38.346880134Z" level=info msg="shim disconnected" id=7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf namespace=k8s.io
Jan 30 14:06:38.347334 containerd[1502]: time="2025-01-30T14:06:38.346940290Z" level=warning msg="cleaning up after shim disconnected" id=7ccf27e106ff4e343de1319f67e49cc0e7bcd32db2102ba6aa0af0b79e388cbf namespace=k8s.io
Jan 30 14:06:38.347334 containerd[1502]: time="2025-01-30T14:06:38.346959440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 14:06:38.353831 systemd[1]: Created slice kubepods-burstable-pod67928fc6_5139_42ab_a569_4cc5bbdde285.slice - libcontainer container kubepods-burstable-pod67928fc6_5139_42ab_a569_4cc5bbdde285.slice.
Jan 30 14:06:38.361582 systemd[1]: Created slice kubepods-burstable-podd059ebb8_9400_495d_a8c9_2d11184eb222.slice - libcontainer container kubepods-burstable-podd059ebb8_9400_495d_a8c9_2d11184eb222.slice.
Jan 30 14:06:38.433552 kubelet[2557]: I0130 14:06:38.433370 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67928fc6-5139-42ab-a569-4cc5bbdde285-config-volume\") pod \"coredns-668d6bf9bc-nfn6z\" (UID: \"67928fc6-5139-42ab-a569-4cc5bbdde285\") " pod="kube-system/coredns-668d6bf9bc-nfn6z"
Jan 30 14:06:38.433552 kubelet[2557]: I0130 14:06:38.433428 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d059ebb8-9400-495d-a8c9-2d11184eb222-config-volume\") pod \"coredns-668d6bf9bc-vdxgz\" (UID: \"d059ebb8-9400-495d-a8c9-2d11184eb222\") " pod="kube-system/coredns-668d6bf9bc-vdxgz"
Jan 30 14:06:38.433552 kubelet[2557]: I0130 14:06:38.433447 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6knnw\" (UniqueName: \"kubernetes.io/projected/d059ebb8-9400-495d-a8c9-2d11184eb222-kube-api-access-6knnw\") pod \"coredns-668d6bf9bc-vdxgz\" (UID: \"d059ebb8-9400-495d-a8c9-2d11184eb222\") " pod="kube-system/coredns-668d6bf9bc-vdxgz"
Jan 30 14:06:38.433552 kubelet[2557]: I0130 14:06:38.433470 2557 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjz5z\" (UniqueName: \"kubernetes.io/projected/67928fc6-5139-42ab-a569-4cc5bbdde285-kube-api-access-cjz5z\") pod \"coredns-668d6bf9bc-nfn6z\" (UID: \"67928fc6-5139-42ab-a569-4cc5bbdde285\") " pod="kube-system/coredns-668d6bf9bc-nfn6z"
Jan 30 14:06:38.659346 kubelet[2557]: E0130 14:06:38.659291 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:38.659912 containerd[1502]: time="2025-01-30T14:06:38.659872383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nfn6z,Uid:67928fc6-5139-42ab-a569-4cc5bbdde285,Namespace:kube-system,Attempt:0,}"
Jan 30 14:06:38.668127 kubelet[2557]: E0130 14:06:38.668080 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:38.668627 containerd[1502]: time="2025-01-30T14:06:38.668578333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vdxgz,Uid:d059ebb8-9400-495d-a8c9-2d11184eb222,Namespace:kube-system,Attempt:0,}"
Jan 30 14:06:38.697837 containerd[1502]: time="2025-01-30T14:06:38.697716449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nfn6z,Uid:67928fc6-5139-42ab-a569-4cc5bbdde285,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e4f6eef8cc577ab01d6f61aca2970a50e04974c4a6f21156e6408a794e6fc649\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:06:38.698117 kubelet[2557]: E0130 14:06:38.698081 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f6eef8cc577ab01d6f61aca2970a50e04974c4a6f21156e6408a794e6fc649\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:06:38.698187 kubelet[2557]: E0130 14:06:38.698160 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f6eef8cc577ab01d6f61aca2970a50e04974c4a6f21156e6408a794e6fc649\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nfn6z"
Jan 30 14:06:38.698212 kubelet[2557]: E0130 14:06:38.698189 2557 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4f6eef8cc577ab01d6f61aca2970a50e04974c4a6f21156e6408a794e6fc649\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-nfn6z"
Jan 30 14:06:38.698288 kubelet[2557]: E0130 14:06:38.698260 2557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nfn6z_kube-system(67928fc6-5139-42ab-a569-4cc5bbdde285)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nfn6z_kube-system(67928fc6-5139-42ab-a569-4cc5bbdde285)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4f6eef8cc577ab01d6f61aca2970a50e04974c4a6f21156e6408a794e6fc649\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-nfn6z" podUID="67928fc6-5139-42ab-a569-4cc5bbdde285"
Jan 30 14:06:38.702845 containerd[1502]: time="2025-01-30T14:06:38.702792999Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vdxgz,Uid:d059ebb8-9400-495d-a8c9-2d11184eb222,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"880ce62c6414afa3a98de19f0b437c87c92a8577a7e23c27c1f66a3e9ebe25c1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:06:38.703036 kubelet[2557]: E0130 14:06:38.702991 2557 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880ce62c6414afa3a98de19f0b437c87c92a8577a7e23c27c1f66a3e9ebe25c1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Jan 30 14:06:38.703077 kubelet[2557]: E0130 14:06:38.703054 2557 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880ce62c6414afa3a98de19f0b437c87c92a8577a7e23c27c1f66a3e9ebe25c1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vdxgz"
Jan 30 14:06:38.703103 kubelet[2557]: E0130 14:06:38.703079 2557 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880ce62c6414afa3a98de19f0b437c87c92a8577a7e23c27c1f66a3e9ebe25c1\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vdxgz"
Jan 30 14:06:38.703151 kubelet[2557]: E0130 14:06:38.703126 2557 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vdxgz_kube-system(d059ebb8-9400-495d-a8c9-2d11184eb222)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vdxgz_kube-system(d059ebb8-9400-495d-a8c9-2d11184eb222)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"880ce62c6414afa3a98de19f0b437c87c92a8577a7e23c27c1f66a3e9ebe25c1\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-vdxgz" podUID="d059ebb8-9400-495d-a8c9-2d11184eb222"
Jan 30 14:06:39.318529 kubelet[2557]: E0130 14:06:39.318474 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:39.320384 containerd[1502]: time="2025-01-30T14:06:39.320344027Z" level=info msg="CreateContainer within sandbox \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Jan 30 14:06:39.334247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3837389004.mount: Deactivated successfully.
Jan 30 14:06:39.336429 containerd[1502]: time="2025-01-30T14:06:39.336386295Z" level=info msg="CreateContainer within sandbox \"fb355d9519a7eadbeda93d50415eb52ef79205b2b78eb80991955a358cbf5716\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"636df9dbb8ca534874ac06f4888bf236ed638026029a07a8af06b54e72d959bc\""
Jan 30 14:06:39.337041 containerd[1502]: time="2025-01-30T14:06:39.336981815Z" level=info msg="StartContainer for \"636df9dbb8ca534874ac06f4888bf236ed638026029a07a8af06b54e72d959bc\""
Jan 30 14:06:39.365726 systemd[1]: Started cri-containerd-636df9dbb8ca534874ac06f4888bf236ed638026029a07a8af06b54e72d959bc.scope - libcontainer container 636df9dbb8ca534874ac06f4888bf236ed638026029a07a8af06b54e72d959bc.
Jan 30 14:06:39.398876 containerd[1502]: time="2025-01-30T14:06:39.398830862Z" level=info msg="StartContainer for \"636df9dbb8ca534874ac06f4888bf236ed638026029a07a8af06b54e72d959bc\" returns successfully"
Jan 30 14:06:40.322148 kubelet[2557]: E0130 14:06:40.322120 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:40.365086 kubelet[2557]: I0130 14:06:40.365013 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-shz68" podStartSLOduration=3.435871556 podStartE2EDuration="8.364992601s" podCreationTimestamp="2025-01-30 14:06:32 +0000 UTC" firstStartedPulling="2025-01-30 14:06:33.188157963 +0000 UTC m=+7.078171239" lastFinishedPulling="2025-01-30 14:06:38.117279008 +0000 UTC m=+12.007292284" observedRunningTime="2025-01-30 14:06:40.364912225 +0000 UTC m=+14.254925501" watchObservedRunningTime="2025-01-30 14:06:40.364992601 +0000 UTC m=+14.255005877"
Jan 30 14:06:40.440094 systemd-networkd[1421]: flannel.1: Link UP
Jan 30 14:06:40.440106 systemd-networkd[1421]: flannel.1: Gained carrier
Jan 30 14:06:41.323678 kubelet[2557]: E0130 14:06:41.323643 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:42.047666 systemd-networkd[1421]: flannel.1: Gained IPv6LL
Jan 30 14:06:49.275224 kubelet[2557]: E0130 14:06:49.275179 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:49.275737 containerd[1502]: time="2025-01-30T14:06:49.275613957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vdxgz,Uid:d059ebb8-9400-495d-a8c9-2d11184eb222,Namespace:kube-system,Attempt:0,}"
Jan 30 14:06:49.347122 systemd-networkd[1421]: cni0: Link UP
Jan 30 14:06:49.347134 systemd-networkd[1421]: cni0: Gained carrier
Jan 30 14:06:49.351116 systemd-networkd[1421]: cni0: Lost carrier
Jan 30 14:06:49.356376 systemd-networkd[1421]: veth2f874219: Link UP
Jan 30 14:06:49.358798 kernel: cni0: port 1(veth2f874219) entered blocking state
Jan 30 14:06:49.358905 kernel: cni0: port 1(veth2f874219) entered disabled state
Jan 30 14:06:49.358932 kernel: veth2f874219: entered allmulticast mode
Jan 30 14:06:49.360056 kernel: veth2f874219: entered promiscuous mode
Jan 30 14:06:49.360994 kernel: cni0: port 1(veth2f874219) entered blocking state
Jan 30 14:06:49.361040 kernel: cni0: port 1(veth2f874219) entered forwarding state
Jan 30 14:06:49.363135 kernel: cni0: port 1(veth2f874219) entered disabled state
Jan 30 14:06:49.372386 kernel: cni0: port 1(veth2f874219) entered blocking state
Jan 30 14:06:49.372477 kernel: cni0: port 1(veth2f874219) entered forwarding state
Jan 30 14:06:49.372622 systemd-networkd[1421]: veth2f874219: Gained carrier
Jan 30 14:06:49.373201 systemd-networkd[1421]: cni0: Gained carrier
Jan 30 14:06:49.375820 containerd[1502]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000024938), "name":"cbr0", "type":"bridge"}
Jan 30 14:06:49.375820 containerd[1502]: delegateAdd: netconf sent to delegate plugin:
Jan 30 14:06:49.399674 containerd[1502]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 30 14:06:49.399674 containerd[1502]: time="2025-01-30T14:06:49.399592343Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:06:49.399674 containerd[1502]: time="2025-01-30T14:06:49.399646491Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:06:49.399674 containerd[1502]: time="2025-01-30T14:06:49.399656571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:06:49.399988 containerd[1502]: time="2025-01-30T14:06:49.399724709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:06:49.424730 systemd[1]: Started cri-containerd-f1c8333d8297b9945d2e90ceca71cdbeec7d0a366f9caae17a2d2ff94f1b9736.scope - libcontainer container f1c8333d8297b9945d2e90ceca71cdbeec7d0a366f9caae17a2d2ff94f1b9736.
Jan 30 14:06:49.435767 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 14:06:49.457883 containerd[1502]: time="2025-01-30T14:06:49.457838691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vdxgz,Uid:d059ebb8-9400-495d-a8c9-2d11184eb222,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1c8333d8297b9945d2e90ceca71cdbeec7d0a366f9caae17a2d2ff94f1b9736\""
Jan 30 14:06:49.458359 kubelet[2557]: E0130 14:06:49.458330 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:49.459812 containerd[1502]: time="2025-01-30T14:06:49.459783941Z" level=info msg="CreateContainer within sandbox \"f1c8333d8297b9945d2e90ceca71cdbeec7d0a366f9caae17a2d2ff94f1b9736\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:06:49.474488 containerd[1502]: time="2025-01-30T14:06:49.474452120Z" level=info msg="CreateContainer within sandbox \"f1c8333d8297b9945d2e90ceca71cdbeec7d0a366f9caae17a2d2ff94f1b9736\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e78a1169854cc827c5d200e6bf04068c07d4e1c27fee82e5800e787e3f96fca4\""
Jan 30 14:06:49.474950 containerd[1502]: time="2025-01-30T14:06:49.474929309Z" level=info msg="StartContainer for \"e78a1169854cc827c5d200e6bf04068c07d4e1c27fee82e5800e787e3f96fca4\""
Jan 30 14:06:49.506684 systemd[1]: Started cri-containerd-e78a1169854cc827c5d200e6bf04068c07d4e1c27fee82e5800e787e3f96fca4.scope - libcontainer container e78a1169854cc827c5d200e6bf04068c07d4e1c27fee82e5800e787e3f96fca4.
Jan 30 14:06:49.534327 containerd[1502]: time="2025-01-30T14:06:49.533529388Z" level=info msg="StartContainer for \"e78a1169854cc827c5d200e6bf04068c07d4e1c27fee82e5800e787e3f96fca4\" returns successfully"
Jan 30 14:06:50.336738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3963446938.mount: Deactivated successfully.
Jan 30 14:06:50.339280 kubelet[2557]: E0130 14:06:50.339250 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:50.484984 kubelet[2557]: I0130 14:06:50.484906 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vdxgz" podStartSLOduration=18.484885604 podStartE2EDuration="18.484885604s" podCreationTimestamp="2025-01-30 14:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:50.484645271 +0000 UTC m=+24.374658548" watchObservedRunningTime="2025-01-30 14:06:50.484885604 +0000 UTC m=+24.374898880"
Jan 30 14:06:51.007699 systemd-networkd[1421]: cni0: Gained IPv6LL
Jan 30 14:06:51.327685 systemd-networkd[1421]: veth2f874219: Gained IPv6LL
Jan 30 14:06:51.340733 kubelet[2557]: E0130 14:06:51.340705 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:52.341532 kubelet[2557]: E0130 14:06:52.341473 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:53.141165 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon (10.0.0.1:43766).
Jan 30 14:06:53.186168 sshd[3390]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:06:53.187645 sshd-session[3390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:53.191072 systemd-logind[1488]: New session 6 of user core.
Jan 30 14:06:53.197633 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 14:06:53.275805 kubelet[2557]: E0130 14:06:53.275364 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:53.276033 containerd[1502]: time="2025-01-30T14:06:53.275742781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nfn6z,Uid:67928fc6-5139-42ab-a569-4cc5bbdde285,Namespace:kube-system,Attempt:0,}"
Jan 30 14:06:53.371401 sshd[3392]: Connection closed by 10.0.0.1 port 43766
Jan 30 14:06:53.371779 sshd-session[3390]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:53.375171 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:43766.service: Deactivated successfully.
Jan 30 14:06:53.376995 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 14:06:53.377654 systemd-logind[1488]: Session 6 logged out. Waiting for processes to exit.
Jan 30 14:06:53.378447 systemd-logind[1488]: Removed session 6.
Jan 30 14:06:53.423756 systemd-networkd[1421]: veth16ec2858: Link UP
Jan 30 14:06:53.426286 kernel: cni0: port 2(veth16ec2858) entered blocking state
Jan 30 14:06:53.426344 kernel: cni0: port 2(veth16ec2858) entered disabled state
Jan 30 14:06:53.426371 kernel: veth16ec2858: entered allmulticast mode
Jan 30 14:06:53.427675 kernel: veth16ec2858: entered promiscuous mode
Jan 30 14:06:53.434354 kernel: cni0: port 2(veth16ec2858) entered blocking state
Jan 30 14:06:53.434416 kernel: cni0: port 2(veth16ec2858) entered forwarding state
Jan 30 14:06:53.433782 systemd-networkd[1421]: veth16ec2858: Gained carrier
Jan 30 14:06:53.437147 containerd[1502]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001268e8), "name":"cbr0", "type":"bridge"}
Jan 30 14:06:53.437147 containerd[1502]: delegateAdd: netconf sent to delegate plugin:
Jan 30 14:06:53.458282 containerd[1502]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}
Jan 30 14:06:53.458282 containerd[1502]: time="2025-01-30T14:06:53.458204172Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 14:06:53.458437 containerd[1502]: time="2025-01-30T14:06:53.458259363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 14:06:53.458437 containerd[1502]: time="2025-01-30T14:06:53.458273620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:06:53.458437 containerd[1502]: time="2025-01-30T14:06:53.458355504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 14:06:53.482632 systemd[1]: Started cri-containerd-05968202ac2bade02e64044ecb6155b661fdc61148bf13bf189c5583ae13ee20.scope - libcontainer container 05968202ac2bade02e64044ecb6155b661fdc61148bf13bf189c5583ae13ee20.
Jan 30 14:06:53.494746 systemd-resolved[1335]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 14:06:53.519992 containerd[1502]: time="2025-01-30T14:06:53.519938797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nfn6z,Uid:67928fc6-5139-42ab-a569-4cc5bbdde285,Namespace:kube-system,Attempt:0,} returns sandbox id \"05968202ac2bade02e64044ecb6155b661fdc61148bf13bf189c5583ae13ee20\""
Jan 30 14:06:53.520570 kubelet[2557]: E0130 14:06:53.520543 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:53.522023 containerd[1502]: time="2025-01-30T14:06:53.521981386Z" level=info msg="CreateContainer within sandbox \"05968202ac2bade02e64044ecb6155b661fdc61148bf13bf189c5583ae13ee20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 14:06:53.536208 containerd[1502]: time="2025-01-30T14:06:53.536169930Z" level=info msg="CreateContainer within sandbox \"05968202ac2bade02e64044ecb6155b661fdc61148bf13bf189c5583ae13ee20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49f33d6c155827de2f678fa00a6827871594cfe672981a0febb697fad567a615\""
Jan 30 14:06:53.536637 containerd[1502]: time="2025-01-30T14:06:53.536607391Z" level=info msg="StartContainer for \"49f33d6c155827de2f678fa00a6827871594cfe672981a0febb697fad567a615\""
Jan 30 14:06:53.566660 systemd[1]: Started cri-containerd-49f33d6c155827de2f678fa00a6827871594cfe672981a0febb697fad567a615.scope - libcontainer container 49f33d6c155827de2f678fa00a6827871594cfe672981a0febb697fad567a615.
Jan 30 14:06:53.595592 containerd[1502]: time="2025-01-30T14:06:53.595549223Z" level=info msg="StartContainer for \"49f33d6c155827de2f678fa00a6827871594cfe672981a0febb697fad567a615\" returns successfully"
Jan 30 14:06:54.347106 kubelet[2557]: E0130 14:06:54.347071 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:54.416895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1556633557.mount: Deactivated successfully.
Jan 30 14:06:54.527639 systemd-networkd[1421]: veth16ec2858: Gained IPv6LL
Jan 30 14:06:54.544611 kubelet[2557]: I0130 14:06:54.544549 2557 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nfn6z" podStartSLOduration=22.544526917 podStartE2EDuration="22.544526917s" podCreationTimestamp="2025-01-30 14:06:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 14:06:54.535923322 +0000 UTC m=+28.425936618" watchObservedRunningTime="2025-01-30 14:06:54.544526917 +0000 UTC m=+28.434540213"
Jan 30 14:06:55.348612 kubelet[2557]: E0130 14:06:55.348565 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:56.350038 kubelet[2557]: E0130 14:06:56.350004 2557 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 14:06:58.386725 systemd[1]: Started sshd@7-10.0.0.12:22-10.0.0.1:49326.service - OpenSSH per-connection server daemon (10.0.0.1:49326).
Jan 30 14:06:58.425306 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 49326 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:06:58.426878 sshd-session[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:06:58.430762 systemd-logind[1488]: New session 7 of user core.
Jan 30 14:06:58.438701 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 14:06:58.545115 sshd[3558]: Connection closed by 10.0.0.1 port 49326
Jan 30 14:06:58.545523 sshd-session[3556]: pam_unix(sshd:session): session closed for user core
Jan 30 14:06:58.549444 systemd[1]: sshd@7-10.0.0.12:22-10.0.0.1:49326.service: Deactivated successfully.
Jan 30 14:06:58.551660 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 14:06:58.552387 systemd-logind[1488]: Session 7 logged out. Waiting for processes to exit.
Jan 30 14:06:58.553384 systemd-logind[1488]: Removed session 7.
Jan 30 14:07:03.562338 systemd[1]: Started sshd@8-10.0.0.12:22-10.0.0.1:49330.service - OpenSSH per-connection server daemon (10.0.0.1:49330).
Jan 30 14:07:03.602711 sshd[3595]: Accepted publickey for core from 10.0.0.1 port 49330 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:07:03.604196 sshd-session[3595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:07:03.607912 systemd-logind[1488]: New session 8 of user core.
Jan 30 14:07:03.618619 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 14:07:03.723253 sshd[3597]: Connection closed by 10.0.0.1 port 49330
Jan 30 14:07:03.723664 sshd-session[3595]: pam_unix(sshd:session): session closed for user core
Jan 30 14:07:03.727379 systemd[1]: sshd@8-10.0.0.12:22-10.0.0.1:49330.service: Deactivated successfully.
Jan 30 14:07:03.729325 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 14:07:03.729993 systemd-logind[1488]: Session 8 logged out. Waiting for processes to exit.
Jan 30 14:07:03.730941 systemd-logind[1488]: Removed session 8.
Jan 30 14:07:08.737620 systemd[1]: Started sshd@9-10.0.0.12:22-10.0.0.1:53030.service - OpenSSH per-connection server daemon (10.0.0.1:53030).
Jan 30 14:07:08.776384 sshd[3631]: Accepted publickey for core from 10.0.0.1 port 53030 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:07:08.777962 sshd-session[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:07:08.782228 systemd-logind[1488]: New session 9 of user core.
Jan 30 14:07:08.791655 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 14:07:08.900589 sshd[3633]: Connection closed by 10.0.0.1 port 53030
Jan 30 14:07:08.901627 sshd-session[3631]: pam_unix(sshd:session): session closed for user core
Jan 30 14:07:08.912548 systemd[1]: sshd@9-10.0.0.12:22-10.0.0.1:53030.service: Deactivated successfully.
Jan 30 14:07:08.914494 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 14:07:08.916643 systemd-logind[1488]: Session 9 logged out. Waiting for processes to exit.
Jan 30 14:07:08.924774 systemd[1]: Started sshd@10-10.0.0.12:22-10.0.0.1:53038.service - OpenSSH per-connection server daemon (10.0.0.1:53038).
Jan 30 14:07:08.925627 systemd-logind[1488]: Removed session 9.
Jan 30 14:07:08.959928 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 53038 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:07:08.961435 sshd-session[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:07:08.965452 systemd-logind[1488]: New session 10 of user core.
Jan 30 14:07:08.972626 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 14:07:09.226241 sshd[3648]: Connection closed by 10.0.0.1 port 53038
Jan 30 14:07:09.226696 sshd-session[3646]: pam_unix(sshd:session): session closed for user core
Jan 30 14:07:09.246479 systemd[1]: sshd@10-10.0.0.12:22-10.0.0.1:53038.service: Deactivated successfully.
Jan 30 14:07:09.248634 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 14:07:09.250696 systemd-logind[1488]: Session 10 logged out. Waiting for processes to exit.
Jan 30 14:07:09.260952 systemd[1]: Started sshd@11-10.0.0.12:22-10.0.0.1:53046.service - OpenSSH per-connection server daemon (10.0.0.1:53046).
Jan 30 14:07:09.261876 systemd-logind[1488]: Removed session 10.
Jan 30 14:07:09.298913 sshd[3658]: Accepted publickey for core from 10.0.0.1 port 53046 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc
Jan 30 14:07:09.300540 sshd-session[3658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 14:07:09.305694 systemd-logind[1488]: New session 11 of user core.
Jan 30 14:07:09.314723 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 14:07:09.549234 sshd[3660]: Connection closed by 10.0.0.1 port 53046
Jan 30 14:07:09.549520 sshd-session[3658]: pam_unix(sshd:session): session closed for user core
Jan 30 14:07:09.553728 systemd[1]: sshd@11-10.0.0.12:22-10.0.0.1:53046.service: Deactivated successfully.
Jan 30 14:07:09.555815 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 14:07:09.556588 systemd-logind[1488]: Session 11 logged out. Waiting for processes to exit.
Jan 30 14:07:09.557400 systemd-logind[1488]: Removed session 11. Jan 30 14:07:14.561323 systemd[1]: Started sshd@12-10.0.0.12:22-10.0.0.1:53056.service - OpenSSH per-connection server daemon (10.0.0.1:53056). Jan 30 14:07:14.598107 sshd[3693]: Accepted publickey for core from 10.0.0.1 port 53056 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:14.599479 sshd-session[3693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:14.603063 systemd-logind[1488]: New session 12 of user core. Jan 30 14:07:14.619629 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 14:07:14.722907 sshd[3695]: Connection closed by 10.0.0.1 port 53056 Jan 30 14:07:14.723252 sshd-session[3693]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:14.731150 systemd[1]: sshd@12-10.0.0.12:22-10.0.0.1:53056.service: Deactivated successfully. Jan 30 14:07:14.732831 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 14:07:14.734567 systemd-logind[1488]: Session 12 logged out. Waiting for processes to exit. Jan 30 14:07:14.742011 systemd[1]: Started sshd@13-10.0.0.12:22-10.0.0.1:53068.service - OpenSSH per-connection server daemon (10.0.0.1:53068). Jan 30 14:07:14.743610 systemd-logind[1488]: Removed session 12. Jan 30 14:07:14.775648 sshd[3707]: Accepted publickey for core from 10.0.0.1 port 53068 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:14.777141 sshd-session[3707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:14.781377 systemd-logind[1488]: New session 13 of user core. Jan 30 14:07:14.786620 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 30 14:07:14.940608 sshd[3709]: Connection closed by 10.0.0.1 port 53068 Jan 30 14:07:14.940902 sshd-session[3707]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:14.953247 systemd[1]: sshd@13-10.0.0.12:22-10.0.0.1:53068.service: Deactivated successfully. Jan 30 14:07:14.954983 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 14:07:14.956807 systemd-logind[1488]: Session 13 logged out. Waiting for processes to exit. Jan 30 14:07:14.964733 systemd[1]: Started sshd@14-10.0.0.12:22-10.0.0.1:53080.service - OpenSSH per-connection server daemon (10.0.0.1:53080). Jan 30 14:07:14.965561 systemd-logind[1488]: Removed session 13. Jan 30 14:07:15.001052 sshd[3720]: Accepted publickey for core from 10.0.0.1 port 53080 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:15.002554 sshd-session[3720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:15.006495 systemd-logind[1488]: New session 14 of user core. Jan 30 14:07:15.018684 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 14:07:15.895959 sshd[3722]: Connection closed by 10.0.0.1 port 53080 Jan 30 14:07:15.901244 sshd-session[3720]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:15.911667 systemd[1]: sshd@14-10.0.0.12:22-10.0.0.1:53080.service: Deactivated successfully. Jan 30 14:07:15.914097 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 14:07:15.914924 systemd-logind[1488]: Session 14 logged out. Waiting for processes to exit. Jan 30 14:07:15.922943 systemd[1]: Started sshd@15-10.0.0.12:22-10.0.0.1:53088.service - OpenSSH per-connection server daemon (10.0.0.1:53088). Jan 30 14:07:15.923997 systemd-logind[1488]: Removed session 14. 
Jan 30 14:07:15.961197 sshd[3761]: Accepted publickey for core from 10.0.0.1 port 53088 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:15.962692 sshd-session[3761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:15.966623 systemd-logind[1488]: New session 15 of user core. Jan 30 14:07:15.976715 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 14:07:16.184015 sshd[3763]: Connection closed by 10.0.0.1 port 53088 Jan 30 14:07:16.185235 sshd-session[3761]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:16.197928 systemd[1]: sshd@15-10.0.0.12:22-10.0.0.1:53088.service: Deactivated successfully. Jan 30 14:07:16.199969 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 14:07:16.201734 systemd-logind[1488]: Session 15 logged out. Waiting for processes to exit. Jan 30 14:07:16.211751 systemd[1]: Started sshd@16-10.0.0.12:22-10.0.0.1:53098.service - OpenSSH per-connection server daemon (10.0.0.1:53098). Jan 30 14:07:16.212775 systemd-logind[1488]: Removed session 15. Jan 30 14:07:16.247535 sshd[3773]: Accepted publickey for core from 10.0.0.1 port 53098 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:16.249007 sshd-session[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:16.253748 systemd-logind[1488]: New session 16 of user core. Jan 30 14:07:16.263636 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 14:07:16.369273 sshd[3775]: Connection closed by 10.0.0.1 port 53098 Jan 30 14:07:16.369658 sshd-session[3773]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:16.373228 systemd[1]: sshd@16-10.0.0.12:22-10.0.0.1:53098.service: Deactivated successfully. Jan 30 14:07:16.375447 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 14:07:16.376102 systemd-logind[1488]: Session 16 logged out. Waiting for processes to exit. 
Jan 30 14:07:16.377094 systemd-logind[1488]: Removed session 16. Jan 30 14:07:21.386771 systemd[1]: Started sshd@17-10.0.0.12:22-10.0.0.1:36264.service - OpenSSH per-connection server daemon (10.0.0.1:36264). Jan 30 14:07:21.423937 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 36264 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:21.425180 sshd-session[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:21.428961 systemd-logind[1488]: New session 17 of user core. Jan 30 14:07:21.445628 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 14:07:21.549639 sshd[3811]: Connection closed by 10.0.0.1 port 36264 Jan 30 14:07:21.549997 sshd-session[3809]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:21.553579 systemd[1]: sshd@17-10.0.0.12:22-10.0.0.1:36264.service: Deactivated successfully. Jan 30 14:07:21.555719 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 14:07:21.556410 systemd-logind[1488]: Session 17 logged out. Waiting for processes to exit. Jan 30 14:07:21.557458 systemd-logind[1488]: Removed session 17. Jan 30 14:07:26.563103 systemd[1]: Started sshd@18-10.0.0.12:22-10.0.0.1:36280.service - OpenSSH per-connection server daemon (10.0.0.1:36280). Jan 30 14:07:26.607137 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 36280 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:26.608717 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:26.613083 systemd-logind[1488]: New session 18 of user core. Jan 30 14:07:26.623650 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 14:07:26.735171 sshd[3851]: Connection closed by 10.0.0.1 port 36280 Jan 30 14:07:26.735571 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:26.739900 systemd[1]: sshd@18-10.0.0.12:22-10.0.0.1:36280.service: Deactivated successfully. Jan 30 14:07:26.742032 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 14:07:26.742730 systemd-logind[1488]: Session 18 logged out. Waiting for processes to exit. Jan 30 14:07:26.743600 systemd-logind[1488]: Removed session 18. Jan 30 14:07:31.746330 systemd[1]: Started sshd@19-10.0.0.12:22-10.0.0.1:41328.service - OpenSSH per-connection server daemon (10.0.0.1:41328). Jan 30 14:07:31.786157 sshd[3885]: Accepted publickey for core from 10.0.0.1 port 41328 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:31.787614 sshd-session[3885]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:31.791043 systemd-logind[1488]: New session 19 of user core. Jan 30 14:07:31.805680 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 14:07:31.906629 sshd[3887]: Connection closed by 10.0.0.1 port 41328 Jan 30 14:07:31.906947 sshd-session[3885]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:31.909885 systemd[1]: sshd@19-10.0.0.12:22-10.0.0.1:41328.service: Deactivated successfully. Jan 30 14:07:31.913231 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 14:07:31.913926 systemd-logind[1488]: Session 19 logged out. Waiting for processes to exit. Jan 30 14:07:31.914888 systemd-logind[1488]: Removed session 19. Jan 30 14:07:36.917587 systemd[1]: Started sshd@20-10.0.0.12:22-10.0.0.1:41338.service - OpenSSH per-connection server daemon (10.0.0.1:41338). 
Jan 30 14:07:36.954200 sshd[3922]: Accepted publickey for core from 10.0.0.1 port 41338 ssh2: RSA SHA256:/icux5ThNTV6gDrxjQBuUfyGEAba+h/9jtfnl9/p+fc Jan 30 14:07:36.955568 sshd-session[3922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 14:07:36.959159 systemd-logind[1488]: New session 20 of user core. Jan 30 14:07:36.969631 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 14:07:37.068287 sshd[3924]: Connection closed by 10.0.0.1 port 41338 Jan 30 14:07:37.068652 sshd-session[3922]: pam_unix(sshd:session): session closed for user core Jan 30 14:07:37.072087 systemd[1]: sshd@20-10.0.0.12:22-10.0.0.1:41338.service: Deactivated successfully. Jan 30 14:07:37.074224 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 14:07:37.074936 systemd-logind[1488]: Session 20 logged out. Waiting for processes to exit. Jan 30 14:07:37.075859 systemd-logind[1488]: Removed session 20.