Jan 17 12:13:26.960477 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025 Jan 17 12:13:26.960498 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:26.960509 kernel: BIOS-provided physical RAM map: Jan 17 12:13:26.960515 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 12:13:26.960521 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jan 17 12:13:26.960527 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jan 17 12:13:26.960534 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jan 17 12:13:26.960540 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jan 17 12:13:26.960546 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jan 17 12:13:26.960552 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jan 17 12:13:26.960560 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jan 17 12:13:26.960566 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jan 17 12:13:26.960572 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jan 17 12:13:26.960578 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jan 17 12:13:26.960592 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jan 17 12:13:26.960599 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jan 17 12:13:26.960608 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jan 17 12:13:26.960614 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jan 17 12:13:26.960621 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jan 17 12:13:26.960627 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 17 12:13:26.960633 kernel: NX (Execute Disable) protection: active Jan 17 12:13:26.960640 kernel: APIC: Static calls initialized Jan 17 12:13:26.960646 kernel: efi: EFI v2.7 by EDK II Jan 17 12:13:26.960653 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jan 17 12:13:26.960659 kernel: SMBIOS 2.8 present. 
Jan 17 12:13:26.960665 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jan 17 12:13:26.960671 kernel: Hypervisor detected: KVM Jan 17 12:13:26.960680 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 17 12:13:26.960686 kernel: kvm-clock: using sched offset of 4035646751 cycles Jan 17 12:13:26.960693 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 17 12:13:26.960700 kernel: tsc: Detected 2794.748 MHz processor Jan 17 12:13:26.960707 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 12:13:26.960714 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 12:13:26.960720 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jan 17 12:13:26.960727 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 12:13:26.960733 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 12:13:26.960742 kernel: Using GB pages for direct mapping Jan 17 12:13:26.960749 kernel: Secure boot disabled Jan 17 12:13:26.960755 kernel: ACPI: Early table checksum verification disabled Jan 17 12:13:26.960762 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jan 17 12:13:26.960772 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jan 17 12:13:26.960779 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960798 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960808 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jan 17 12:13:26.960815 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960822 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960829 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960836 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 17 12:13:26.960843 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jan 17 12:13:26.960850 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jan 17 12:13:26.960859 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7] Jan 17 12:13:26.960866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jan 17 12:13:26.960872 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jan 17 12:13:26.960879 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jan 17 12:13:26.960886 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jan 17 12:13:26.960893 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jan 17 12:13:26.960900 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jan 17 12:13:26.960906 kernel: No NUMA configuration found Jan 17 12:13:26.960913 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jan 17 12:13:26.960923 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jan 17 12:13:26.960930 kernel: Zone ranges: Jan 17 12:13:26.960936 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:13:26.960943 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jan 17 12:13:26.960950 kernel: Normal empty Jan 17 12:13:26.960957 kernel: Movable zone start for each node Jan 17 12:13:26.960964 kernel: Early memory node ranges Jan 17 12:13:26.960970 
kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 12:13:26.960977 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jan 17 12:13:26.960984 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jan 17 12:13:26.960993 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jan 17 12:13:26.961000 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jan 17 12:13:26.961007 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jan 17 12:13:26.961014 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jan 17 12:13:26.961021 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:13:26.961027 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 12:13:26.961036 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jan 17 12:13:26.961046 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:13:26.961054 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jan 17 12:13:26.961063 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jan 17 12:13:26.961070 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jan 17 12:13:26.961077 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 17 12:13:26.961084 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 17 12:13:26.961091 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:13:26.961097 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 17 12:13:26.961104 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 17 12:13:26.961111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:13:26.961118 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 17 12:13:26.961126 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 17 12:13:26.961133 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:13:26.961140 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 17 12:13:26.961147 kernel: TSC deadline timer available Jan 17 12:13:26.961154 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 17 12:13:26.961160 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 17 12:13:26.961167 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 17 12:13:26.961174 kernel: kvm-guest: setup PV sched yield Jan 17 12:13:26.961181 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jan 17 12:13:26.961187 kernel: Booting paravirtualized kernel on KVM Jan 17 12:13:26.961196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:13:26.961203 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 17 12:13:26.961210 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 17 12:13:26.961217 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 17 12:13:26.961224 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 17 12:13:26.961230 kernel: kvm-guest: PV spinlocks enabled Jan 17 12:13:26.961237 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:13:26.961245 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 
12:13:26.961255 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:13:26.961262 kernel: random: crng init done Jan 17 12:13:26.961269 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:13:26.961276 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 17 12:13:26.961283 kernel: Fallback order for Node 0: 0 Jan 17 12:13:26.961290 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jan 17 12:13:26.961296 kernel: Policy zone: DMA32 Jan 17 12:13:26.961303 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:13:26.961310 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 171124K reserved, 0K cma-reserved) Jan 17 12:13:26.961320 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 17 12:13:26.961326 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:13:26.961333 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:13:26.961340 kernel: Dynamic Preempt: voluntary Jan 17 12:13:26.961354 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:13:26.961364 kernel: rcu: RCU event tracing is enabled. Jan 17 12:13:26.961372 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 17 12:13:26.961379 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:13:26.961386 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:13:26.961393 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:13:26.961401 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:13:26.961408 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 17 12:13:26.961417 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 17 12:13:26.961424 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:13:26.961432 kernel: Console: colour dummy device 80x25 Jan 17 12:13:26.961439 kernel: printk: console [ttyS0] enabled Jan 17 12:13:26.961446 kernel: ACPI: Core revision 20230628 Jan 17 12:13:26.961456 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 17 12:13:26.961463 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:13:26.961470 kernel: x2apic enabled Jan 17 12:13:26.961477 kernel: APIC: Switched APIC routing to: physical x2apic Jan 17 12:13:26.961485 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 17 12:13:26.961495 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 17 12:13:26.961504 kernel: kvm-guest: setup PV IPIs Jan 17 12:13:26.961511 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 17 12:13:26.961518 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 17 12:13:26.961528 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 17 12:13:26.961535 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 17 12:13:26.961542 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 17 12:13:26.961549 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 17 12:13:26.961556 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:13:26.961563 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:13:26.961571 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:13:26.961578 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:13:26.961591 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 17 12:13:26.961600 kernel: RETBleed: Mitigation: untrained return thunk Jan 17 12:13:26.961608 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 17 12:13:26.961615 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 17 12:13:26.961623 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 17 12:13:26.961631 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 17 12:13:26.961638 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 17 12:13:26.961645 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:13:26.961652 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:13:26.961662 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:13:26.961669 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:13:26.961676 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 17 12:13:26.961683 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:13:26.961690 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:13:26.961697 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:13:26.961704 kernel: landlock: Up and running. Jan 17 12:13:26.961711 kernel: SELinux: Initializing. Jan 17 12:13:26.961718 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:13:26.961728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:13:26.961735 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 17 12:13:26.961742 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:13:26.961749 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:13:26.961757 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 17 12:13:26.961764 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 17 12:13:26.961771 kernel: ... version: 0 Jan 17 12:13:26.961778 kernel: ... bit width: 48 Jan 17 12:13:26.962204 kernel: ... generic registers: 6 Jan 17 12:13:26.962217 kernel: ... value mask: 0000ffffffffffff Jan 17 12:13:26.962225 kernel: ... max period: 00007fffffffffff Jan 17 12:13:26.962232 kernel: ... fixed-purpose events: 0 Jan 17 12:13:26.962239 kernel: ... 
event mask: 000000000000003f Jan 17 12:13:26.962246 kernel: signal: max sigframe size: 1776 Jan 17 12:13:26.962253 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:13:26.962261 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:13:26.962268 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:13:26.962276 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:13:26.962285 kernel: .... node #0, CPUs: #1 #2 #3 Jan 17 12:13:26.962293 kernel: smp: Brought up 1 node, 4 CPUs Jan 17 12:13:26.962303 kernel: smpboot: Max logical packages: 1 Jan 17 12:13:26.962313 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 17 12:13:26.962320 kernel: devtmpfs: initialized Jan 17 12:13:26.962327 kernel: x86/mm: Memory block size: 128MB Jan 17 12:13:26.962335 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jan 17 12:13:26.962342 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jan 17 12:13:26.962349 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jan 17 12:13:26.962359 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jan 17 12:13:26.962366 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jan 17 12:13:26.962373 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:13:26.962381 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 17 12:13:26.962388 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:13:26.962395 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:13:26.962402 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:13:26.962409 kernel: audit: type=2000 audit(1737116006.523:1): state=initialized audit_enabled=0 res=1 Jan 17 12:13:26.962416 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:13:26.962427 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:13:26.962434 kernel: cpuidle: using governor menu Jan 17 12:13:26.962441 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:13:26.962448 kernel: dca service started, version 1.12.1 Jan 17 12:13:26.962455 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 17 12:13:26.962462 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 17 12:13:26.962470 kernel: PCI: Using configuration type 1 for base access Jan 17 12:13:26.962477 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 17 12:13:26.962484 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:13:26.962493 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:13:26.962500 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:13:26.962507 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:13:26.962514 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:13:26.962521 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:13:26.962529 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:13:26.962536 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:13:26.962543 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:13:26.962550 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:13:26.962559 kernel: ACPI: Interpreter enabled Jan 17 12:13:26.962566 kernel: ACPI: PM: (supports S0 S3 S5) Jan 17 12:13:26.962573 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:13:26.962589 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:13:26.962597 kernel: PCI: Using E820 reservations for host bridge windows Jan 17 12:13:26.962604 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 17 12:13:26.962611 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 17 12:13:26.962800 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 17 12:13:26.962939 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 17 12:13:26.963067 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 17 12:13:26.963077 kernel: PCI host bridge to bus 0000:00 Jan 17 12:13:26.963203 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 17 12:13:26.963316 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 17 12:13:26.963437 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 17 12:13:26.963547 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 17 12:13:26.963671 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:26.963815 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jan 17 12:13:26.963929 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 17 12:13:26.964067 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 17 12:13:26.964210 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 17 12:13:26.964334 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jan 17 12:13:26.964458 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jan 17 12:13:26.964592 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jan 17 12:13:26.964714 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jan 17 12:13:26.964850 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 17 12:13:26.964981 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 17 12:13:26.965109 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jan 17 12:13:26.965230 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jan 17 12:13:26.965354 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jan 17 12:13:26.965488 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 17 12:13:26.965619 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jan 17 
12:13:26.965740 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jan 17 12:13:26.965899 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jan 17 12:13:26.966031 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 17 12:13:26.966159 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jan 17 12:13:26.966285 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jan 17 12:13:26.966414 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jan 17 12:13:26.966535 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jan 17 12:13:26.966680 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 17 12:13:26.966827 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 17 12:13:26.966970 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 17 12:13:26.967101 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jan 17 12:13:26.967222 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jan 17 12:13:26.967353 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 17 12:13:26.967480 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jan 17 12:13:26.967490 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 17 12:13:26.967498 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 17 12:13:26.967505 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 17 12:13:26.967513 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 17 12:13:26.967523 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 17 12:13:26.967531 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 17 12:13:26.967538 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 17 12:13:26.967545 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 17 12:13:26.967553 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 17 12:13:26.967560 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 17 12:13:26.967567 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 17 12:13:26.967575 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 17 12:13:26.967592 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 17 12:13:26.967602 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 17 12:13:26.967610 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 17 12:13:26.967617 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 17 12:13:26.967625 kernel: iommu: Default domain type: Translated Jan 17 12:13:26.967632 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:13:26.967639 kernel: efivars: Registered efivars operations Jan 17 12:13:26.967647 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:13:26.967654 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 17 12:13:26.967662 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jan 17 12:13:26.967672 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jan 17 12:13:26.967679 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jan 17 12:13:26.967686 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jan 17 12:13:26.967886 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 17 12:13:26.968006 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 17 12:13:26.968123 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 17 
12:13:26.968133 kernel: vgaarb: loaded Jan 17 12:13:26.968140 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 17 12:13:26.968148 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 17 12:13:26.968159 kernel: clocksource: Switched to clocksource kvm-clock Jan 17 12:13:26.968167 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:13:26.968178 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:13:26.968188 kernel: pnp: PnP ACPI init Jan 17 12:13:26.968333 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 17 12:13:26.968345 kernel: pnp: PnP ACPI: found 6 devices Jan 17 12:13:26.968352 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:13:26.968359 kernel: NET: Registered PF_INET protocol family Jan 17 12:13:26.968371 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:13:26.968378 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:13:26.968386 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:13:26.968393 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:13:26.968400 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:13:26.968408 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:13:26.968416 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:13:26.968423 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:13:26.968431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:13:26.968440 kernel: NET: Registered PF_XDP protocol family Jan 17 12:13:26.968561 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jan 17 12:13:26.968696 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jan 17 12:13:26.968829 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 17 12:13:26.968941 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 17 12:13:26.969049 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 17 12:13:26.969162 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 17 12:13:26.969278 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 17 12:13:26.969387 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jan 17 12:13:26.969397 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:13:26.969405 kernel: Initialise system trusted keyrings Jan 17 12:13:26.969413 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:13:26.969420 kernel: Key type asymmetric registered Jan 17 12:13:26.969428 kernel: Asymmetric key parser 'x509' registered Jan 17 12:13:26.969435 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:13:26.969443 kernel: io scheduler mq-deadline registered Jan 17 12:13:26.969453 kernel: io scheduler kyber registered Jan 17 12:13:26.969461 kernel: io scheduler bfq registered Jan 17 12:13:26.969471 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:13:26.969482 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 17 12:13:26.969490 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 17 12:13:26.969498 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 17 12:13:26.969506 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled 
Jan 17 12:13:26.969513 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:13:26.969521 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 17 12:13:26.969531 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 17 12:13:26.969538 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 17 12:13:26.969675 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 17 12:13:26.969687 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 17 12:13:26.969809 kernel: rtc_cmos 00:04: registered as rtc0 Jan 17 12:13:26.969930 kernel: rtc_cmos 00:04: setting system clock to 2025-01-17T12:13:26 UTC (1737116006) Jan 17 12:13:26.970043 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 17 12:13:26.970053 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 17 12:13:26.970064 kernel: efifb: probing for efifb Jan 17 12:13:26.970072 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jan 17 12:13:26.970079 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jan 17 12:13:26.970087 kernel: efifb: scrolling: redraw Jan 17 12:13:26.970094 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jan 17 12:13:26.970102 kernel: Console: switching to colour frame buffer device 100x37 Jan 17 12:13:26.970128 kernel: fb0: EFI VGA frame buffer device Jan 17 12:13:26.970138 kernel: pstore: Using crash dump compression: deflate Jan 17 12:13:26.970146 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:13:26.970156 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:13:26.970163 kernel: Segment Routing with IPv6 Jan 17 12:13:26.970171 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:13:26.970178 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:13:26.970186 kernel: Key type dns_resolver registered Jan 17 12:13:26.970193 kernel: IPI shorthand broadcast: enabled Jan 17 12:13:26.970201 kernel: sched_clock: Marking stable (641002882, 116179976)->(777049065, -19866207) Jan 17 12:13:26.970209 kernel: registered taskstats version 1 Jan 17 12:13:26.970217 kernel: Loading compiled-in X.509 certificates Jan 17 12:13:26.970227 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:13:26.970235 kernel: Key type .fscrypt registered Jan 17 12:13:26.970242 kernel: Key type fscrypt-provisioning registered Jan 17 12:13:26.970251 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:13:26.970262 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:13:26.970271 kernel: ima: No architecture policies found Jan 17 12:13:26.970279 kernel: clk: Disabling unused clocks Jan 17 12:13:26.970288 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:13:26.970298 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:13:26.970313 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:13:26.970322 kernel: Run /init as init process Jan 17 12:13:26.970331 kernel: with arguments: Jan 17 12:13:26.970341 kernel: /init Jan 17 12:13:26.970350 kernel: with environment: Jan 17 12:13:26.970359 kernel: HOME=/ Jan 17 12:13:26.970368 kernel: TERM=linux Jan 17 12:13:26.970378 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:13:26.970390 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:13:26.970406 systemd[1]: Detected virtualization kvm. Jan 17 12:13:26.970416 systemd[1]: Detected architecture x86-64. Jan 17 12:13:26.970429 systemd[1]: Running in initrd. Jan 17 12:13:26.970445 systemd[1]: No hostname configured, using default hostname. Jan 17 12:13:26.970461 systemd[1]: Hostname set to <localhost>. Jan 17 12:13:26.970475 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:13:26.970489 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:13:26.970501 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:26.970511 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:13:26.970521 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:13:26.970530 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:13:26.970538 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:13:26.970549 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:13:26.970559 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:13:26.970568 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:13:26.970577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:26.970598 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:13:26.970609 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:13:26.970619 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:13:26.970633 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:13:26.970644 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:13:26.970654 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:13:26.970662 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:13:26.970671 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:13:26.970679 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:13:26.970688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:26.970696 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:13:26.970707 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:13:26.970715 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:13:26.970723 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:13:26.970731 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:13:26.970740 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:13:26.970748 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:13:26.970756 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:13:26.970764 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:13:26.970773 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:26.970810 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:13:26.970822 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:26.970833 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:13:26.970844 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:13:26.970880 systemd-journald[191]: Collecting audit messages is disabled. Jan 17 12:13:26.970913 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:13:26.970924 systemd-journald[191]: Journal started Jan 17 12:13:26.970951 systemd-journald[191]: Runtime Journal (/run/log/journal/3e17cf631c324f0c8bbf82b93c167eb2) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:13:26.972677 systemd-modules-load[194]: Inserted module 'overlay' Jan 17 12:13:26.980096 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:13:26.982826 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:13:26.983450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:26.987509 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:26.990702 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:13:26.992165 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:13:27.004993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:13:27.013824 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:13:27.013804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:13:27.018587 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 17 12:13:27.051600 kernel: Bridge firewalling registered Jan 17 12:13:27.057110 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:13:27.057908 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:13:27.059897 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 17 12:13:27.071307 dracut-cmdline[222]: dracut-dracut-053 Jan 17 12:13:27.071929 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:27.074900 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:13:27.085927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:13:27.116462 systemd-resolved[247]: Positive Trust Anchors: Jan 17 12:13:27.116484 systemd-resolved[247]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:13:27.116517 systemd-resolved[247]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:13:27.119168 systemd-resolved[247]: Defaulting to hostname 'linux'. Jan 17 12:13:27.120267 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:13:27.128996 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:13:27.181947 kernel: SCSI subsystem initialized Jan 17 12:13:27.191809 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:13:27.201813 kernel: iscsi: registered transport (tcp) Jan 17 12:13:27.223818 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:13:27.223848 kernel: QLogic iSCSI HBA Driver Jan 17 12:13:27.272832 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:13:27.286953 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:13:27.315508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:13:27.315543 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:13:27.315559 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:13:27.359807 kernel: raid6: avx2x4 gen() 30019 MB/s Jan 17 12:13:27.384814 kernel: raid6: avx2x2 gen() 31309 MB/s Jan 17 12:13:27.401926 kernel: raid6: avx2x1 gen() 25763 MB/s Jan 17 12:13:27.401948 kernel: raid6: using algorithm avx2x2 gen() 31309 MB/s Jan 17 12:13:27.436817 kernel: raid6: .... xor() 17575 MB/s, rmw enabled Jan 17 12:13:27.436884 kernel: raid6: using avx2x2 recovery algorithm Jan 17 12:13:27.460824 kernel: xor: automatically using best checksumming function avx Jan 17 12:13:27.617832 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:13:27.630920 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:13:27.643032 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:13:27.658820 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 17 12:13:27.664344 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 17 12:13:27.675965 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:13:27.691600 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jan 17 12:13:27.725925 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:13:27.731920 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:13:27.802300 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:13:27.812977 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:13:27.826810 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:13:27.830572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:13:27.833174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:13:27.835760 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:13:27.846953 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:13:27.851631 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 17 12:13:27.889085 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 17 12:13:27.889235 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:13:27.889253 kernel: libata version 3.00 loaded. Jan 17 12:13:27.889263 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:13:27.889273 kernel: AES CTR mode by8 optimization enabled Jan 17 12:13:27.889282 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 17 12:13:27.889293 kernel: GPT:9289727 != 19775487 Jan 17 12:13:27.889302 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 17 12:13:27.889312 kernel: GPT:9289727 != 19775487 Jan 17 12:13:27.889321 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 17 12:13:27.889331 kernel: ahci 0000:00:1f.2: version 3.0 Jan 17 12:13:27.909347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:13:27.909362 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 17 12:13:27.909373 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 17 12:13:27.909523 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 17 12:13:27.909672 kernel: scsi host0: ahci Jan 17 12:13:27.909870 kernel: scsi host1: ahci Jan 17 12:13:27.910013 kernel: scsi host2: ahci Jan 17 12:13:27.910164 kernel: scsi host3: ahci Jan 17 12:13:27.910309 kernel: scsi host4: ahci Jan 17 12:13:27.910448 kernel: scsi host5: ahci Jan 17 12:13:27.910599 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Jan 17 12:13:27.910611 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Jan 17 12:13:27.910621 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Jan 17 12:13:27.910631 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Jan 17 12:13:27.910644 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Jan 17 12:13:27.910654 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Jan 17 12:13:27.858855 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:13:27.873669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:13:27.873874 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:13:27.875599 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:27.918836 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (462) Jan 17 12:13:27.918856 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (460) Jan 17 12:13:27.876894 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:27.877157 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:27.878611 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:27.890834 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:27.898316 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:27.898460 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:27.912364 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:27.940168 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 17 12:13:27.945229 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 17 12:13:27.945721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:27.953222 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:13:27.957236 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 17 12:13:27.957701 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 17 12:13:27.972919 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:13:27.975478 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:13:27.982351 disk-uuid[558]: Primary Header is updated. Jan 17 12:13:27.982351 disk-uuid[558]: Secondary Entries is updated. Jan 17 12:13:27.982351 disk-uuid[558]: Secondary Header is updated. Jan 17 12:13:27.985898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:13:27.989824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:13:28.002146 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:13:28.219812 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 17 12:13:28.219874 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 17 12:13:28.227838 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 17 12:13:28.227867 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 17 12:13:28.228805 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 17 12:13:28.228830 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 17 12:13:28.229818 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 17 12:13:28.231013 kernel: ata3.00: applying bridge limits Jan 17 12:13:28.231024 kernel: ata3.00: configured for UDMA/100 Jan 17 12:13:28.231809 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 17 12:13:28.275819 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 17 12:13:28.289429 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:13:28.289443 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 17 12:13:28.992809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 17 12:13:28.992876 disk-uuid[559]: The operation has completed successfully. Jan 17 12:13:29.022442 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:13:29.022572 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:13:29.051006 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:13:29.054683 sh[595]: Success Jan 17 12:13:29.067824 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 17 12:13:29.101413 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:13:29.114671 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:13:29.117876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:13:29.131456 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:13:29.131500 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:29.131511 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:13:29.132643 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:13:29.133491 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:13:29.137950 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:13:29.139013 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:13:29.152086 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:13:29.154119 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:13:29.162996 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:29.163037 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:29.163051 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:13:29.165807 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:13:29.175714 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:13:29.178809 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:29.188456 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jan 17 12:13:29.198934 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:13:29.258072 ignition[690]: Ignition 2.19.0 Jan 17 12:13:29.258085 ignition[690]: Stage: fetch-offline Jan 17 12:13:29.258124 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:29.258135 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:29.258230 ignition[690]: parsed url from cmdline: "" Jan 17 12:13:29.258235 ignition[690]: no config URL provided Jan 17 12:13:29.258240 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:13:29.258251 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:13:29.258286 ignition[690]: op(1): [started] loading QEMU firmware config module Jan 17 12:13:29.258293 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 17 12:13:29.266986 ignition[690]: op(1): [finished] loading QEMU firmware config module Jan 17 12:13:29.279148 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:13:29.283737 ignition[690]: parsing config with SHA512: ff4a46394a903541afc2f51c92b89efd2dabb1a9e64cd72722c7fb02efabb6d70074e10e55710dbc76e4e4aa04e4b299b2b8e1ed4f243d311981e814f22f4957 Jan 17 12:13:29.287485 unknown[690]: fetched base config from "system" Jan 17 12:13:29.287499 unknown[690]: fetched user config from "qemu" Jan 17 12:13:29.287894 ignition[690]: fetch-offline: fetch-offline passed Jan 17 12:13:29.287956 ignition[690]: Ignition finished successfully Jan 17 12:13:29.289553 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:13:29.295354 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:13:29.314123 systemd-networkd[785]: lo: Link UP Jan 17 12:13:29.314134 systemd-networkd[785]: lo: Gained carrier Jan 17 12:13:29.315890 systemd-networkd[785]: Enumeration completed Jan 17 12:13:29.316285 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:13:29.316289 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:13:29.317260 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:13:29.317636 systemd-networkd[785]: eth0: Link UP Jan 17 12:13:29.317640 systemd-networkd[785]: eth0: Gained carrier Jan 17 12:13:29.317647 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:13:29.326844 systemd[1]: Reached target network.target - Network. Jan 17 12:13:29.328793 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 17 12:13:29.340988 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:13:29.347850 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:13:29.355933 ignition[788]: Ignition 2.19.0 Jan 17 12:13:29.355945 ignition[788]: Stage: kargs Jan 17 12:13:29.356136 ignition[788]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:29.356147 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:29.356992 ignition[788]: kargs: kargs passed Jan 17 12:13:29.360759 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Jan 17 12:13:29.357040 ignition[788]: Ignition finished successfully Jan 17 12:13:29.372941 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:13:29.385357 ignition[797]: Ignition 2.19.0 Jan 17 12:13:29.385368 ignition[797]: Stage: disks Jan 17 12:13:29.385546 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:29.385556 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:29.387958 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:13:29.386365 ignition[797]: disks: disks passed Jan 17 12:13:29.390305 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:13:29.386409 ignition[797]: Ignition finished successfully Jan 17 12:13:29.391974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:13:29.394484 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:13:29.395760 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:13:29.397666 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:13:29.410058 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 17 12:13:29.424658 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 17 12:13:29.431584 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:13:29.443918 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:13:29.532817 kernel: EXT4-fs (vda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:13:29.533153 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:13:29.534119 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:13:29.555941 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:13:29.558011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:13:29.559152 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 17 12:13:29.559195 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:13:29.572876 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (816) Jan 17 12:13:29.572913 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:29.572928 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:29.572942 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:13:29.572956 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:13:29.559217 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:13:29.565969 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:13:29.574010 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:13:29.577129 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 17 12:13:29.612066 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:13:29.618147 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:13:29.623839 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:13:29.629234 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:13:29.721264 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:13:29.728954 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:13:29.730747 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 12:13:29.738836 kernel: BTRFS info (device vda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:29.756162 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:13:29.761908 ignition[928]: INFO : Ignition 2.19.0 Jan 17 12:13:29.761908 ignition[928]: INFO : Stage: mount Jan 17 12:13:29.763740 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:29.763740 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:29.767109 ignition[928]: INFO : mount: mount passed Jan 17 12:13:29.767971 ignition[928]: INFO : Ignition finished successfully Jan 17 12:13:29.771251 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:13:29.788986 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:13:30.130814 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:13:30.144025 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:13:30.151490 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (942) Jan 17 12:13:30.151526 kernel: BTRFS info (device vda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:13:30.151537 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:13:30.153065 kernel: BTRFS info (device vda6): using free space tree Jan 17 12:13:30.155817 kernel: BTRFS info (device vda6): auto enabling async discard Jan 17 12:13:30.157303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 17 12:13:30.178678 ignition[959]: INFO : Ignition 2.19.0 Jan 17 12:13:30.178678 ignition[959]: INFO : Stage: files Jan 17 12:13:30.180748 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:30.180748 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:30.180748 ignition[959]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:13:30.184320 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:13:30.184320 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:13:30.187019 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:13:30.188502 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:13:30.190373 unknown[959]: wrote ssh authorized keys file for user: core Jan 17 12:13:30.191851 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:13:30.194354 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:13:30.196380 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:13:30.196380 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:13:30.200440 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:13:30.242741 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:13:30.318386 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:13:30.318386 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:13:30.322866 ignition[959]: INFO 
: files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:30.322866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:13:30.556994 systemd-networkd[785]: eth0: Gained IPv6LL Jan 17 12:13:30.925810 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:13:31.469115 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:13:31.469115 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(10): [started] processing unit "coreos-metadata.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Jan 17 12:13:31.473697 ignition[959]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service" Jan 17 12:13:31.500247 ignition[959]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:13:31.505224 ignition[959]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 17 12:13:31.507188 ignition[959]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service" Jan 17 12:13:31.507188 ignition[959]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service" Jan 
17 12:13:31.510054 ignition[959]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:13:31.511781 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:13:31.513678 ignition[959]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:13:31.515435 ignition[959]: INFO : files: files passed Jan 17 12:13:31.516254 ignition[959]: INFO : Ignition finished successfully Jan 17 12:13:31.518682 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:13:31.532964 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:13:31.534656 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:13:31.537399 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:13:31.537528 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:13:31.544170 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Jan 17 12:13:31.546927 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:31.546927 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:31.550031 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:13:31.553733 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:13:31.556495 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:13:31.563970 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:13:31.586998 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:13:31.587137 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:13:31.589857 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:13:31.592427 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:13:31.593156 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:13:31.593918 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:13:31.611985 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:13:31.626103 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:13:31.635146 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:13:31.637685 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:13:31.640263 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:13:31.642299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:13:31.642475 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:13:31.645173 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:13:31.646843 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:13:31.648705 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
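The Ignition files stage logged above (ops 1 through 15) creates the core user with SSH keys, downloads helm and a kubernetes sysext image, writes several files under /home/core and /etc, links /etc/extensions/kubernetes.raw to the downloaded image, and writes/presets a few systemd units. A sketch of the kind of config fragment that would produce those operations, reconstructed from the log; the paths and the download URL come from the log, while the spec version, file mode, SSH key, and unit contents are illustrative placeholders:

    {
      "ignition": { "version": "3.3.0" },
      "passwd": {
        "users": [ { "name": "core", "sshAuthorizedKeys": [ "ssh-ed25519 AAAA... (placeholder)" ] } ]
      },
      "storage": {
        "files": [
          { "path": "/etc/flatcar-cgroupv1", "mode": 420 },
          { "path": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            "contents": { "source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw" } }
        ],
        "links": [
          { "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" }
        ]
      },
      "systemd": {
        "units": [
          { "name": "coreos-metadata.service", "enabled": false },
          { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
        ]
      }
    }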
Jan 17 12:13:31.650675 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:13:31.652869 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:13:31.655066 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:13:31.657129 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:13:31.659353 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:13:31.661567 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:13:31.663574 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:13:31.665296 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:13:31.665469 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:13:31.667639 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:13:31.669140 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:31.671288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:13:31.671410 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:31.673812 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:13:31.673977 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:13:31.676345 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:13:31.676500 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:13:31.678258 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:13:31.680687 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:13:31.683877 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:13:31.686132 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:13:31.687880 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:13:31.689885 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:13:31.690005 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:13:31.692353 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:13:31.692474 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:13:31.694202 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:13:31.694347 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:13:31.696285 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:13:31.696438 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:13:31.709128 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:13:31.711118 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:13:31.711352 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:31.715142 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:13:31.717120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:13:31.717383 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:13:31.719984 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:13:31.720267 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Jan 17 12:13:31.724934 ignition[1013]: INFO : Ignition 2.19.0 Jan 17 12:13:31.724934 ignition[1013]: INFO : Stage: umount Jan 17 12:13:31.726916 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:13:31.726916 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 17 12:13:31.726916 ignition[1013]: INFO : umount: umount passed Jan 17 12:13:31.726916 ignition[1013]: INFO : Ignition finished successfully Jan 17 12:13:31.727056 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:13:31.727206 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:13:31.730124 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:13:31.730251 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:13:31.734566 systemd[1]: Stopped target network.target - Network. Jan 17 12:13:31.736532 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:13:31.736598 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:13:31.739186 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:13:31.739246 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:13:31.741560 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:13:31.741616 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:13:31.743980 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:13:31.744034 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:13:31.746652 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:13:31.748974 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:13:31.752116 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:13:31.755865 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 17 12:13:31.759558 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:13:31.759724 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:13:31.762598 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:13:31.762739 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:13:31.766715 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:13:31.766773 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:31.782036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:13:31.784334 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:13:31.784431 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:13:31.787177 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:13:31.788722 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:31.792020 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:13:31.793161 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:13:31.795614 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:13:31.796773 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:13:31.799708 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:13:31.812962 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 17 12:13:31.814241 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:13:31.817557 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:13:31.818710 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:13:31.821742 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:13:31.822985 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:13:31.825416 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:13:31.825482 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:13:31.828876 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:13:31.829991 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:13:31.832485 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:13:31.833546 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:13:31.835798 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:13:31.836765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:13:31.855086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:13:31.856433 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:13:31.856516 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:13:31.856821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:31.856868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:31.863465 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:13:31.863597 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:13:31.959824 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:13:31.959972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:13:31.961421 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:13:31.963403 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:13:31.963491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:13:31.970080 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:13:31.978433 systemd[1]: Switching root. Jan 17 12:13:32.013655 systemd-journald[191]: Journal stopped Jan 17 12:13:33.445709 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Jan 17 12:13:33.445798 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:13:33.445823 kernel: SELinux: policy capability open_perms=1 Jan 17 12:13:33.445838 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:13:33.445854 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:13:33.445873 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:13:33.445888 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:13:33.445906 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:13:33.445921 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:13:33.445937 kernel: audit: type=1403 audit(1737116012.563:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:13:33.445953 systemd[1]: Successfully loaded SELinux policy in 43.651ms. 
Jan 17 12:13:33.445983 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.647ms. Jan 17 12:13:33.446001 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:13:33.446017 systemd[1]: Detected virtualization kvm. Jan 17 12:13:33.446036 systemd[1]: Detected architecture x86-64. Jan 17 12:13:33.446053 systemd[1]: Detected first boot. Jan 17 12:13:33.446069 systemd[1]: Initializing machine ID from VM UUID. Jan 17 12:13:33.446085 zram_generator::config[1074]: No configuration found. Jan 17 12:13:33.446102 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:13:33.446118 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:13:33.446134 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 17 12:13:33.446151 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:13:33.446171 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:13:33.446187 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:13:33.446203 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:13:33.446219 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:13:33.446235 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:13:33.446252 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:13:33.446270 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:13:33.446291 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:13:33.446308 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:13:33.446328 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:13:33.446344 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:13:33.446360 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:13:33.446377 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:13:33.446393 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:13:33.446421 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:13:33.446438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:13:33.446454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:13:33.446480 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:13:33.446502 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:13:33.446518 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:13:33.446534 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:13:33.446551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Jan 17 12:13:33.446577 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:13:33.446593 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:13:33.446610 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:13:33.446625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:13:33.446645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:13:33.446662 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:13:33.446688 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:13:33.446704 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:13:33.446720 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:13:33.446736 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:33.446752 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:13:33.446768 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:13:33.446817 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:13:33.446840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:13:33.446856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:13:33.446872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:13:33.446888 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:13:33.446904 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:13:33.446920 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:13:33.446953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:13:33.446969 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:13:33.446989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:13:33.447005 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:13:33.447021 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:13:33.447037 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:13:33.447053 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:13:33.447091 systemd-journald[1152]: Collecting audit messages is disabled. Jan 17 12:13:33.447119 kernel: ACPI: bus type drm_connector registered Jan 17 12:13:33.447137 kernel: fuse: init (API version 7.39) Jan 17 12:13:33.447151 kernel: loop: module loaded Jan 17 12:13:33.447167 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:13:33.447183 systemd-journald[1152]: Journal started Jan 17 12:13:33.447212 systemd-journald[1152]: Runtime Journal (/run/log/journal/3e17cf631c324f0c8bbf82b93c167eb2) is 6.0M, max 48.3M, 42.2M free. Jan 17 12:13:33.454807 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 17 12:13:33.468908 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:13:33.472057 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:13:33.474846 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:33.477806 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:13:33.479663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:13:33.480809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:13:33.481974 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:13:33.483025 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:13:33.484259 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:13:33.485532 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:13:33.494937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:13:33.496599 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:13:33.496876 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:13:33.498448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:13:33.498707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:13:33.501351 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:13:33.501612 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:13:33.516030 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:13:33.516256 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:13:33.517905 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:13:33.518164 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:13:33.519756 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:13:33.520046 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:13:33.521712 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:13:33.523235 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:13:33.529516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:13:33.541196 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:13:33.550902 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:13:33.553375 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:13:33.554645 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:13:33.558254 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:13:33.567967 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:13:33.569194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:13:33.570716 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jan 17 12:13:33.572047 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:13:33.573914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:13:33.577048 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:13:33.579578 systemd-journald[1152]: Time spent on flushing to /var/log/journal/3e17cf631c324f0c8bbf82b93c167eb2 is 15.316ms for 981 entries. Jan 17 12:13:33.579578 systemd-journald[1152]: System Journal (/var/log/journal/3e17cf631c324f0c8bbf82b93c167eb2) is 8.0M, max 195.6M, 187.6M free. Jan 17 12:13:33.774619 systemd-journald[1152]: Received client request to flush runtime journal. Jan 17 12:13:33.580601 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:13:33.581364 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:13:33.588350 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:13:33.597957 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:13:33.609064 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:13:33.622385 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:13:33.625046 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 17 12:13:33.625063 systemd-tmpfiles[1203]: ACLs are not supported, ignoring. Jan 17 12:13:33.630632 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:13:33.694578 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:13:33.705125 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:13:33.712088 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:13:33.723706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:13:33.764022 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:13:33.772933 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:13:33.776852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:13:33.790340 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 17 12:13:33.790360 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 17 12:13:33.795532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:13:34.208321 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:13:34.226098 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:13:34.250492 systemd-udevd[1239]: Using default interface naming scheme 'v255'. Jan 17 12:13:34.266061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:13:34.279209 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:13:34.290957 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:13:34.295427 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:13:34.339315 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 17 12:13:34.347857 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1243) Jan 17 12:13:34.386754 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 17 12:13:34.398808 kernel: ACPI: button: Power Button [PWRF] Jan 17 12:13:34.407052 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 17 12:13:34.414957 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 17 12:13:34.414953 systemd-networkd[1246]: lo: Link UP Jan 17 12:13:34.414973 systemd-networkd[1246]: lo: Gained carrier Jan 17 12:13:34.423100 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 17 12:13:34.423437 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 17 12:13:34.423645 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 17 12:13:34.423886 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 17 12:13:34.419169 systemd-networkd[1246]: Enumeration completed Jan 17 12:13:34.419287 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:13:34.419675 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:13:34.419681 systemd-networkd[1246]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:13:34.420642 systemd-networkd[1246]: eth0: Link UP Jan 17 12:13:34.420647 systemd-networkd[1246]: eth0: Gained carrier Jan 17 12:13:34.420662 systemd-networkd[1246]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:13:34.429921 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:13:34.437955 systemd-networkd[1246]: eth0: DHCPv4 address 10.0.0.91/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 17 12:13:34.441343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:34.448359 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:13:34.461112 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:13:34.461446 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:34.465391 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:13:34.513833 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:13:34.546030 kernel: kvm_amd: TSC scaling supported Jan 17 12:13:34.546103 kernel: kvm_amd: Nested Virtualization enabled Jan 17 12:13:34.546123 kernel: kvm_amd: Nested Paging enabled Jan 17 12:13:34.546138 kernel: kvm_amd: LBR virtualization supported Jan 17 12:13:34.547062 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 17 12:13:34.547085 kernel: kvm_amd: Virtual GIF supported Jan 17 12:13:34.563900 kernel: EDAC MC: Ver: 3.0.0 Jan 17 12:13:34.593202 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:13:34.615967 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:13:34.624239 lvm[1289]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:13:34.654735 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:13:34.656350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
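The match against /usr/lib/systemd/network/zz-default.network above is Flatcar's catch-all DHCP fallback; the "potentially unpredictable interface name" warning restates that the match keyed on the kernel-assigned interface name (eth0), which is not guaranteed to stay stable across boots or hardware changes. The shipped unit's exact contents are not visible in this log, but a catch-all DHCP network unit looks roughly like:

    [Match]
    Name=*

    [Network]
    DHCP=yes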
Jan 17 12:13:34.664927 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:13:34.669274 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:13:34.702897 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:13:34.704506 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:13:34.705831 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:13:34.705859 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:13:34.706948 systemd[1]: Reached target machines.target - Containers. Jan 17 12:13:34.708969 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:13:34.723913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:13:34.726536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:13:34.727733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:13:34.728894 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:13:34.731359 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:13:34.733839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:13:34.735838 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:13:34.751836 kernel: loop0: detected capacity change from 0 to 142488 Jan 17 12:13:34.755449 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:13:34.762646 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:13:34.763518 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:13:34.777806 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:13:34.798825 kernel: loop1: detected capacity change from 0 to 211296 Jan 17 12:13:34.835826 kernel: loop2: detected capacity change from 0 to 140768 Jan 17 12:13:34.876820 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 12:13:34.888820 kernel: loop4: detected capacity change from 0 to 211296 Jan 17 12:13:34.896822 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 12:13:34.903830 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 17 12:13:34.904488 (sd-merge)[1312]: Merged extensions into '/usr'. Jan 17 12:13:34.908431 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:13:34.908446 systemd[1]: Reloading... Jan 17 12:13:34.959808 zram_generator::config[1340]: No configuration found. Jan 17 12:13:34.988391 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:13:35.083901 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:13:35.148674 systemd[1]: Reloading finished in 239 ms. 
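The (sd-merge) lines above show systemd-sysext stacking the containerd-flatcar, docker-flatcar, and kubernetes extension images over /usr via an overlay mount; the kubernetes image is the one Ignition linked into /etc/extensions earlier. Two commands that are handy when inspecting such a setup (shown for illustration, not taken from this log):

    systemd-sysext status    # list known extension images and whether they are merged
    systemd-sysext refresh   # unmerge and re-merge after adding or removing an image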
Jan 17 12:13:35.168896 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:13:35.170577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:13:35.189959 systemd[1]: Starting ensure-sysext.service... Jan 17 12:13:35.192135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:13:35.197453 systemd[1]: Reloading requested from client PID 1384 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:13:35.197472 systemd[1]: Reloading... Jan 17 12:13:35.214771 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:13:35.215291 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:13:35.216286 systemd-tmpfiles[1385]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:13:35.216618 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 17 12:13:35.216703 systemd-tmpfiles[1385]: ACLs are not supported, ignoring. Jan 17 12:13:35.220531 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:13:35.220546 systemd-tmpfiles[1385]: Skipping /boot Jan 17 12:13:35.235506 systemd-tmpfiles[1385]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:13:35.235655 systemd-tmpfiles[1385]: Skipping /boot Jan 17 12:13:35.244936 zram_generator::config[1414]: No configuration found. Jan 17 12:13:35.361098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:13:35.428016 systemd[1]: Reloading finished in 230 ms. Jan 17 12:13:35.447364 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:13:35.468681 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:13:35.471186 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:13:35.473512 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:13:35.477116 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:13:35.483018 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:13:35.489160 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:35.489726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:13:35.496095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:13:35.501886 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:13:35.508913 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:13:35.510304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:13:35.510500 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 12:13:35.513043 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:13:35.517295 augenrules[1481]: No rules Jan 17 12:13:35.518021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:13:35.518288 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:13:35.520309 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:13:35.522115 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:13:35.522324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:13:35.524761 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:13:35.525072 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:13:35.537356 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:35.537746 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:13:35.545033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:13:35.548532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:13:35.553158 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:13:35.554490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:13:35.557095 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:13:35.558402 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:35.560204 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:13:35.562600 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:13:35.564878 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:13:35.565151 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:13:35.567257 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:13:35.567525 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:13:35.569817 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:13:35.570113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:13:35.573639 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:13:35.584685 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:35.585236 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:13:35.592140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:13:35.595094 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:13:35.595757 systemd-resolved[1463]: Positive Trust Anchors: Jan 17 12:13:35.596100 systemd-resolved[1463]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:13:35.596199 systemd-resolved[1463]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:13:35.598092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:13:35.601513 systemd-resolved[1463]: Defaulting to hostname 'linux'. Jan 17 12:13:35.603036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:13:35.604715 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:13:35.605031 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:13:35.605310 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:13:35.606943 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:13:35.609119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:13:35.609402 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:13:35.611213 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:13:35.611429 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:13:35.612952 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:13:35.613155 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:13:35.614838 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:13:35.615071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:13:35.618572 systemd[1]: Finished ensure-sysext.service. Jan 17 12:13:35.623904 systemd[1]: Reached target network.target - Network. Jan 17 12:13:35.624874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:13:35.626086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:13:35.626174 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:13:35.641999 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 17 12:13:35.706072 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 17 12:13:35.707521 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:13:36.370802 systemd-resolved[1463]: Clock change detected. Flushing caches. Jan 17 12:13:36.370817 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 17 12:13:36.370854 systemd-timesyncd[1531]: Initial clock synchronization to Fri 2025-01-17 12:13:36.370734 UTC. 
Jan 17 12:13:36.371689 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:13:36.372953 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:13:36.374244 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:13:36.375592 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:13:36.375620 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:13:36.376587 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:13:36.377802 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:13:36.379020 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:13:36.380303 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:13:36.382032 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:13:36.385274 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:13:36.387638 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:13:36.392707 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:13:36.393831 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:13:36.394826 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:13:36.395932 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:13:36.395968 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:13:36.395989 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:13:36.397242 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:13:36.399664 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:13:36.401766 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:13:36.405376 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:13:36.406722 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:13:36.409693 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:13:36.413717 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:13:36.419443 jq[1537]: false Jan 17 12:13:36.418689 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:13:36.422217 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 17 12:13:36.435511 dbus-daemon[1536]: [system] SELinux support is enabled Jan 17 12:13:36.440847 extend-filesystems[1538]: Found loop3 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found loop4 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found loop5 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found sr0 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda1 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda2 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda3 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found usr Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda4 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda6 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda7 Jan 17 12:13:36.440847 extend-filesystems[1538]: Found vda9 Jan 17 12:13:36.440847 extend-filesystems[1538]: Checking size of /dev/vda9 Jan 17 12:13:36.488980 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 17 12:13:36.489013 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1252) Jan 17 12:13:36.440829 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:13:36.491730 extend-filesystems[1538]: Resized partition /dev/vda9 Jan 17 12:13:36.442541 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:13:36.493961 extend-filesystems[1560]: resize2fs 1.47.1 (20-May-2024) Jan 17 12:13:36.520997 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 17 12:13:36.449647 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:13:36.457489 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:13:36.521357 update_engine[1559]: I20250117 12:13:36.482885 1559 main.cc:92] Flatcar Update Engine starting Jan 17 12:13:36.521357 update_engine[1559]: I20250117 12:13:36.485855 1559 update_check_scheduler.cc:74] Next update check in 5m4s Jan 17 12:13:36.463472 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:13:36.526970 jq[1562]: true Jan 17 12:13:36.527201 extend-filesystems[1560]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 17 12:13:36.527201 extend-filesystems[1560]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 17 12:13:36.527201 extend-filesystems[1560]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 17 12:13:36.477222 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:13:36.540491 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Jan 17 12:13:36.477618 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:13:36.546366 jq[1568]: true Jan 17 12:13:36.477966 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:13:36.478266 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:13:36.547213 tar[1567]: linux-amd64/helm Jan 17 12:13:36.483029 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:13:36.483424 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:13:36.504695 (ntainerd)[1571]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:13:36.529918 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:13:36.530264 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:13:36.533212 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:13:36.538525 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:13:36.538956 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:13:36.541376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:13:36.541394 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:13:36.543478 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:13:36.544461 systemd-logind[1552]: Watching system buttons on /dev/input/event1 (Power Button) Jan 17 12:13:36.544485 systemd-logind[1552]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:13:36.546137 systemd-logind[1552]: New seat seat0. Jan 17 12:13:36.556035 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:13:36.557513 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 12:13:36.575964 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:13:36.574430 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:13:36.581607 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:13:36.594249 locksmithd[1597]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:13:36.724064 containerd[1571]: time="2025-01-17T12:13:36.723963485Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:13:36.748973 containerd[1571]: time="2025-01-17T12:13:36.748930076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.750804 containerd[1571]: time="2025-01-17T12:13:36.750774115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:13:36.750848 containerd[1571]: time="2025-01-17T12:13:36.750804612Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:13:36.750848 containerd[1571]: time="2025-01-17T12:13:36.750821453Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:13:36.751011 containerd[1571]: time="2025-01-17T12:13:36.750994588Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 17 12:13:36.751033 containerd[1571]: time="2025-01-17T12:13:36.751014726Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751103 containerd[1571]: time="2025-01-17T12:13:36.751086400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751127 containerd[1571]: time="2025-01-17T12:13:36.751102781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751414 containerd[1571]: time="2025-01-17T12:13:36.751390601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751443 containerd[1571]: time="2025-01-17T12:13:36.751414456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751443 containerd[1571]: time="2025-01-17T12:13:36.751432970Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751484 containerd[1571]: time="2025-01-17T12:13:36.751447237Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751577 containerd[1571]: time="2025-01-17T12:13:36.751560910Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751827 containerd[1571]: time="2025-01-17T12:13:36.751809386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:13:36.751994 containerd[1571]: time="2025-01-17T12:13:36.751976670Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:13:36.752017 containerd[1571]: time="2025-01-17T12:13:36.751995024Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:13:36.752120 containerd[1571]: time="2025-01-17T12:13:36.752105071Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:13:36.752187 containerd[1571]: time="2025-01-17T12:13:36.752171034Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:13:36.851675 systemd-networkd[1246]: eth0: Gained IPv6LL Jan 17 12:13:36.858941 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:13:36.862361 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:13:36.874921 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 17 12:13:36.877747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:13:36.904908 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 17 12:13:36.935240 tar[1567]: linux-amd64/LICENSE Jan 17 12:13:36.935340 tar[1567]: linux-amd64/README.md Jan 17 12:13:36.941997 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:13:36.943803 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 17 12:13:36.944651 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 17 12:13:36.952322 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952452273Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952514059Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952543234Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952558583Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952574943Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.952737548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:13:36.953268 containerd[1571]: time="2025-01-17T12:13:36.953131327Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:13:36.954236 containerd[1571]: time="2025-01-17T12:13:36.954198869Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:13:36.954278 containerd[1571]: time="2025-01-17T12:13:36.954235648Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:13:36.954278 containerd[1571]: time="2025-01-17T12:13:36.954253130Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:13:36.954325 containerd[1571]: time="2025-01-17T12:13:36.954277536Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954325 containerd[1571]: time="2025-01-17T12:13:36.954292715Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954325 containerd[1571]: time="2025-01-17T12:13:36.954307763Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954325 containerd[1571]: time="2025-01-17T12:13:36.954323182Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954423 containerd[1571]: time="2025-01-17T12:13:36.954339212Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954423 containerd[1571]: time="2025-01-17T12:13:36.954354791Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Jan 17 12:13:36.954423 containerd[1571]: time="2025-01-17T12:13:36.954369308Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954423 containerd[1571]: time="2025-01-17T12:13:36.954387703Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:13:36.954423 containerd[1571]: time="2025-01-17T12:13:36.954411968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954428429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954444249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954457754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954469226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954485506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954498781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954514450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954540710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954555007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954557 containerd[1571]: time="2025-01-17T12:13:36.954566268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954578611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954590373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954609399Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954633975Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954648422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954661446Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954719184Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954735104Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954747317Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954767054Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954776782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954790969Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:13:36.954800 containerd[1571]: time="2025-01-17T12:13:36.954800507Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:13:36.955121 containerd[1571]: time="2025-01-17T12:13:36.955041289Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:13:36.956674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:13:36.959625 containerd[1571]: time="2025-01-17T12:13:36.958841415Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false 
DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:13:36.959625 containerd[1571]: time="2025-01-17T12:13:36.958937435Z" level=info msg="Connect containerd service" Jan 17 12:13:36.959625 containerd[1571]: time="2025-01-17T12:13:36.959017034Z" level=info msg="using legacy CRI server" Jan 17 12:13:36.959625 containerd[1571]: time="2025-01-17T12:13:36.959031341Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:13:36.959625 containerd[1571]: time="2025-01-17T12:13:36.959247356Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:13:36.961636 containerd[1571]: time="2025-01-17T12:13:36.961592945Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:13:36.961797 containerd[1571]: time="2025-01-17T12:13:36.961760920Z" level=info msg="Start subscribing containerd event" Jan 17 12:13:36.961887 containerd[1571]: time="2025-01-17T12:13:36.961869524Z" level=info msg="Start recovering state" Jan 17 12:13:36.962005 containerd[1571]: time="2025-01-17T12:13:36.961979650Z" level=info msg="Start event monitor" Jan 17 12:13:36.962105 containerd[1571]: time="2025-01-17T12:13:36.962057015Z" level=info msg="Start snapshots syncer" Jan 17 12:13:36.962105 containerd[1571]: time="2025-01-17T12:13:36.962086300Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:13:36.962105 containerd[1571]: time="2025-01-17T12:13:36.962095748Z" level=info msg="Start streaming server" Jan 17 12:13:36.962226 containerd[1571]: time="2025-01-17T12:13:36.961991072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:13:36.962283 containerd[1571]: time="2025-01-17T12:13:36.962265737Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:13:36.962343 containerd[1571]: time="2025-01-17T12:13:36.962324427Z" level=info msg="containerd successfully booted in 0.239944s" Jan 17 12:13:36.962557 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:13:36.992887 sshd_keygen[1570]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:13:37.016954 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:13:37.024797 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:13:37.034232 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:13:37.034605 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:13:37.037738 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:13:37.052661 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
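The containerd daemon above reports serving on /run/containerd/containerd.sock and booting in roughly 0.24s. As a rough illustration only (not part of this log), the same socket can be queried with the official containerd Go client; the "k8s.io" namespace is an assumption about what the kubelet will later use, not something stated in the journal.

```go
// Minimal sketch: connect to the containerd socket reported above and print
// the daemon version. Assumes the github.com/containerd/containerd Go client
// and permission to open /run/containerd/containerd.sock.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Socket path taken from the "serving... address=/run/containerd/containerd.sock" line above.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// "k8s.io" is the namespace the kubelet conventionally uses; an assumption here.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	version, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", version.Version, version.Revision)
}
```

Run on the node itself with enough privilege to open the socket; the printed version should match the v1.7.21 / revision 174e0d1785eeda18dc2beba45e1d5a188771636b values logged at containerd startup.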
Jan 17 12:13:37.063850 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:13:37.066330 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:13:37.067665 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:13:37.536761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:13:37.538510 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:13:37.540083 systemd[1]: Startup finished in 6.630s (kernel) + 4.355s (userspace) = 10.986s. Jan 17 12:13:37.543038 (kubelet)[1672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:13:38.072456 kubelet[1672]: E0117 12:13:38.072367 1672 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:13:38.077205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:13:38.077489 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:13:42.345049 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:13:42.358836 systemd[1]: Started sshd@0-10.0.0.91:22-10.0.0.1:36606.service - OpenSSH per-connection server daemon (10.0.0.1:36606). Jan 17 12:13:42.395116 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 36606 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:13:42.397048 sshd[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:42.405469 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:13:42.418874 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:13:42.420971 systemd-logind[1552]: New session 1 of user core. Jan 17 12:13:42.432523 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:13:42.441950 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:13:42.445244 (systemd)[1693]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:13:42.540971 systemd[1693]: Queued start job for default target default.target. Jan 17 12:13:42.541322 systemd[1693]: Created slice app.slice - User Application Slice. Jan 17 12:13:42.541344 systemd[1693]: Reached target paths.target - Paths. Jan 17 12:13:42.541356 systemd[1693]: Reached target timers.target - Timers. Jan 17 12:13:42.555630 systemd[1693]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:13:42.562205 systemd[1693]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:13:42.562309 systemd[1693]: Reached target sockets.target - Sockets. Jan 17 12:13:42.562330 systemd[1693]: Reached target basic.target - Basic System. Jan 17 12:13:42.562384 systemd[1693]: Reached target default.target - Main User Target. Jan 17 12:13:42.562421 systemd[1693]: Startup finished in 110ms. Jan 17 12:13:42.562877 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:13:42.564486 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:13:42.622842 systemd[1]: Started sshd@1-10.0.0.91:22-10.0.0.1:36620.service - OpenSSH per-connection server daemon (10.0.0.1:36620). 
Jan 17 12:13:42.652086 sshd[1705]: Accepted publickey for core from 10.0.0.1 port 36620 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:13:42.653777 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:42.658173 systemd-logind[1552]: New session 2 of user core. Jan 17 12:13:42.664809 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:13:42.718183 sshd[1705]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:42.729774 systemd[1]: Started sshd@2-10.0.0.91:22-10.0.0.1:36636.service - OpenSSH per-connection server daemon (10.0.0.1:36636). Jan 17 12:13:42.730366 systemd[1]: sshd@1-10.0.0.91:22-10.0.0.1:36620.service: Deactivated successfully. Jan 17 12:13:42.732895 systemd-logind[1552]: Session 2 logged out. Waiting for processes to exit. Jan 17 12:13:42.733722 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 12:13:42.734847 systemd-logind[1552]: Removed session 2. Jan 17 12:13:42.756480 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 36636 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:13:42.757935 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:42.761825 systemd-logind[1552]: New session 3 of user core. Jan 17 12:13:42.771773 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:13:42.820053 sshd[1710]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:42.829772 systemd[1]: Started sshd@3-10.0.0.91:22-10.0.0.1:36648.service - OpenSSH per-connection server daemon (10.0.0.1:36648). Jan 17 12:13:42.830386 systemd[1]: sshd@2-10.0.0.91:22-10.0.0.1:36636.service: Deactivated successfully. Jan 17 12:13:42.833145 systemd-logind[1552]: Session 3 logged out. Waiting for processes to exit. Jan 17 12:13:42.834386 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 12:13:42.835317 systemd-logind[1552]: Removed session 3. Jan 17 12:13:42.856250 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 36648 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:13:42.857713 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:42.861267 systemd-logind[1552]: New session 4 of user core. Jan 17 12:13:42.876789 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:13:42.931522 sshd[1718]: pam_unix(sshd:session): session closed for user core Jan 17 12:13:42.942851 systemd[1]: Started sshd@4-10.0.0.91:22-10.0.0.1:36660.service - OpenSSH per-connection server daemon (10.0.0.1:36660). Jan 17 12:13:42.943465 systemd[1]: sshd@3-10.0.0.91:22-10.0.0.1:36648.service: Deactivated successfully. Jan 17 12:13:42.946183 systemd-logind[1552]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:13:42.947493 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:13:42.948371 systemd-logind[1552]: Removed session 4. Jan 17 12:13:42.968961 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 36660 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:13:42.970361 sshd[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:13:42.974260 systemd-logind[1552]: New session 5 of user core. Jan 17 12:13:42.991780 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 12:13:43.049065 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:13:43.049386 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:13:43.630740 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 12:13:43.631049 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:13:44.217085 dockerd[1751]: time="2025-01-17T12:13:44.217020822Z" level=info msg="Starting up" Jan 17 12:13:44.976121 dockerd[1751]: time="2025-01-17T12:13:44.976053610Z" level=info msg="Loading containers: start." Jan 17 12:13:45.154603 kernel: Initializing XFRM netlink socket Jan 17 12:13:45.230768 systemd-networkd[1246]: docker0: Link UP Jan 17 12:13:45.252180 dockerd[1751]: time="2025-01-17T12:13:45.252122828Z" level=info msg="Loading containers: done." Jan 17 12:13:45.274109 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck176317448-merged.mount: Deactivated successfully. Jan 17 12:13:45.275089 dockerd[1751]: time="2025-01-17T12:13:45.275054052Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:13:45.275177 dockerd[1751]: time="2025-01-17T12:13:45.275154951Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:13:45.275298 dockerd[1751]: time="2025-01-17T12:13:45.275275207Z" level=info msg="Daemon has completed initialization" Jan 17 12:13:45.319463 dockerd[1751]: time="2025-01-17T12:13:45.319382664Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:13:45.319663 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:13:46.361441 containerd[1571]: time="2025-01-17T12:13:46.361400049Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:13:47.075423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1455311064.mount: Deactivated successfully. Jan 17 12:13:48.327660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:13:48.338790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:13:48.497378 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
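dockerd above finishes initialization and reports "API listen on /run/docker.sock". A minimal sketch, assuming the official Go SDK (github.com/docker/docker/client), of querying that daemon; the version it returns should line up with the "version=26.1.0" field the daemon logs.

```go
// Sketch only: talk to the local Docker daemon over its default unix socket
// (the same /run/docker.sock the journal reports) and print its version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// FromEnv falls back to unix:///var/run/docker.sock when DOCKER_HOST is unset.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("client: %v", err)
	}
	defer cli.Close()

	v, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatalf("server version: %v", err)
	}
	fmt.Printf("docker %s, API %s\n", v.Version, v.APIVersion)
}
```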
Jan 17 12:13:48.502467 (kubelet)[1973]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:13:48.709589 kubelet[1973]: E0117 12:13:48.709346 1973 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:13:48.713350 containerd[1571]: time="2025-01-17T12:13:48.713290343Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:48.713962 containerd[1571]: time="2025-01-17T12:13:48.713916878Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140730" Jan 17 12:13:48.715903 containerd[1571]: time="2025-01-17T12:13:48.715872505Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:48.717909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:13:48.718293 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:13:48.720489 containerd[1571]: time="2025-01-17T12:13:48.720296001Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.358856297s" Jan 17 12:13:48.720489 containerd[1571]: time="2025-01-17T12:13:48.720331567Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:13:48.721369 containerd[1571]: time="2025-01-17T12:13:48.721318829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:48.746525 containerd[1571]: time="2025-01-17T12:13:48.746471219Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:13:50.454029 containerd[1571]: time="2025-01-17T12:13:50.453944861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:50.468188 containerd[1571]: time="2025-01-17T12:13:50.468136946Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216641" Jan 17 12:13:50.488196 containerd[1571]: time="2025-01-17T12:13:50.488166589Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:50.524167 containerd[1571]: time="2025-01-17T12:13:50.524123140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 
12:13:50.525181 containerd[1571]: time="2025-01-17T12:13:50.525118056Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.778609848s" Jan 17 12:13:50.525181 containerd[1571]: time="2025-01-17T12:13:50.525151008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 12:13:50.551269 containerd[1571]: time="2025-01-17T12:13:50.551232661Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:13:52.305204 containerd[1571]: time="2025-01-17T12:13:52.305145643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:52.306420 containerd[1571]: time="2025-01-17T12:13:52.306380349Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332841" Jan 17 12:13:52.308005 containerd[1571]: time="2025-01-17T12:13:52.307978386Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:52.311864 containerd[1571]: time="2025-01-17T12:13:52.311806655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:52.313123 containerd[1571]: time="2025-01-17T12:13:52.313085724Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.761822837s" Jan 17 12:13:52.313123 containerd[1571]: time="2025-01-17T12:13:52.313119928Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:13:52.335085 containerd[1571]: time="2025-01-17T12:13:52.335038633Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:13:53.675129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount419337443.mount: Deactivated successfully. 
Jan 17 12:13:54.489843 containerd[1571]: time="2025-01-17T12:13:54.489778059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:54.490714 containerd[1571]: time="2025-01-17T12:13:54.490676183Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620941" Jan 17 12:13:54.491854 containerd[1571]: time="2025-01-17T12:13:54.491792607Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:54.495884 containerd[1571]: time="2025-01-17T12:13:54.495840478Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 2.160759475s" Jan 17 12:13:54.495932 containerd[1571]: time="2025-01-17T12:13:54.495890732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:13:54.496277 containerd[1571]: time="2025-01-17T12:13:54.496142565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:54.519257 containerd[1571]: time="2025-01-17T12:13:54.519204083Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:13:55.060508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount76521693.mount: Deactivated successfully. 
Jan 17 12:13:56.736548 containerd[1571]: time="2025-01-17T12:13:56.736477382Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:56.737288 containerd[1571]: time="2025-01-17T12:13:56.737248709Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 17 12:13:56.738563 containerd[1571]: time="2025-01-17T12:13:56.738518640Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:56.741292 containerd[1571]: time="2025-01-17T12:13:56.741228202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:56.742825 containerd[1571]: time="2025-01-17T12:13:56.742768510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.223527167s" Jan 17 12:13:56.742825 containerd[1571]: time="2025-01-17T12:13:56.742817462Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:13:56.764266 containerd[1571]: time="2025-01-17T12:13:56.764010186Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:13:57.229968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571631854.mount: Deactivated successfully. 
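The PullImage / "Pulled image" pairs above are emitted by containerd's CRI plugin as the control-plane images are fetched. For illustration under the same assumptions as the earlier containerd sketch, an equivalent pull can be issued directly through the Go client; the image ref mirrors the log, everything else is hypothetical.

```go
// Sketch: pull and unpack the same pause image the journal shows being fetched.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same image ref as the "PullImage \"registry.k8s.io/pause:3.9\"" entry above.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.9", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}
```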
Jan 17 12:13:57.235944 containerd[1571]: time="2025-01-17T12:13:57.235894918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:57.237473 containerd[1571]: time="2025-01-17T12:13:57.237432712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 17 12:13:57.238616 containerd[1571]: time="2025-01-17T12:13:57.238577259Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:57.240898 containerd[1571]: time="2025-01-17T12:13:57.240851754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:13:57.241447 containerd[1571]: time="2025-01-17T12:13:57.241408158Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 477.357506ms" Jan 17 12:13:57.241447 containerd[1571]: time="2025-01-17T12:13:57.241434808Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:13:57.263374 containerd[1571]: time="2025-01-17T12:13:57.263332153Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:13:57.808948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223783560.mount: Deactivated successfully. Jan 17 12:13:58.968455 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:13:58.983673 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:13:59.126166 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:13:59.132660 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:13:59.485879 kubelet[2125]: E0117 12:13:59.485812 2125 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:13:59.490219 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:13:59.490499 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:14:00.325313 containerd[1571]: time="2025-01-17T12:14:00.325257877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:00.326090 containerd[1571]: time="2025-01-17T12:14:00.326057667Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jan 17 12:14:00.328202 containerd[1571]: time="2025-01-17T12:14:00.328172012Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:00.331063 containerd[1571]: time="2025-01-17T12:14:00.331037335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:00.332218 containerd[1571]: time="2025-01-17T12:14:00.332166152Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.068794936s" Jan 17 12:14:00.332269 containerd[1571]: time="2025-01-17T12:14:00.332216507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:14:02.696319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:02.714989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:14:02.734247 systemd[1]: Reloading requested from client PID 2234 ('systemctl') (unit session-5.scope)... Jan 17 12:14:02.734267 systemd[1]: Reloading... Jan 17 12:14:02.828391 zram_generator::config[2279]: No configuration found. Jan 17 12:14:03.085008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:14:03.157596 systemd[1]: Reloading finished in 422 ms. Jan 17 12:14:03.204687 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 12:14:03.204808 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 12:14:03.205258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:03.208575 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:14:03.362673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:03.367335 (kubelet)[2334]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:14:03.409505 kubelet[2334]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:14:03.409505 kubelet[2334]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 17 12:14:03.409505 kubelet[2334]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:14:03.410006 kubelet[2334]: I0117 12:14:03.409580 2334 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:14:03.614454 kubelet[2334]: I0117 12:14:03.614359 2334 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:14:03.614454 kubelet[2334]: I0117 12:14:03.614388 2334 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:14:03.614630 kubelet[2334]: I0117 12:14:03.614618 2334 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:14:03.631307 kubelet[2334]: E0117 12:14:03.631268 2334 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.634551 kubelet[2334]: I0117 12:14:03.634500 2334 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:14:03.645479 kubelet[2334]: I0117 12:14:03.645435 2334 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:14:03.646724 kubelet[2334]: I0117 12:14:03.646693 2334 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:14:03.646877 kubelet[2334]: I0117 12:14:03.646849 2334 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:14:03.646877 kubelet[2334]: I0117 12:14:03.646873 2334 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:14:03.647020 kubelet[2334]: I0117 12:14:03.646882 2334 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 
12:14:03.647020 kubelet[2334]: I0117 12:14:03.647002 2334 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:14:03.647106 kubelet[2334]: I0117 12:14:03.647088 2334 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:14:03.647106 kubelet[2334]: I0117 12:14:03.647102 2334 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:14:03.647180 kubelet[2334]: I0117 12:14:03.647127 2334 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:14:03.647180 kubelet[2334]: I0117 12:14:03.647146 2334 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:14:03.648113 kubelet[2334]: I0117 12:14:03.648088 2334 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:14:03.648707 kubelet[2334]: W0117 12:14:03.648614 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.648707 kubelet[2334]: E0117 12:14:03.648660 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.648707 kubelet[2334]: W0117 12:14:03.648682 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.648841 kubelet[2334]: E0117 12:14:03.648724 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.650349 kubelet[2334]: I0117 12:14:03.650321 2334 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:14:03.650410 kubelet[2334]: W0117 12:14:03.650381 2334 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
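Every kubelet watch/list failure above ends in "dial tcp 10.0.0.91:6443: connect: connection refused": nothing is listening on the API server port yet. A standalone probe, standard library only and not taken from the node's tooling, makes that state visible; the endpoint is the one from the log, and skipping TLS verification is only for the probe.

```go
// Probe the kube-apiserver endpoint the kubelet is trying to reach. Until the
// control-plane static pods come up this fails with "connection refused",
// matching the errors in the journal; afterwards /healthz answers "ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 3 * time.Second,
		Transport: &http.Transport{
			// The API server serves a cluster-internal CA at this stage, so the
			// probe skips verification rather than trusting it.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://10.0.0.91:6443/healthz")
	if err != nil {
		fmt.Println("API server not reachable yet:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println("healthz:", string(body))
}
```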
Jan 17 12:14:03.651181 kubelet[2334]: I0117 12:14:03.650872 2334 server.go:1256] "Started kubelet" Jan 17 12:14:03.651181 kubelet[2334]: I0117 12:14:03.650912 2334 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:14:03.652517 kubelet[2334]: I0117 12:14:03.651369 2334 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:14:03.652517 kubelet[2334]: I0117 12:14:03.651724 2334 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:14:03.652646 kubelet[2334]: I0117 12:14:03.652636 2334 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:14:03.653429 kubelet[2334]: I0117 12:14:03.653165 2334 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:14:03.655561 kubelet[2334]: I0117 12:14:03.654633 2334 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:14:03.655561 kubelet[2334]: I0117 12:14:03.654797 2334 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:14:03.655561 kubelet[2334]: I0117 12:14:03.654862 2334 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:14:03.655561 kubelet[2334]: W0117 12:14:03.655199 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.655561 kubelet[2334]: E0117 12:14:03.655246 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.656489 kubelet[2334]: E0117 12:14:03.656457 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="200ms" Jan 17 12:14:03.658886 kubelet[2334]: E0117 12:14:03.658863 2334 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b79d68b70f02e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:14:03.650854958 +0000 UTC m=+0.279111785,LastTimestamp:2025-01-17 12:14:03.650854958 +0000 UTC m=+0.279111785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:14:03.659393 kubelet[2334]: I0117 12:14:03.659369 2334 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:14:03.660681 kubelet[2334]: E0117 12:14:03.660663 2334 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:14:03.660955 kubelet[2334]: I0117 12:14:03.660931 2334 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:14:03.660955 kubelet[2334]: I0117 12:14:03.660945 2334 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:14:03.673225 kubelet[2334]: I0117 12:14:03.673083 2334 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:14:03.674411 kubelet[2334]: I0117 12:14:03.674380 2334 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:14:03.674411 kubelet[2334]: I0117 12:14:03.674411 2334 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:14:03.674479 kubelet[2334]: I0117 12:14:03.674433 2334 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:14:03.674522 kubelet[2334]: E0117 12:14:03.674503 2334 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:14:03.679399 kubelet[2334]: W0117 12:14:03.679324 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.679399 kubelet[2334]: E0117 12:14:03.679394 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:03.687726 kubelet[2334]: I0117 12:14:03.687692 2334 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:14:03.687726 kubelet[2334]: I0117 12:14:03.687713 2334 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:14:03.687876 kubelet[2334]: I0117 12:14:03.687750 2334 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:14:03.756295 kubelet[2334]: I0117 12:14:03.756258 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:03.756690 kubelet[2334]: E0117 12:14:03.756663 2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jan 17 12:14:03.775019 kubelet[2334]: E0117 12:14:03.774957 2334 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:14:03.857762 kubelet[2334]: E0117 12:14:03.857718 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="400ms" Jan 17 12:14:03.958405 kubelet[2334]: I0117 12:14:03.958284 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:03.958645 kubelet[2334]: E0117 12:14:03.958626 2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jan 17 12:14:03.975802 kubelet[2334]: E0117 12:14:03.975779 2334 kubelet.go:2353] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Jan 17 12:14:04.258498 kubelet[2334]: E0117 12:14:04.258375 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="800ms" Jan 17 12:14:04.360071 kubelet[2334]: I0117 12:14:04.360029 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:04.360472 kubelet[2334]: E0117 12:14:04.360439 2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jan 17 12:14:04.376545 kubelet[2334]: E0117 12:14:04.376507 2334 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:14:04.434326 kubelet[2334]: E0117 12:14:04.434267 2334 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.91:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.91:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181b79d68b70f02e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-17 12:14:03.650854958 +0000 UTC m=+0.279111785,LastTimestamp:2025-01-17 12:14:03.650854958 +0000 UTC m=+0.279111785,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 17 12:14:04.630562 kubelet[2334]: W0117 12:14:04.630417 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:04.630562 kubelet[2334]: E0117 12:14:04.630475 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.91:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:04.662392 kubelet[2334]: W0117 12:14:04.662349 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:04.662446 kubelet[2334]: E0117 12:14:04.662394 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:04.702826 kubelet[2334]: I0117 12:14:04.702784 2334 policy_none.go:49] "None policy: Start" Jan 17 12:14:04.703400 kubelet[2334]: I0117 12:14:04.703361 2334 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:14:04.703400 kubelet[2334]: I0117 12:14:04.703391 2334 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:14:04.795285 
kubelet[2334]: I0117 12:14:04.795249 2334 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:14:04.795681 kubelet[2334]: I0117 12:14:04.795666 2334 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:14:04.797050 kubelet[2334]: E0117 12:14:04.797029 2334 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 17 12:14:04.837668 kubelet[2334]: W0117 12:14:04.837624 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:04.837710 kubelet[2334]: E0117 12:14:04.837677 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.91:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:05.059240 kubelet[2334]: E0117 12:14:05.059112 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="1.6s" Jan 17 12:14:05.128891 kubelet[2334]: W0117 12:14:05.128834 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:05.128891 kubelet[2334]: E0117 12:14:05.128887 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.91:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:05.162378 kubelet[2334]: I0117 12:14:05.162355 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:05.162661 kubelet[2334]: E0117 12:14:05.162636 2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jan 17 12:14:05.176926 kubelet[2334]: I0117 12:14:05.176890 2334 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:14:05.177919 kubelet[2334]: I0117 12:14:05.177898 2334 topology_manager.go:215] "Topology Admit Handler" podUID="027734d6ff3d844af6bd41209181ef18" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:14:05.178684 kubelet[2334]: I0117 12:14:05.178632 2334 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:14:05.264161 kubelet[2334]: I0117 12:14:05.264110 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:05.264161 
kubelet[2334]: I0117 12:14:05.264153 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:05.264397 kubelet[2334]: I0117 12:14:05.264190 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:05.264397 kubelet[2334]: I0117 12:14:05.264216 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:05.264397 kubelet[2334]: I0117 12:14:05.264234 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:05.264397 kubelet[2334]: I0117 12:14:05.264252 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:05.264397 kubelet[2334]: I0117 12:14:05.264270 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:05.264517 kubelet[2334]: I0117 12:14:05.264288 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:05.264517 kubelet[2334]: I0117 12:14:05.264317 2334 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:14:05.483556 kubelet[2334]: E0117 12:14:05.483389 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:05.484545 containerd[1571]: time="2025-01-17T12:14:05.484080470Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:05.484843 kubelet[2334]: E0117 12:14:05.484238 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:05.484901 containerd[1571]: time="2025-01-17T12:14:05.484852659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:027734d6ff3d844af6bd41209181ef18,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:05.486269 kubelet[2334]: E0117 12:14:05.486232 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:05.486561 containerd[1571]: time="2025-01-17T12:14:05.486524314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:05.712096 kubelet[2334]: E0117 12:14:05.712061 2334 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.91:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:06.578884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834848027.mount: Deactivated successfully. Jan 17 12:14:06.586931 containerd[1571]: time="2025-01-17T12:14:06.586876276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:14:06.588063 containerd[1571]: time="2025-01-17T12:14:06.588034939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:14:06.588894 containerd[1571]: time="2025-01-17T12:14:06.588835380Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:14:06.589817 containerd[1571]: time="2025-01-17T12:14:06.589781444Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:14:06.590570 containerd[1571]: time="2025-01-17T12:14:06.590521663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:14:06.591444 containerd[1571]: time="2025-01-17T12:14:06.591412423Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:14:06.592297 containerd[1571]: time="2025-01-17T12:14:06.592241237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 17 12:14:06.595628 containerd[1571]: time="2025-01-17T12:14:06.595595778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" 
value:\"pinned\"}" Jan 17 12:14:06.596528 containerd[1571]: time="2025-01-17T12:14:06.596495075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.111546436s" Jan 17 12:14:06.598597 containerd[1571]: time="2025-01-17T12:14:06.598555128Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.111946236s" Jan 17 12:14:06.600034 containerd[1571]: time="2025-01-17T12:14:06.599998956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.115824459s" Jan 17 12:14:06.644830 kubelet[2334]: W0117 12:14:06.644790 2334 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:06.644830 kubelet[2334]: E0117 12:14:06.644831 2334 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.91:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.91:6443: connect: connection refused Jan 17 12:14:06.660176 kubelet[2334]: E0117 12:14:06.660147 2334 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.91:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.91:6443: connect: connection refused" interval="3.2s" Jan 17 12:14:06.727317 containerd[1571]: time="2025-01-17T12:14:06.727203650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:06.727317 containerd[1571]: time="2025-01-17T12:14:06.727274233Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:06.727466 containerd[1571]: time="2025-01-17T12:14:06.727289762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.727865 containerd[1571]: time="2025-01-17T12:14:06.727625201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:06.728091 containerd[1571]: time="2025-01-17T12:14:06.727869900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:06.728091 containerd[1571]: time="2025-01-17T12:14:06.727884778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.728091 containerd[1571]: time="2025-01-17T12:14:06.728056339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.728305 containerd[1571]: time="2025-01-17T12:14:06.727771766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:06.728305 containerd[1571]: time="2025-01-17T12:14:06.728219104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.728968 containerd[1571]: time="2025-01-17T12:14:06.728635245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:06.731575 containerd[1571]: time="2025-01-17T12:14:06.730844628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.731575 containerd[1571]: time="2025-01-17T12:14:06.730951228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:06.764860 kubelet[2334]: I0117 12:14:06.764816 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:06.765241 kubelet[2334]: E0117 12:14:06.765168 2334 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.91:6443/api/v1/nodes\": dial tcp 10.0.0.91:6443: connect: connection refused" node="localhost" Jan 17 12:14:06.793794 containerd[1571]: time="2025-01-17T12:14:06.793754837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd466de870bdf0e573d7965dbd759acf,Namespace:kube-system,Attempt:0,} returns sandbox id \"9640d0d55498e7a17216a4389be39ef87ffe4aa3c263958e8af1e2df2263e13e\"" Jan 17 12:14:06.795155 kubelet[2334]: E0117 12:14:06.795134 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:06.797942 containerd[1571]: time="2025-01-17T12:14:06.797875515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:605dd245551545e29d4e79fb03fd341e,Namespace:kube-system,Attempt:0,} returns sandbox id \"284e073793b36091bf15d3c2a2ae89b2c4a23f2e2cce5e2faadc577aee9a691f\"" Jan 17 12:14:06.798771 kubelet[2334]: E0117 12:14:06.798746 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:06.799085 containerd[1571]: time="2025-01-17T12:14:06.798993141Z" level=info msg="CreateContainer within sandbox \"9640d0d55498e7a17216a4389be39ef87ffe4aa3c263958e8af1e2df2263e13e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:14:06.800719 containerd[1571]: time="2025-01-17T12:14:06.800586118Z" level=info msg="CreateContainer within sandbox \"284e073793b36091bf15d3c2a2ae89b2c4a23f2e2cce5e2faadc577aee9a691f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:14:06.802359 containerd[1571]: time="2025-01-17T12:14:06.802333255Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:027734d6ff3d844af6bd41209181ef18,Namespace:kube-system,Attempt:0,} returns sandbox id \"3453d60e384f1107a02eebc89c94396cf129252007ec949bfef84f9d3ab406ad\"" Jan 17 12:14:06.802981 kubelet[2334]: E0117 12:14:06.802962 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:06.805308 containerd[1571]: time="2025-01-17T12:14:06.805223365Z" level=info msg="CreateContainer within sandbox \"3453d60e384f1107a02eebc89c94396cf129252007ec949bfef84f9d3ab406ad\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 12:14:06.827866 containerd[1571]: time="2025-01-17T12:14:06.827796848Z" level=info msg="CreateContainer within sandbox \"9640d0d55498e7a17216a4389be39ef87ffe4aa3c263958e8af1e2df2263e13e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a7feb2b3ae1c750bd5cd91335268701717d08ee8358904721011d2d87f0f0918\"" Jan 17 12:14:06.828642 containerd[1571]: time="2025-01-17T12:14:06.828574847Z" level=info msg="StartContainer for \"a7feb2b3ae1c750bd5cd91335268701717d08ee8358904721011d2d87f0f0918\"" Jan 17 12:14:06.835047 containerd[1571]: time="2025-01-17T12:14:06.834852841Z" level=info msg="CreateContainer within sandbox \"284e073793b36091bf15d3c2a2ae89b2c4a23f2e2cce5e2faadc577aee9a691f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"14fd7936c43f59b1edcd15dad09f290e7928a8160140e134c3855ad63c957814\"" Jan 17 12:14:06.835276 containerd[1571]: time="2025-01-17T12:14:06.835244435Z" level=info msg="StartContainer for \"14fd7936c43f59b1edcd15dad09f290e7928a8160140e134c3855ad63c957814\"" Jan 17 12:14:06.840791 containerd[1571]: time="2025-01-17T12:14:06.840733329Z" level=info msg="CreateContainer within sandbox \"3453d60e384f1107a02eebc89c94396cf129252007ec949bfef84f9d3ab406ad\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2320f55d38b2c45989540bc9691f4dd7f19565febd99be8c8f5e1db314461ed5\"" Jan 17 12:14:06.842573 containerd[1571]: time="2025-01-17T12:14:06.841914223Z" level=info msg="StartContainer for \"2320f55d38b2c45989540bc9691f4dd7f19565febd99be8c8f5e1db314461ed5\"" Jan 17 12:14:06.912785 containerd[1571]: time="2025-01-17T12:14:06.912726181Z" level=info msg="StartContainer for \"14fd7936c43f59b1edcd15dad09f290e7928a8160140e134c3855ad63c957814\" returns successfully" Jan 17 12:14:06.912891 containerd[1571]: time="2025-01-17T12:14:06.912845765Z" level=info msg="StartContainer for \"a7feb2b3ae1c750bd5cd91335268701717d08ee8358904721011d2d87f0f0918\" returns successfully" Jan 17 12:14:06.921990 containerd[1571]: time="2025-01-17T12:14:06.921946753Z" level=info msg="StartContainer for \"2320f55d38b2c45989540bc9691f4dd7f19565febd99be8c8f5e1db314461ed5\" returns successfully" Jan 17 12:14:07.697491 kubelet[2334]: E0117 12:14:07.697456 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:07.704553 kubelet[2334]: E0117 12:14:07.702136 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:07.704553 kubelet[2334]: E0117 12:14:07.702743 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:08.256719 kubelet[2334]: E0117 12:14:08.256670 2334 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:14:08.606658 kubelet[2334]: E0117 12:14:08.606546 2334 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:14:08.650872 kubelet[2334]: I0117 12:14:08.650836 2334 apiserver.go:52] "Watching apiserver" Jan 17 12:14:08.655551 kubelet[2334]: I0117 12:14:08.655516 2334 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:14:08.703181 kubelet[2334]: E0117 12:14:08.703143 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:09.149451 kubelet[2334]: E0117 12:14:09.149422 2334 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jan 17 12:14:09.872561 kubelet[2334]: E0117 12:14:09.869753 2334 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 17 12:14:09.966825 kubelet[2334]: I0117 12:14:09.966796 2334 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:09.972458 kubelet[2334]: I0117 12:14:09.972434 2334 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:14:10.151131 kubelet[2334]: E0117 12:14:10.151002 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:10.705614 kubelet[2334]: E0117 12:14:10.705580 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:10.975828 systemd[1]: Reloading requested from client PID 2612 ('systemctl') (unit session-5.scope)... Jan 17 12:14:10.975846 systemd[1]: Reloading... Jan 17 12:14:11.054636 zram_generator::config[2654]: No configuration found. Jan 17 12:14:11.181042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:14:11.211343 kubelet[2334]: E0117 12:14:11.211303 2334 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:11.276010 systemd[1]: Reloading finished in 299 ms. Jan 17 12:14:11.312754 kubelet[2334]: I0117 12:14:11.312650 2334 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:14:11.312704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:14:11.330121 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:14:11.330647 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:11.343905 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
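The "Failed to ensure lease exists, will retry" messages above show the retry interval doubling each time it fails: 200ms, 400ms, 800ms, 1.6s, 3.2s. The sketch below only illustrates that capped-doubling pattern as it appears in the log; it is not the kubelet lease controller's implementation, and the upper bound used here is an assumed illustrative value, since the real cap is never reached in this log.

```go
// backoff_sketch.go - reproduces the doubling retry interval visible in the
// lease controller messages above (200ms -> 400ms -> 800ms -> 1.6s -> 3.2s).
// Not kubelet code; maxInterval below is an arbitrary illustrative cap.
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the previous retry interval up to a limit.
func nextInterval(prev, limit time.Duration) time.Duration {
	next := prev * 2
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond // first interval reported in the log
	maxInterval := 5 * time.Second     // assumption: some upper bound, not visible in the log

	for i := 0; i < 6; i++ {
		fmt.Printf("retry %d after %v\n", i+1, interval)
		interval = nextInterval(interval, maxInterval)
	}
}
```

Running it prints 200ms, 400ms, 800ms, 1.6s, 3.2s and then the cap, matching the intervals the controller logged before the apiserver became reachable.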
Jan 17 12:14:11.484566 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:14:11.497232 (kubelet)[2706]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:14:11.550022 kubelet[2706]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:14:11.550022 kubelet[2706]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:14:11.550022 kubelet[2706]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:14:11.550022 kubelet[2706]: I0117 12:14:11.549985 2706 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:14:11.554745 kubelet[2706]: I0117 12:14:11.554719 2706 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:14:11.554745 kubelet[2706]: I0117 12:14:11.554745 2706 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:14:11.554977 kubelet[2706]: I0117 12:14:11.554962 2706 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:14:11.556384 kubelet[2706]: I0117 12:14:11.556361 2706 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:14:11.558472 kubelet[2706]: I0117 12:14:11.558106 2706 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:14:11.567493 kubelet[2706]: I0117 12:14:11.567449 2706 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:14:11.568214 kubelet[2706]: I0117 12:14:11.568159 2706 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:14:11.568454 kubelet[2706]: I0117 12:14:11.568402 2706 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:14:11.568454 kubelet[2706]: I0117 12:14:11.568445 2706 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:14:11.568454 kubelet[2706]: I0117 12:14:11.568458 2706 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:14:11.568647 kubelet[2706]: I0117 12:14:11.568496 2706 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:14:11.568680 kubelet[2706]: I0117 12:14:11.568671 2706 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:14:11.568722 kubelet[2706]: I0117 12:14:11.568688 2706 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:14:11.568759 kubelet[2706]: I0117 12:14:11.568722 2706 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:14:11.568759 kubelet[2706]: I0117 12:14:11.568739 2706 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:14:11.570848 kubelet[2706]: I0117 12:14:11.570250 2706 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:14:11.570848 kubelet[2706]: I0117 12:14:11.570557 2706 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:14:11.572928 kubelet[2706]: I0117 12:14:11.572912 2706 server.go:1256] "Started kubelet" Jan 17 12:14:11.575872 kubelet[2706]: I0117 12:14:11.575836 2706 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:14:11.577071 kubelet[2706]: I0117 12:14:11.577042 2706 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:14:11.577071 kubelet[2706]: I0117 12:14:11.577061 2706 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:14:11.578146 kubelet[2706]: I0117 12:14:11.578118 2706 ratelimit.go:55] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Jan 17 12:14:11.578363 kubelet[2706]: I0117 12:14:11.578340 2706 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:14:11.582459 kubelet[2706]: I0117 12:14:11.582427 2706 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:14:11.583685 kubelet[2706]: I0117 12:14:11.583658 2706 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:14:11.583913 kubelet[2706]: I0117 12:14:11.583891 2706 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:14:11.588030 kubelet[2706]: I0117 12:14:11.588003 2706 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:14:11.588117 kubelet[2706]: I0117 12:14:11.588091 2706 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:14:11.589067 kubelet[2706]: E0117 12:14:11.588186 2706 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:14:11.589628 kubelet[2706]: I0117 12:14:11.589604 2706 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:14:11.597370 kubelet[2706]: I0117 12:14:11.597327 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:14:11.598525 kubelet[2706]: I0117 12:14:11.598486 2706 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:14:11.598525 kubelet[2706]: I0117 12:14:11.598523 2706 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:14:11.598525 kubelet[2706]: I0117 12:14:11.598555 2706 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:14:11.598696 kubelet[2706]: E0117 12:14:11.598615 2706 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:14:11.646232 kubelet[2706]: I0117 12:14:11.646195 2706 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:14:11.646232 kubelet[2706]: I0117 12:14:11.646220 2706 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:14:11.646232 kubelet[2706]: I0117 12:14:11.646253 2706 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:14:11.646451 kubelet[2706]: I0117 12:14:11.646402 2706 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:14:11.646451 kubelet[2706]: I0117 12:14:11.646426 2706 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:14:11.646451 kubelet[2706]: I0117 12:14:11.646434 2706 policy_none.go:49] "None policy: Start" Jan 17 12:14:11.647419 kubelet[2706]: I0117 12:14:11.647384 2706 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:14:11.647419 kubelet[2706]: I0117 12:14:11.647412 2706 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:14:11.647640 kubelet[2706]: I0117 12:14:11.647582 2706 state_mem.go:75] "Updated machine memory state" Jan 17 12:14:11.649085 kubelet[2706]: I0117 12:14:11.649055 2706 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:14:11.649593 kubelet[2706]: I0117 12:14:11.649316 2706 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:14:11.687227 kubelet[2706]: 
I0117 12:14:11.687198 2706 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 17 12:14:11.693177 kubelet[2706]: I0117 12:14:11.693137 2706 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 17 12:14:11.693368 kubelet[2706]: I0117 12:14:11.693214 2706 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 17 12:14:11.699316 kubelet[2706]: I0117 12:14:11.699271 2706 topology_manager.go:215] "Topology Admit Handler" podUID="027734d6ff3d844af6bd41209181ef18" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 17 12:14:11.699433 kubelet[2706]: I0117 12:14:11.699384 2706 topology_manager.go:215] "Topology Admit Handler" podUID="dd466de870bdf0e573d7965dbd759acf" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 17 12:14:11.699433 kubelet[2706]: I0117 12:14:11.699427 2706 topology_manager.go:215] "Topology Admit Handler" podUID="605dd245551545e29d4e79fb03fd341e" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 17 12:14:11.704721 kubelet[2706]: E0117 12:14:11.704672 2706 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:11.705091 kubelet[2706]: E0117 12:14:11.705054 2706 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785509 kubelet[2706]: I0117 12:14:11.785470 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785509 kubelet[2706]: I0117 12:14:11.785513 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785657 kubelet[2706]: I0117 12:14:11.785553 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:11.785657 kubelet[2706]: I0117 12:14:11.785627 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:11.785714 kubelet[2706]: I0117 12:14:11.785687 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785714 kubelet[2706]: I0117 
12:14:11.785711 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785763 kubelet[2706]: I0117 12:14:11.785732 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd466de870bdf0e573d7965dbd759acf-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd466de870bdf0e573d7965dbd759acf\") " pod="kube-system/kube-controller-manager-localhost" Jan 17 12:14:11.785763 kubelet[2706]: I0117 12:14:11.785751 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/605dd245551545e29d4e79fb03fd341e-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"605dd245551545e29d4e79fb03fd341e\") " pod="kube-system/kube-scheduler-localhost" Jan 17 12:14:11.785807 kubelet[2706]: I0117 12:14:11.785774 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/027734d6ff3d844af6bd41209181ef18-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"027734d6ff3d844af6bd41209181ef18\") " pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:12.007689 kubelet[2706]: E0117 12:14:12.007647 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.008046 kubelet[2706]: E0117 12:14:12.007764 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.008233 kubelet[2706]: E0117 12:14:12.008143 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.569045 kubelet[2706]: I0117 12:14:12.568977 2706 apiserver.go:52] "Watching apiserver" Jan 17 12:14:12.582737 kubelet[2706]: I0117 12:14:12.582687 2706 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:14:12.587905 sudo[1733]: pam_unix(sudo:session): session closed for user root Jan 17 12:14:12.590169 sshd[1726]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:12.595384 systemd[1]: sshd@4-10.0.0.91:22-10.0.0.1:36660.service: Deactivated successfully. Jan 17 12:14:12.598114 systemd-logind[1552]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:14:12.598178 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:14:12.599507 systemd-logind[1552]: Removed session 5. 
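The most repeated message in this part of the journal is the dns.go:153 warning "Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8": the host's resolv.conf lists more nameservers than the kubelet will pass through, so it keeps only the first three and warns on every sync. The sketch below is a simplified, standalone illustration of that trimming, assuming a resolv.conf-style input; the limit of three reflects the classic resolver limit kubelet enforces, and the example input (including the extra 9.9.9.9 entry) is hypothetical.

```go
// resolv_limit_sketch.go - simplified illustration of the dns.go:153 warning
// above: when resolv.conf carries more nameservers than the limit (3), only
// the first three are applied and the rest are dropped. Not kubelet's parser.
package main

import (
	"bufio"
	"fmt"
	"strings"
)

const maxNameservers = 3 // classic resolver limit enforced by kubelet

// applyNameserverLimit returns the nameservers that would be applied and
// whether any had to be omitted.
func applyNameserverLimit(resolvConf string) (applied []string, omitted bool) {
	scanner := bufio.NewScanner(strings.NewReader(resolvConf))
	var all []string
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			all = append(all, fields[1])
		}
	}
	if len(all) > maxNameservers {
		return all[:maxNameservers], true
	}
	return all, false
}

func main() {
	// Hypothetical resolv.conf with one nameserver too many, which would
	// trigger the "Nameserver limits exceeded" warning seen in the log.
	conf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	applied, omitted := applyNameserverLimit(conf)
	if omitted {
		fmt.Printf("Nameserver limits exceeded, applied nameserver line is: %s\n",
			strings.Join(applied, " "))
	}
}
```

Because the warning is emitted per pod-DNS resolution rather than once, it recurs throughout the rest of the log even though the node is otherwise healthy.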
Jan 17 12:14:12.612126 kubelet[2706]: E0117 12:14:12.611997 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.612126 kubelet[2706]: E0117 12:14:12.612032 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.618381 kubelet[2706]: E0117 12:14:12.618359 2706 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 17 12:14:12.618904 kubelet[2706]: E0117 12:14:12.618890 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:12.628269 kubelet[2706]: I0117 12:14:12.628227 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.62818553 podStartE2EDuration="2.62818553s" podCreationTimestamp="2025-01-17 12:14:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:12.628172224 +0000 UTC m=+1.125690643" watchObservedRunningTime="2025-01-17 12:14:12.62818553 +0000 UTC m=+1.125703949" Jan 17 12:14:12.641053 kubelet[2706]: I0117 12:14:12.641028 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.640990298 podStartE2EDuration="1.640990298s" podCreationTimestamp="2025-01-17 12:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:12.634966454 +0000 UTC m=+1.132484883" watchObservedRunningTime="2025-01-17 12:14:12.640990298 +0000 UTC m=+1.138508717" Jan 17 12:14:12.641304 kubelet[2706]: I0117 12:14:12.641285 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.641231911 podStartE2EDuration="1.641231911s" podCreationTimestamp="2025-01-17 12:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:12.640975178 +0000 UTC m=+1.138493627" watchObservedRunningTime="2025-01-17 12:14:12.641231911 +0000 UTC m=+1.138750330" Jan 17 12:14:13.613026 kubelet[2706]: E0117 12:14:13.613000 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:14.614291 kubelet[2706]: E0117 12:14:14.614230 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:17.040714 kubelet[2706]: E0117 12:14:17.040670 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:17.617957 kubelet[2706]: E0117 12:14:17.617922 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jan 17 12:14:17.862265 kubelet[2706]: E0117 12:14:17.862228 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:18.619500 kubelet[2706]: E0117 12:14:18.619468 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:21.559774 update_engine[1559]: I20250117 12:14:21.559695 1559 update_attempter.cc:509] Updating boot flags... Jan 17 12:14:21.587583 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2778) Jan 17 12:14:21.615562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2779) Jan 17 12:14:21.642596 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2779) Jan 17 12:14:24.045977 kubelet[2706]: E0117 12:14:24.045929 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:26.745440 kubelet[2706]: I0117 12:14:26.745409 2706 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:14:26.745971 kubelet[2706]: I0117 12:14:26.745920 2706 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:14:26.746005 containerd[1571]: time="2025-01-17T12:14:26.745746290Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:14:28.048729 kubelet[2706]: I0117 12:14:28.048685 2706 topology_manager.go:215] "Topology Admit Handler" podUID="c62977c2-f736-48ef-bbac-59dda3aa9c59" podNamespace="kube-system" podName="kube-proxy-svqxr" Jan 17 12:14:28.060325 kubelet[2706]: I0117 12:14:28.058549 2706 topology_manager.go:215] "Topology Admit Handler" podUID="cb454d66-8883-438a-8e4f-157389cd451a" podNamespace="kube-flannel" podName="kube-flannel-ds-bfwjp" Jan 17 12:14:28.178185 kubelet[2706]: I0117 12:14:28.178134 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/cb454d66-8883-438a-8e4f-157389cd451a-cni-plugin\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.178185 kubelet[2706]: I0117 12:14:28.178181 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/cb454d66-8883-438a-8e4f-157389cd451a-flannel-cfg\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.178358 kubelet[2706]: I0117 12:14:28.178216 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c62977c2-f736-48ef-bbac-59dda3aa9c59-xtables-lock\") pod \"kube-proxy-svqxr\" (UID: \"c62977c2-f736-48ef-bbac-59dda3aa9c59\") " pod="kube-system/kube-proxy-svqxr" Jan 17 12:14:28.178358 kubelet[2706]: I0117 12:14:28.178256 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/c62977c2-f736-48ef-bbac-59dda3aa9c59-lib-modules\") pod \"kube-proxy-svqxr\" (UID: \"c62977c2-f736-48ef-bbac-59dda3aa9c59\") " pod="kube-system/kube-proxy-svqxr" Jan 17 12:14:28.178358 kubelet[2706]: I0117 12:14:28.178279 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8zxn\" (UniqueName: \"kubernetes.io/projected/c62977c2-f736-48ef-bbac-59dda3aa9c59-kube-api-access-c8zxn\") pod \"kube-proxy-svqxr\" (UID: \"c62977c2-f736-48ef-bbac-59dda3aa9c59\") " pod="kube-system/kube-proxy-svqxr" Jan 17 12:14:28.178358 kubelet[2706]: I0117 12:14:28.178306 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/cb454d66-8883-438a-8e4f-157389cd451a-run\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.178358 kubelet[2706]: I0117 12:14:28.178349 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfsf6\" (UniqueName: \"kubernetes.io/projected/cb454d66-8883-438a-8e4f-157389cd451a-kube-api-access-xfsf6\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.178550 kubelet[2706]: I0117 12:14:28.178401 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c62977c2-f736-48ef-bbac-59dda3aa9c59-kube-proxy\") pod \"kube-proxy-svqxr\" (UID: \"c62977c2-f736-48ef-bbac-59dda3aa9c59\") " pod="kube-system/kube-proxy-svqxr" Jan 17 12:14:28.178550 kubelet[2706]: I0117 12:14:28.178432 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cb454d66-8883-438a-8e4f-157389cd451a-xtables-lock\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.178550 kubelet[2706]: I0117 12:14:28.178454 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/cb454d66-8883-438a-8e4f-157389cd451a-cni\") pod \"kube-flannel-ds-bfwjp\" (UID: \"cb454d66-8883-438a-8e4f-157389cd451a\") " pod="kube-flannel/kube-flannel-ds-bfwjp" Jan 17 12:14:28.354415 kubelet[2706]: E0117 12:14:28.354274 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:28.355030 containerd[1571]: time="2025-01-17T12:14:28.354980056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svqxr,Uid:c62977c2-f736-48ef-bbac-59dda3aa9c59,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:28.364579 kubelet[2706]: E0117 12:14:28.364547 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:28.365177 containerd[1571]: time="2025-01-17T12:14:28.365135349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bfwjp,Uid:cb454d66-8883-438a-8e4f-157389cd451a,Namespace:kube-flannel,Attempt:0,}" Jan 17 12:14:28.397689 containerd[1571]: time="2025-01-17T12:14:28.397552285Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:28.398340 containerd[1571]: time="2025-01-17T12:14:28.398260804Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:28.398340 containerd[1571]: time="2025-01-17T12:14:28.398282966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:28.398478 containerd[1571]: time="2025-01-17T12:14:28.398407301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:28.441066 containerd[1571]: time="2025-01-17T12:14:28.440837972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:28.441066 containerd[1571]: time="2025-01-17T12:14:28.441028072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:28.441066 containerd[1571]: time="2025-01-17T12:14:28.441047148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:28.441386 containerd[1571]: time="2025-01-17T12:14:28.441180880Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:28.469899 containerd[1571]: time="2025-01-17T12:14:28.469826248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-bfwjp,Uid:cb454d66-8883-438a-8e4f-157389cd451a,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\"" Jan 17 12:14:28.470515 kubelet[2706]: E0117 12:14:28.470493 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:28.474714 containerd[1571]: time="2025-01-17T12:14:28.474611722Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 17 12:14:28.481934 containerd[1571]: time="2025-01-17T12:14:28.481887873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-svqxr,Uid:c62977c2-f736-48ef-bbac-59dda3aa9c59,Namespace:kube-system,Attempt:0,} returns sandbox id \"9e5342e34a029e11318ca0ea5c23059360d82eb1738ea3a231999dd5e226a82d\"" Jan 17 12:14:28.482731 kubelet[2706]: E0117 12:14:28.482710 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:28.484526 containerd[1571]: time="2025-01-17T12:14:28.484481205Z" level=info msg="CreateContainer within sandbox \"9e5342e34a029e11318ca0ea5c23059360d82eb1738ea3a231999dd5e226a82d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:14:28.998306 containerd[1571]: time="2025-01-17T12:14:28.998236008Z" level=info msg="CreateContainer within sandbox \"9e5342e34a029e11318ca0ea5c23059360d82eb1738ea3a231999dd5e226a82d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b49b86645586b10b5885b5f39b141c838ba89c662a99d6850ce662a29f56b46b\"" Jan 17 12:14:28.998959 containerd[1571]: time="2025-01-17T12:14:28.998873854Z" level=info msg="StartContainer for 
\"b49b86645586b10b5885b5f39b141c838ba89c662a99d6850ce662a29f56b46b\"" Jan 17 12:14:29.124746 containerd[1571]: time="2025-01-17T12:14:29.124669066Z" level=info msg="StartContainer for \"b49b86645586b10b5885b5f39b141c838ba89c662a99d6850ce662a29f56b46b\" returns successfully" Jan 17 12:14:29.644920 kubelet[2706]: E0117 12:14:29.644877 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:29.652813 kubelet[2706]: I0117 12:14:29.652682 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-svqxr" podStartSLOduration=2.652595723 podStartE2EDuration="2.652595723s" podCreationTimestamp="2025-01-17 12:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:29.652252464 +0000 UTC m=+18.149770883" watchObservedRunningTime="2025-01-17 12:14:29.652595723 +0000 UTC m=+18.150114142" Jan 17 12:14:30.527682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344813932.mount: Deactivated successfully. Jan 17 12:14:30.572713 containerd[1571]: time="2025-01-17T12:14:30.572647284Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:30.573406 containerd[1571]: time="2025-01-17T12:14:30.573345122Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 17 12:14:30.574542 containerd[1571]: time="2025-01-17T12:14:30.574496957Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:30.576856 containerd[1571]: time="2025-01-17T12:14:30.576817438Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:30.577793 containerd[1571]: time="2025-01-17T12:14:30.577764747Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.103111706s" Jan 17 12:14:30.577825 containerd[1571]: time="2025-01-17T12:14:30.577793441Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 17 12:14:30.579657 containerd[1571]: time="2025-01-17T12:14:30.579617896Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 17 12:14:30.599136 containerd[1571]: time="2025-01-17T12:14:30.599086666Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06\"" Jan 17 12:14:30.599688 containerd[1571]: 
time="2025-01-17T12:14:30.599646684Z" level=info msg="StartContainer for \"f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06\"" Jan 17 12:14:30.647478 kubelet[2706]: E0117 12:14:30.647447 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:30.651088 containerd[1571]: time="2025-01-17T12:14:30.651055322Z" level=info msg="StartContainer for \"f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06\" returns successfully" Jan 17 12:14:30.746631 containerd[1571]: time="2025-01-17T12:14:30.744948849Z" level=info msg="shim disconnected" id=f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06 namespace=k8s.io Jan 17 12:14:30.746631 containerd[1571]: time="2025-01-17T12:14:30.746617119Z" level=warning msg="cleaning up after shim disconnected" id=f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06 namespace=k8s.io Jan 17 12:14:30.746631 containerd[1571]: time="2025-01-17T12:14:30.746628581Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:31.459365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3e4f59d77a77283a43a0624a0410b155e4e35b86d84a5d86f04f7326b5f7a06-rootfs.mount: Deactivated successfully. Jan 17 12:14:31.650099 kubelet[2706]: E0117 12:14:31.650068 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:31.650821 containerd[1571]: time="2025-01-17T12:14:31.650789083Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 17 12:14:34.033979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1793320903.mount: Deactivated successfully. 
Jan 17 12:14:34.582346 containerd[1571]: time="2025-01-17T12:14:34.582278753Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:34.583087 containerd[1571]: time="2025-01-17T12:14:34.583048234Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jan 17 12:14:34.584434 containerd[1571]: time="2025-01-17T12:14:34.584380827Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:34.587364 containerd[1571]: time="2025-01-17T12:14:34.587310389Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:14:34.588493 containerd[1571]: time="2025-01-17T12:14:34.588453374Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.937622723s" Jan 17 12:14:34.588572 containerd[1571]: time="2025-01-17T12:14:34.588494241Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 17 12:14:34.590284 containerd[1571]: time="2025-01-17T12:14:34.590251234Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:14:34.603984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658071396.mount: Deactivated successfully. 
Jan 17 12:14:34.604856 containerd[1571]: time="2025-01-17T12:14:34.604795323Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c\"" Jan 17 12:14:34.605410 containerd[1571]: time="2025-01-17T12:14:34.605371670Z" level=info msg="StartContainer for \"fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c\"" Jan 17 12:14:34.665875 containerd[1571]: time="2025-01-17T12:14:34.665819849Z" level=info msg="StartContainer for \"fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c\" returns successfully" Jan 17 12:14:34.733353 kubelet[2706]: I0117 12:14:34.733228 2706 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:14:34.893460 kubelet[2706]: I0117 12:14:34.893325 2706 topology_manager.go:215] "Topology Admit Handler" podUID="d4600c9f-e95c-4763-90ed-f2af6a70997e" podNamespace="kube-system" podName="coredns-76f75df574-g27qg" Jan 17 12:14:34.893625 kubelet[2706]: I0117 12:14:34.893470 2706 topology_manager.go:215] "Topology Admit Handler" podUID="b739ace3-5406-4334-aad1-2490867f5b3f" podNamespace="kube-system" podName="coredns-76f75df574-fq8md" Jan 17 12:14:34.921276 kubelet[2706]: I0117 12:14:34.921246 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnfrh\" (UniqueName: \"kubernetes.io/projected/b739ace3-5406-4334-aad1-2490867f5b3f-kube-api-access-dnfrh\") pod \"coredns-76f75df574-fq8md\" (UID: \"b739ace3-5406-4334-aad1-2490867f5b3f\") " pod="kube-system/coredns-76f75df574-fq8md" Jan 17 12:14:34.921276 kubelet[2706]: I0117 12:14:34.921285 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4600c9f-e95c-4763-90ed-f2af6a70997e-config-volume\") pod \"coredns-76f75df574-g27qg\" (UID: \"d4600c9f-e95c-4763-90ed-f2af6a70997e\") " pod="kube-system/coredns-76f75df574-g27qg" Jan 17 12:14:34.921448 kubelet[2706]: I0117 12:14:34.921312 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5x52\" (UniqueName: \"kubernetes.io/projected/d4600c9f-e95c-4763-90ed-f2af6a70997e-kube-api-access-z5x52\") pod \"coredns-76f75df574-g27qg\" (UID: \"d4600c9f-e95c-4763-90ed-f2af6a70997e\") " pod="kube-system/coredns-76f75df574-g27qg" Jan 17 12:14:34.921448 kubelet[2706]: I0117 12:14:34.921337 2706 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b739ace3-5406-4334-aad1-2490867f5b3f-config-volume\") pod \"coredns-76f75df574-fq8md\" (UID: \"b739ace3-5406-4334-aad1-2490867f5b3f\") " pod="kube-system/coredns-76f75df574-fq8md" Jan 17 12:14:34.942169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c-rootfs.mount: Deactivated successfully. 
Jan 17 12:14:35.440447 kubelet[2706]: E0117 12:14:35.440258 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:35.440447 kubelet[2706]: E0117 12:14:35.440321 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:35.440849 containerd[1571]: time="2025-01-17T12:14:35.440808530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g27qg,Uid:d4600c9f-e95c-4763-90ed-f2af6a70997e,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:35.441243 containerd[1571]: time="2025-01-17T12:14:35.441189397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fq8md,Uid:b739ace3-5406-4334-aad1-2490867f5b3f,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:35.529771 containerd[1571]: time="2025-01-17T12:14:35.529691471Z" level=info msg="shim disconnected" id=fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c namespace=k8s.io Jan 17 12:14:35.529771 containerd[1571]: time="2025-01-17T12:14:35.529747918Z" level=warning msg="cleaning up after shim disconnected" id=fb1b623423c2f597a0e25e8311edf3cd2bf51d5c669deda63904c7f11e19374c namespace=k8s.io Jan 17 12:14:35.529771 containerd[1571]: time="2025-01-17T12:14:35.529758037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:14:35.903475 kubelet[2706]: E0117 12:14:35.903371 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:35.905740 containerd[1571]: time="2025-01-17T12:14:35.905685223Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 17 12:14:36.283927 containerd[1571]: time="2025-01-17T12:14:36.283876893Z" level=info msg="CreateContainer within sandbox \"d63ce21ea3cba04f2404ab36c856e1fd84c25faa2e604eeb156d751d1ef3c10f\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"578bec2abaf38ec10d2dc5f1b4a9450643436927dcc25cd56db9de1a538c1745\"" Jan 17 12:14:36.284759 containerd[1571]: time="2025-01-17T12:14:36.284442568Z" level=info msg="StartContainer for \"578bec2abaf38ec10d2dc5f1b4a9450643436927dcc25cd56db9de1a538c1745\"" Jan 17 12:14:36.290054 containerd[1571]: time="2025-01-17T12:14:36.289996211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fq8md,Uid:b739ace3-5406-4334-aad1-2490867f5b3f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:14:36.290247 kubelet[2706]: E0117 12:14:36.290205 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:14:36.290363 kubelet[2706]: E0117 12:14:36.290258 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fq8md" Jan 17 12:14:36.290363 kubelet[2706]: E0117 12:14:36.290278 2706 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-fq8md" Jan 17 12:14:36.290363 kubelet[2706]: E0117 12:14:36.290332 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-fq8md_kube-system(b739ace3-5406-4334-aad1-2490867f5b3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-fq8md_kube-system(b739ace3-5406-4334-aad1-2490867f5b3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-fq8md" podUID="b739ace3-5406-4334-aad1-2490867f5b3f" Jan 17 12:14:36.290735 systemd[1]: run-netns-cni\x2db944bc10\x2d6145\x2dcf96\x2dfc23\x2d4c7c3060e2b3.mount: Deactivated successfully. Jan 17 12:14:36.291344 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daab36c132488986ce6b8f5b87dfc8de620739972a7745b2fff2f3473331918a-shm.mount: Deactivated successfully. 
Jan 17 12:14:36.293589 containerd[1571]: time="2025-01-17T12:14:36.292915632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g27qg,Uid:d4600c9f-e95c-4763-90ed-f2af6a70997e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:14:36.293716 kubelet[2706]: E0117 12:14:36.293240 2706 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 17 12:14:36.293716 kubelet[2706]: E0117 12:14:36.293411 2706 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-g27qg" Jan 17 12:14:36.293716 kubelet[2706]: E0117 12:14:36.293433 2706 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-g27qg" Jan 17 12:14:36.293716 kubelet[2706]: E0117 12:14:36.293485 2706 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-g27qg_kube-system(d4600c9f-e95c-4763-90ed-f2af6a70997e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-g27qg_kube-system(d4600c9f-e95c-4763-90ed-f2af6a70997e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-g27qg" podUID="d4600c9f-e95c-4763-90ed-f2af6a70997e" Jan 17 12:14:36.344503 containerd[1571]: time="2025-01-17T12:14:36.344455837Z" level=info msg="StartContainer for \"578bec2abaf38ec10d2dc5f1b4a9450643436927dcc25cd56db9de1a538c1745\" returns successfully" Jan 17 12:14:36.906558 kubelet[2706]: E0117 12:14:36.906506 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:37.266263 systemd[1]: run-netns-cni\x2dcb4fb75a\x2dda41\x2de322\x2d1d6f\x2dd5ded702d9e1.mount: Deactivated successfully. Jan 17 12:14:37.266445 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-793d88cba4b5a6c67f27972de5561da1158f07f9ead2c448d9f4b24e70671630-shm.mount: Deactivated successfully. 
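Editor's note on the two CreatePodSandbox failures above: the flannel CNI plugin reads /run/flannel/subnet.env, which the kube-flannel daemon only writes after it has started and obtained its subnet lease, so sandbox creation for the two coredns pods fails with "no such file or directory" until then and kubelet keeps retrying. The sketch below is a minimal stand-in for such a parse, not flannel's loadFlannelSubnetEnv, and the example file contents are assumptions inferred from the bridge netconf logged later (192.168.0.0/17 network, 192.168.0.0/24 node subnet, MTU 1450, ipMasq false).

    // Minimal sketch of reading a flannel subnet.env file; example values are assumed.
    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    const exampleSubnetEnv = `FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=false
    `

    // parseSubnetEnv turns KEY=VALUE lines into a map, skipping blanks and comments.
    func parseSubnetEnv(contents string) map[string]string {
        env := make(map[string]string)
        sc := bufio.NewScanner(strings.NewReader(contents))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if line == "" || strings.HasPrefix(line, "#") {
                continue
            }
            if k, v, ok := strings.Cut(line, "="); ok {
                env[k] = v
            }
        }
        return env
    }

    func main() {
        env := parseSubnetEnv(exampleSubnetEnv)
        fmt.Println("pod network:", env["FLANNEL_NETWORK"], "node subnet:", env["FLANNEL_SUBNET"])
    }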
Jan 17 12:14:37.308767 kubelet[2706]: I0117 12:14:37.308716 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-bfwjp" podStartSLOduration=4.192287503 podStartE2EDuration="10.308670005s" podCreationTimestamp="2025-01-17 12:14:27 +0000 UTC" firstStartedPulling="2025-01-17 12:14:28.47242021 +0000 UTC m=+16.969938629" lastFinishedPulling="2025-01-17 12:14:34.588802722 +0000 UTC m=+23.086321131" observedRunningTime="2025-01-17 12:14:37.308404826 +0000 UTC m=+25.805923265" watchObservedRunningTime="2025-01-17 12:14:37.308670005 +0000 UTC m=+25.806188424" Jan 17 12:14:37.642261 systemd[1]: Started sshd@5-10.0.0.91:22-10.0.0.1:57120.service - OpenSSH per-connection server daemon (10.0.0.1:57120). Jan 17 12:14:37.643968 systemd-networkd[1246]: flannel.1: Link UP Jan 17 12:14:37.643974 systemd-networkd[1246]: flannel.1: Gained carrier Jan 17 12:14:37.692836 sshd[3275]: Accepted publickey for core from 10.0.0.1 port 57120 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:37.694647 sshd[3275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:37.699203 systemd-logind[1552]: New session 6 of user core. Jan 17 12:14:37.705367 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:14:37.820655 sshd[3275]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:37.824619 systemd[1]: sshd@5-10.0.0.91:22-10.0.0.1:57120.service: Deactivated successfully. Jan 17 12:14:37.826987 systemd-logind[1552]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:14:37.826996 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:14:37.828211 systemd-logind[1552]: Removed session 6. Jan 17 12:14:37.908457 kubelet[2706]: E0117 12:14:37.908341 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:39.315721 systemd-networkd[1246]: flannel.1: Gained IPv6LL Jan 17 12:14:42.840858 systemd[1]: Started sshd@6-10.0.0.91:22-10.0.0.1:57124.service - OpenSSH per-connection server daemon (10.0.0.1:57124). Jan 17 12:14:42.868607 sshd[3388]: Accepted publickey for core from 10.0.0.1 port 57124 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:42.870217 sshd[3388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:42.874496 systemd-logind[1552]: New session 7 of user core. Jan 17 12:14:42.888873 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:14:42.995844 sshd[3388]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:42.999997 systemd[1]: sshd@6-10.0.0.91:22-10.0.0.1:57124.service: Deactivated successfully. Jan 17 12:14:43.002782 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:14:43.003710 systemd-logind[1552]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:14:43.004764 systemd-logind[1552]: Removed session 7. Jan 17 12:14:48.007742 systemd[1]: Started sshd@7-10.0.0.91:22-10.0.0.1:54110.service - OpenSSH per-connection server daemon (10.0.0.1:54110). Jan 17 12:14:48.034500 sshd[3425]: Accepted publickey for core from 10.0.0.1 port 54110 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:48.035985 sshd[3425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:48.039675 systemd-logind[1552]: New session 8 of user core. 
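Editor's note on the pod_startup_latency_tracker entry above: podStartSLOduration appears to be the end-to-end startup time with image-pull time excluded. For kube-flannel-ds-bfwjp the pulls ran from 12:14:28.472 to 12:14:34.589 (about 6.116 s), so 10.309 s − 6.116 s ≈ 4.19 s, in line with the logged podStartSLOduration; the earlier kube-proxy-svqxr entry shows equal SLO and E2E durations because its image required no pull (both pulling timestamps are the zero time).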
Jan 17 12:14:48.052769 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:14:48.156140 sshd[3425]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:48.165774 systemd[1]: Started sshd@8-10.0.0.91:22-10.0.0.1:54118.service - OpenSSH per-connection server daemon (10.0.0.1:54118). Jan 17 12:14:48.166423 systemd[1]: sshd@7-10.0.0.91:22-10.0.0.1:54110.service: Deactivated successfully. Jan 17 12:14:48.168597 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:14:48.170103 systemd-logind[1552]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:14:48.171000 systemd-logind[1552]: Removed session 8. Jan 17 12:14:48.192221 sshd[3440]: Accepted publickey for core from 10.0.0.1 port 54118 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:48.193718 sshd[3440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:48.197917 systemd-logind[1552]: New session 9 of user core. Jan 17 12:14:48.208787 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:14:48.340208 sshd[3440]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:48.349013 systemd[1]: Started sshd@9-10.0.0.91:22-10.0.0.1:54126.service - OpenSSH per-connection server daemon (10.0.0.1:54126). Jan 17 12:14:48.349599 systemd[1]: sshd@8-10.0.0.91:22-10.0.0.1:54118.service: Deactivated successfully. Jan 17 12:14:48.352782 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:14:48.352912 systemd-logind[1552]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:14:48.354335 systemd-logind[1552]: Removed session 9. Jan 17 12:14:48.377644 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 54126 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:48.379269 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:48.383374 systemd-logind[1552]: New session 10 of user core. Jan 17 12:14:48.387885 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:14:48.495236 sshd[3453]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:48.499790 systemd[1]: sshd@9-10.0.0.91:22-10.0.0.1:54126.service: Deactivated successfully. Jan 17 12:14:48.502584 systemd-logind[1552]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:14:48.502604 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:14:48.503821 systemd-logind[1552]: Removed session 10. 
Jan 17 12:14:49.599888 kubelet[2706]: E0117 12:14:49.599822 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:49.600384 containerd[1571]: time="2025-01-17T12:14:49.600301710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fq8md,Uid:b739ace3-5406-4334-aad1-2490867f5b3f,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:49.630753 systemd-networkd[1246]: cni0: Link UP Jan 17 12:14:49.630763 systemd-networkd[1246]: cni0: Gained carrier Jan 17 12:14:49.634524 systemd-networkd[1246]: cni0: Lost carrier Jan 17 12:14:49.638386 systemd-networkd[1246]: veth79dc4de7: Link UP Jan 17 12:14:49.640911 kernel: cni0: port 1(veth79dc4de7) entered blocking state Jan 17 12:14:49.640969 kernel: cni0: port 1(veth79dc4de7) entered disabled state Jan 17 12:14:49.640985 kernel: veth79dc4de7: entered allmulticast mode Jan 17 12:14:49.642404 kernel: veth79dc4de7: entered promiscuous mode Jan 17 12:14:49.643261 kernel: cni0: port 1(veth79dc4de7) entered blocking state Jan 17 12:14:49.643300 kernel: cni0: port 1(veth79dc4de7) entered forwarding state Jan 17 12:14:49.644308 kernel: cni0: port 1(veth79dc4de7) entered disabled state Jan 17 12:14:49.652758 kernel: cni0: port 1(veth79dc4de7) entered blocking state Jan 17 12:14:49.652831 kernel: cni0: port 1(veth79dc4de7) entered forwarding state Jan 17 12:14:49.652868 systemd-networkd[1246]: veth79dc4de7: Gained carrier Jan 17 12:14:49.653517 systemd-networkd[1246]: cni0: Gained carrier Jan 17 12:14:49.703060 containerd[1571]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Jan 17 12:14:49.703060 containerd[1571]: delegateAdd: netconf sent to delegate plugin: Jan 17 12:14:49.721470 containerd[1571]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-17T12:14:49.721357218Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:49.721470 containerd[1571]: time="2025-01-17T12:14:49.721446665Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:49.721628 containerd[1571]: time="2025-01-17T12:14:49.721477393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:49.721664 containerd[1571]: time="2025-01-17T12:14:49.721617367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:49.748301 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:14:49.774191 containerd[1571]: time="2025-01-17T12:14:49.774147256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-fq8md,Uid:b739ace3-5406-4334-aad1-2490867f5b3f,Namespace:kube-system,Attempt:0,} returns sandbox id \"070b355a31ab9d9164c73568f42b5b923e91976adc082adbb8f7d8bf9bbd5ffa\"" Jan 17 12:14:49.775088 kubelet[2706]: E0117 12:14:49.775051 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:49.777010 containerd[1571]: time="2025-01-17T12:14:49.776977484Z" level=info msg="CreateContainer within sandbox \"070b355a31ab9d9164c73568f42b5b923e91976adc082adbb8f7d8bf9bbd5ffa\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:14:49.794056 containerd[1571]: time="2025-01-17T12:14:49.794017164Z" level=info msg="CreateContainer within sandbox \"070b355a31ab9d9164c73568f42b5b923e91976adc082adbb8f7d8bf9bbd5ffa\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a66bd0a414a55f050be3dd6e4d340c496af382733b04a0045228567db2216bb8\"" Jan 17 12:14:49.794621 containerd[1571]: time="2025-01-17T12:14:49.794480494Z" level=info msg="StartContainer for \"a66bd0a414a55f050be3dd6e4d340c496af382733b04a0045228567db2216bb8\"" Jan 17 12:14:49.846196 containerd[1571]: time="2025-01-17T12:14:49.846160357Z" level=info msg="StartContainer for \"a66bd0a414a55f050be3dd6e4d340c496af382733b04a0045228567db2216bb8\" returns successfully" Jan 17 12:14:49.929298 kubelet[2706]: E0117 12:14:49.929192 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:49.937742 kubelet[2706]: I0117 12:14:49.937317 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-fq8md" podStartSLOduration=22.937270554 podStartE2EDuration="22.937270554s" podCreationTimestamp="2025-01-17 12:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:49.937178241 +0000 UTC m=+38.434696660" watchObservedRunningTime="2025-01-17 12:14:49.937270554 +0000 UTC m=+38.434789003" Jan 17 12:14:50.931342 kubelet[2706]: E0117 12:14:50.931287 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:51.600245 kubelet[2706]: E0117 12:14:51.600205 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:51.600729 containerd[1571]: time="2025-01-17T12:14:51.600676018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g27qg,Uid:d4600c9f-e95c-4763-90ed-f2af6a70997e,Namespace:kube-system,Attempt:0,}" Jan 17 12:14:51.604802 systemd-networkd[1246]: veth79dc4de7: Gained IPv6LL Jan 17 12:14:51.605714 systemd-networkd[1246]: cni0: Gained IPv6LL Jan 17 12:14:51.624320 systemd-networkd[1246]: veth12b2275d: Link UP Jan 17 12:14:51.626701 kernel: cni0: port 2(veth12b2275d) entered blocking state Jan 17 
12:14:51.626800 kernel: cni0: port 2(veth12b2275d) entered disabled state Jan 17 12:14:51.626827 kernel: veth12b2275d: entered allmulticast mode Jan 17 12:14:51.628563 kernel: veth12b2275d: entered promiscuous mode Jan 17 12:14:51.635216 kernel: cni0: port 2(veth12b2275d) entered blocking state Jan 17 12:14:51.635298 kernel: cni0: port 2(veth12b2275d) entered forwarding state Jan 17 12:14:51.635263 systemd-networkd[1246]: veth12b2275d: Gained carrier Jan 17 12:14:51.637813 containerd[1571]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Jan 17 12:14:51.637813 containerd[1571]: delegateAdd: netconf sent to delegate plugin: Jan 17 12:14:51.658135 containerd[1571]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-17T12:14:51.658032742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:14:51.658135 containerd[1571]: time="2025-01-17T12:14:51.658103555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:14:51.658135 containerd[1571]: time="2025-01-17T12:14:51.658117190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:51.658905 containerd[1571]: time="2025-01-17T12:14:51.658788531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:14:51.683291 systemd-resolved[1463]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 17 12:14:51.708788 containerd[1571]: time="2025-01-17T12:14:51.708728071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-g27qg,Uid:d4600c9f-e95c-4763-90ed-f2af6a70997e,Namespace:kube-system,Attempt:0,} returns sandbox id \"a440c2129542eb80cedc02fea0cfd8bac7957ded2b3be084ec34b1446fa18aec\"" Jan 17 12:14:51.709467 kubelet[2706]: E0117 12:14:51.709392 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:51.711280 containerd[1571]: time="2025-01-17T12:14:51.711245100Z" level=info msg="CreateContainer within sandbox \"a440c2129542eb80cedc02fea0cfd8bac7957ded2b3be084ec34b1446fa18aec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:14:51.731042 containerd[1571]: time="2025-01-17T12:14:51.730989492Z" level=info msg="CreateContainer within sandbox \"a440c2129542eb80cedc02fea0cfd8bac7957ded2b3be084ec34b1446fa18aec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f4a6d90c0ed93cabb754988f5314366b9eca13de8366dd4208db05b862b45ff7\"" Jan 17 12:14:51.732020 containerd[1571]: time="2025-01-17T12:14:51.731989993Z" level=info msg="StartContainer for \"f4a6d90c0ed93cabb754988f5314366b9eca13de8366dd4208db05b862b45ff7\"" Jan 17 12:14:51.791467 containerd[1571]: time="2025-01-17T12:14:51.791396346Z" level=info msg="StartContainer for \"f4a6d90c0ed93cabb754988f5314366b9eca13de8366dd4208db05b862b45ff7\" returns successfully" Jan 17 12:14:51.933783 kubelet[2706]: E0117 12:14:51.933510 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:51.933783 kubelet[2706]: E0117 12:14:51.933578 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:51.956888 kubelet[2706]: I0117 12:14:51.956841 2706 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-g27qg" podStartSLOduration=24.956800425 podStartE2EDuration="24.956800425s" podCreationTimestamp="2025-01-17 12:14:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:14:51.944230771 +0000 UTC m=+40.441749190" watchObservedRunningTime="2025-01-17 12:14:51.956800425 +0000 UTC m=+40.454318844" Jan 17 12:14:52.934964 kubelet[2706]: E0117 12:14:52.934930 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:53.075774 systemd-networkd[1246]: veth12b2275d: Gained IPv6LL Jan 17 12:14:53.503800 systemd[1]: Started sshd@10-10.0.0.91:22-10.0.0.1:54136.service - OpenSSH per-connection server daemon (10.0.0.1:54136). 
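Editor's note on the delegateAdd entries logged above for the two coredns sandboxes: the flannel CNI plugin wraps the node's subnet data into a netconf for the bridge plugin (bridge cbr0, MTU 1450, host-local IPAM over 192.168.0.0/24 with a route to 192.168.0.0/17), which is what brings up cni0 and the veth ports seen in the kernel messages. The Go sketch below just decodes that logged JSON to highlight the handed-over fields; the struct covers only the keys visible in the log, not the full CNI bridge schema.

    // Illustration: decode the delegate netconf from the log and print key fields.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    const netconf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,
     "ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],
             "routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},
     "isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

    type delegateConf struct {
        Name             string `json:"name"`
        Type             string `json:"type"`
        MTU              int    `json:"mtu"`
        IsDefaultGateway bool   `json:"isDefaultGateway"`
        IPAM             struct {
            Type   string                `json:"type"`
            Ranges [][]map[string]string `json:"ranges"`
        } `json:"ipam"`
    }

    func main() {
        var c delegateConf
        if err := json.Unmarshal([]byte(netconf), &c); err != nil {
            panic(err)
        }
        fmt.Printf("bridge %q via %s IPAM, subnet %s, mtu %d\n",
            c.Name, c.IPAM.Type, c.IPAM.Ranges[0][0]["subnet"], c.MTU)
    }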
Jan 17 12:14:53.533102 sshd[3730]: Accepted publickey for core from 10.0.0.1 port 54136 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:53.535743 sshd[3730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:53.540502 systemd-logind[1552]: New session 11 of user core. Jan 17 12:14:53.551960 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:14:53.691317 sshd[3730]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:53.695308 systemd[1]: sshd@10-10.0.0.91:22-10.0.0.1:54136.service: Deactivated successfully. Jan 17 12:14:53.697688 systemd-logind[1552]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:14:53.697847 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:14:53.699000 systemd-logind[1552]: Removed session 11. Jan 17 12:14:53.937214 kubelet[2706]: E0117 12:14:53.937176 2706 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 17 12:14:58.705815 systemd[1]: Started sshd@11-10.0.0.91:22-10.0.0.1:34842.service - OpenSSH per-connection server daemon (10.0.0.1:34842). Jan 17 12:14:58.736331 sshd[3766]: Accepted publickey for core from 10.0.0.1 port 34842 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:58.738496 sshd[3766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:58.743719 systemd-logind[1552]: New session 12 of user core. Jan 17 12:14:58.752828 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:14:58.862422 sshd[3766]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:58.871824 systemd[1]: Started sshd@12-10.0.0.91:22-10.0.0.1:34858.service - OpenSSH per-connection server daemon (10.0.0.1:34858). Jan 17 12:14:58.872305 systemd[1]: sshd@11-10.0.0.91:22-10.0.0.1:34842.service: Deactivated successfully. Jan 17 12:14:58.875903 systemd-logind[1552]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:14:58.876615 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:14:58.877612 systemd-logind[1552]: Removed session 12. Jan 17 12:14:58.899417 sshd[3779]: Accepted publickey for core from 10.0.0.1 port 34858 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:58.901071 sshd[3779]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:58.905796 systemd-logind[1552]: New session 13 of user core. Jan 17 12:14:58.916879 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:14:59.143214 sshd[3779]: pam_unix(sshd:session): session closed for user core Jan 17 12:14:59.155914 systemd[1]: Started sshd@13-10.0.0.91:22-10.0.0.1:34868.service - OpenSSH per-connection server daemon (10.0.0.1:34868). Jan 17 12:14:59.156613 systemd[1]: sshd@12-10.0.0.91:22-10.0.0.1:34858.service: Deactivated successfully. Jan 17 12:14:59.158641 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:14:59.160648 systemd-logind[1552]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:14:59.161973 systemd-logind[1552]: Removed session 13. 
Jan 17 12:14:59.186065 sshd[3795]: Accepted publickey for core from 10.0.0.1 port 34868 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:14:59.187761 sshd[3795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:14:59.192263 systemd-logind[1552]: New session 14 of user core. Jan 17 12:14:59.202908 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 12:15:00.477367 sshd[3795]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:00.486726 systemd[1]: Started sshd@14-10.0.0.91:22-10.0.0.1:34884.service - OpenSSH per-connection server daemon (10.0.0.1:34884). Jan 17 12:15:00.488737 systemd[1]: sshd@13-10.0.0.91:22-10.0.0.1:34868.service: Deactivated successfully. Jan 17 12:15:00.496136 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:15:00.501014 systemd-logind[1552]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:15:00.502927 systemd-logind[1552]: Removed session 14. Jan 17 12:15:00.531026 sshd[3816]: Accepted publickey for core from 10.0.0.1 port 34884 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:00.532839 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:00.536942 systemd-logind[1552]: New session 15 of user core. Jan 17 12:15:00.547802 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:15:00.827867 sshd[3816]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:00.834921 systemd[1]: Started sshd@15-10.0.0.91:22-10.0.0.1:34898.service - OpenSSH per-connection server daemon (10.0.0.1:34898). Jan 17 12:15:00.835458 systemd[1]: sshd@14-10.0.0.91:22-10.0.0.1:34884.service: Deactivated successfully. Jan 17 12:15:00.840169 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:15:00.841435 systemd-logind[1552]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:15:00.842870 systemd-logind[1552]: Removed session 15. Jan 17 12:15:00.864867 sshd[3831]: Accepted publickey for core from 10.0.0.1 port 34898 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:00.866666 sshd[3831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:00.871509 systemd-logind[1552]: New session 16 of user core. Jan 17 12:15:00.881808 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:15:00.994821 sshd[3831]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:00.998771 systemd[1]: sshd@15-10.0.0.91:22-10.0.0.1:34898.service: Deactivated successfully. Jan 17 12:15:01.001845 systemd-logind[1552]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:15:01.001998 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:15:01.002930 systemd-logind[1552]: Removed session 16. Jan 17 12:15:06.012944 systemd[1]: Started sshd@16-10.0.0.91:22-10.0.0.1:34910.service - OpenSSH per-connection server daemon (10.0.0.1:34910). Jan 17 12:15:06.040238 sshd[3871]: Accepted publickey for core from 10.0.0.1 port 34910 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:06.041761 sshd[3871]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:06.046282 systemd-logind[1552]: New session 17 of user core. Jan 17 12:15:06.055797 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 12:15:06.165414 sshd[3871]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:06.169129 systemd[1]: sshd@16-10.0.0.91:22-10.0.0.1:34910.service: Deactivated successfully. Jan 17 12:15:06.171680 systemd-logind[1552]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:15:06.171746 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:15:06.172816 systemd-logind[1552]: Removed session 17. Jan 17 12:15:11.172724 systemd[1]: Started sshd@17-10.0.0.91:22-10.0.0.1:53814.service - OpenSSH per-connection server daemon (10.0.0.1:53814). Jan 17 12:15:11.199696 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 53814 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:11.201170 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:11.205138 systemd-logind[1552]: New session 18 of user core. Jan 17 12:15:11.219777 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:15:11.325885 sshd[3910]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:11.329941 systemd[1]: sshd@17-10.0.0.91:22-10.0.0.1:53814.service: Deactivated successfully. Jan 17 12:15:11.332725 systemd-logind[1552]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:15:11.332755 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:15:11.333769 systemd-logind[1552]: Removed session 18. Jan 17 12:15:16.339766 systemd[1]: Started sshd@18-10.0.0.91:22-10.0.0.1:53828.service - OpenSSH per-connection server daemon (10.0.0.1:53828). Jan 17 12:15:16.367107 sshd[3949]: Accepted publickey for core from 10.0.0.1 port 53828 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:16.368569 sshd[3949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:16.372524 systemd-logind[1552]: New session 19 of user core. Jan 17 12:15:16.382779 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:15:16.493237 sshd[3949]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:16.498161 systemd[1]: sshd@18-10.0.0.91:22-10.0.0.1:53828.service: Deactivated successfully. Jan 17 12:15:16.500915 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:15:16.501709 systemd-logind[1552]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:15:16.502989 systemd-logind[1552]: Removed session 19. Jan 17 12:15:21.508964 systemd[1]: Started sshd@19-10.0.0.91:22-10.0.0.1:51276.service - OpenSSH per-connection server daemon (10.0.0.1:51276). Jan 17 12:15:21.536896 sshd[3985]: Accepted publickey for core from 10.0.0.1 port 51276 ssh2: RSA SHA256:+651OWiRclMhzyyc0NpHiLMoB9ynZD20kdanNwVyLtE Jan 17 12:15:21.538561 sshd[3985]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:15:21.543203 systemd-logind[1552]: New session 20 of user core. Jan 17 12:15:21.552854 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:15:21.665375 sshd[3985]: pam_unix(sshd:session): session closed for user core Jan 17 12:15:21.670570 systemd[1]: sshd@19-10.0.0.91:22-10.0.0.1:51276.service: Deactivated successfully. Jan 17 12:15:21.673816 systemd-logind[1552]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:15:21.674143 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:15:21.675993 systemd-logind[1552]: Removed session 20.