Dec 13 01:07:56.876559 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:07:56.876580 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:07:56.876591 kernel: BIOS-provided physical RAM map:
Dec 13 01:07:56.876598 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 01:07:56.876603 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 01:07:56.876609 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 01:07:56.876617 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Dec 13 01:07:56.876623 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Dec 13 01:07:56.876629 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 01:07:56.876637 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 01:07:56.876644 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 01:07:56.876650 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 01:07:56.876656 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 01:07:56.876662 kernel: NX (Execute Disable) protection: active
Dec 13 01:07:56.876669 kernel: APIC: Static calls initialized
Dec 13 01:07:56.876678 kernel: SMBIOS 2.8 present.
Dec 13 01:07:56.876685 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Dec 13 01:07:56.876692 kernel: Hypervisor detected: KVM
Dec 13 01:07:56.876698 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 01:07:56.876705 kernel: kvm-clock: using sched offset of 2174037187 cycles
Dec 13 01:07:56.876712 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 01:07:56.876719 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 01:07:56.876726 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:07:56.876733 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:07:56.876740 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Dec 13 01:07:56.876749 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 01:07:56.876756 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:07:56.876770 kernel: Using GB pages for direct mapping
Dec 13 01:07:56.876777 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:07:56.876785 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Dec 13 01:07:56.876792 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876799 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876806 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876815 kernel: ACPI: FACS 0x000000009CFE0000 000040
Dec 13 01:07:56.876822 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876828 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876835 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876842 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 01:07:56.876849 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Dec 13 01:07:56.876856 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Dec 13 01:07:56.876866 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Dec 13 01:07:56.876875 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Dec 13 01:07:56.876882 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Dec 13 01:07:56.876889 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Dec 13 01:07:56.876896 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Dec 13 01:07:56.876903 kernel: No NUMA configuration found
Dec 13 01:07:56.876910 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Dec 13 01:07:56.876917 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Dec 13 01:07:56.876926 kernel: Zone ranges:
Dec 13 01:07:56.876933 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:07:56.876940 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Dec 13 01:07:56.876947 kernel: Normal empty
Dec 13 01:07:56.876954 kernel: Movable zone start for each node
Dec 13 01:07:56.876961 kernel: Early memory node ranges
Dec 13 01:07:56.876968 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 01:07:56.876975 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Dec 13 01:07:56.876982 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Dec 13 01:07:56.876991 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:07:56.876998 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 01:07:56.877005 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Dec 13 01:07:56.877012 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 01:07:56.877019 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 01:07:56.877026 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:07:56.877033 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 01:07:56.877040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 01:07:56.877047 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:07:56.877057 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 01:07:56.877064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 01:07:56.877071 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:07:56.877078 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 01:07:56.877097 kernel: TSC deadline timer available
Dec 13 01:07:56.877107 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 01:07:56.877117 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 01:07:56.877127 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 01:07:56.877137 kernel: kvm-guest: setup PV sched yield
Dec 13 01:07:56.877145 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 01:07:56.877155 kernel: Booting paravirtualized kernel on KVM
Dec 13 01:07:56.877163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:07:56.877170 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 01:07:56.877177 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 01:07:56.877185 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 01:07:56.877191 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 01:07:56.877198 kernel: kvm-guest: PV spinlocks enabled
Dec 13 01:07:56.877205 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:07:56.877214 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:07:56.877224 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:07:56.877231 kernel: random: crng init done
Dec 13 01:07:56.877238 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:07:56.877245 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 01:07:56.877252 kernel: Fallback order for Node 0: 0
Dec 13 01:07:56.877259 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Dec 13 01:07:56.877266 kernel: Policy zone: DMA32
Dec 13 01:07:56.877273 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:07:56.877283 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Dec 13 01:07:56.877290 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 01:07:56.877297 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:07:56.877304 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:07:56.877311 kernel: Dynamic Preempt: voluntary
Dec 13 01:07:56.877318 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:07:56.877326 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:07:56.877333 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 01:07:56.877340 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:07:56.877349 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:07:56.877357 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:07:56.877364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:07:56.877371 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 01:07:56.877378 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 01:07:56.877385 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:07:56.877392 kernel: Console: colour VGA+ 80x25
Dec 13 01:07:56.877399 kernel: printk: console [ttyS0] enabled
Dec 13 01:07:56.877407 kernel: ACPI: Core revision 20230628
Dec 13 01:07:56.877416 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 01:07:56.877423 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:07:56.877430 kernel: x2apic enabled
Dec 13 01:07:56.877437 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 01:07:56.877444 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 01:07:56.877451 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 01:07:56.877459 kernel: kvm-guest: setup PV IPIs
Dec 13 01:07:56.877475 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 01:07:56.877482 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 01:07:56.877490 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 01:07:56.877497 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 01:07:56.877505 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 01:07:56.877515 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 01:07:56.877522 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:07:56.877530 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:07:56.877537 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:07:56.877545 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:07:56.877554 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 01:07:56.877562 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 01:07:56.877569 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 01:07:56.877577 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 01:07:56.877584 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 01:07:56.877592 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 01:07:56.877600 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 01:07:56.877607 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:07:56.877616 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:07:56.877624 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:07:56.877631 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:07:56.877639 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 01:07:56.877646 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:07:56.877653 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:07:56.877661 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:07:56.877668 kernel: landlock: Up and running.
Dec 13 01:07:56.877675 kernel: SELinux: Initializing.
Dec 13 01:07:56.877685 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:07:56.877693 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 01:07:56.877700 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 01:07:56.877708 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:07:56.877715 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:07:56.877723 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 01:07:56.877730 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 01:07:56.877737 kernel: ... version:                0
Dec 13 01:07:56.877747 kernel: ... bit width:              48
Dec 13 01:07:56.877754 kernel: ... generic registers:      6
Dec 13 01:07:56.877762 kernel: ... value mask:             0000ffffffffffff
Dec 13 01:07:56.877776 kernel: ... max period:             00007fffffffffff
Dec 13 01:07:56.877784 kernel: ... fixed-purpose events:   0
Dec 13 01:07:56.877791 kernel: ... event mask:             000000000000003f
Dec 13 01:07:56.877799 kernel: signal: max sigframe size: 1776
Dec 13 01:07:56.877806 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:07:56.877813 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:07:56.877821 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:07:56.877830 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:07:56.877838 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 01:07:56.877845 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 01:07:56.877852 kernel: smpboot: Max logical packages: 1
Dec 13 01:07:56.877860 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 01:07:56.877867 kernel: devtmpfs: initialized
Dec 13 01:07:56.877874 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:07:56.877882 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:07:56.877889 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 01:07:56.877899 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:07:56.877906 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:07:56.877913 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:07:56.877921 kernel: audit: type=2000 audit(1734052077.025:1): state=initialized audit_enabled=0 res=1
Dec 13 01:07:56.877928 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:07:56.877935 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:07:56.877943 kernel: cpuidle: using governor menu
Dec 13 01:07:56.877950 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:07:56.877957 kernel: dca service started, version 1.12.1
Dec 13 01:07:56.877967 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 01:07:56.877975 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 01:07:56.877982 kernel: PCI: Using configuration type 1 for base access
Dec 13 01:07:56.877989 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:07:56.877997 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:07:56.878004 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:07:56.878012 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:07:56.878019 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:07:56.878027 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:07:56.878036 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:07:56.878043 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:07:56.878051 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:07:56.878059 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:07:56.878066 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:07:56.878073 kernel: ACPI: Interpreter enabled
Dec 13 01:07:56.878091 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 01:07:56.878098 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:07:56.878115 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:07:56.878133 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 01:07:56.878141 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 01:07:56.878148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 01:07:56.878345 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 01:07:56.878475 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 01:07:56.878594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 01:07:56.878604 kernel: PCI host bridge to bus 0000:00
Dec 13 01:07:56.878731 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 01:07:56.878852 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 01:07:56.878962 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 01:07:56.879071 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Dec 13 01:07:56.879198 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 01:07:56.879322 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Dec 13 01:07:56.879434 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 01:07:56.879575 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 01:07:56.879705 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 01:07:56.879835 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Dec 13 01:07:56.879955 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Dec 13 01:07:56.880073 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Dec 13 01:07:56.880208 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 01:07:56.880344 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 01:07:56.880483 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 01:07:56.880603 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Dec 13 01:07:56.880721 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Dec 13 01:07:56.880867 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 01:07:56.880989 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 01:07:56.881127 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Dec 13 01:07:56.881267 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Dec 13 01:07:56.881403 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 01:07:56.881539 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Dec 13 01:07:56.881661 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Dec 13 01:07:56.881789 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Dec 13 01:07:56.881909 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Dec 13 01:07:56.882035 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 01:07:56.882177 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 01:07:56.882331 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 01:07:56.882456 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Dec 13 01:07:56.882590 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Dec 13 01:07:56.882718 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 01:07:56.882847 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 01:07:56.882857 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 01:07:56.882869 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 01:07:56.882877 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 01:07:56.882884 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 01:07:56.882892 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 01:07:56.882899 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 01:07:56.882906 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 01:07:56.882914 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 01:07:56.882921 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 01:07:56.882929 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 01:07:56.882938 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 01:07:56.882946 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 01:07:56.882953 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 01:07:56.882961 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 01:07:56.882968 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 01:07:56.882976 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 01:07:56.882983 kernel: iommu: Default domain type: Translated
Dec 13 01:07:56.882991 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:07:56.882998 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:07:56.883008 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 01:07:56.883015 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 01:07:56.883023 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Dec 13 01:07:56.883168 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 01:07:56.883289 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 01:07:56.883408 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 01:07:56.883418 kernel: vgaarb: loaded
Dec 13 01:07:56.883425 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 01:07:56.883437 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 01:07:56.883444 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 01:07:56.883452 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:07:56.883459 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:07:56.883467 kernel: pnp: PnP ACPI init
Dec 13 01:07:56.883600 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 01:07:56.883615 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 01:07:56.883625 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:07:56.883639 kernel: NET: Registered PF_INET protocol family
Dec 13 01:07:56.883650 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:07:56.883660 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 01:07:56.883669 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:07:56.883676 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 01:07:56.883684 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 01:07:56.883691 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 01:07:56.883699 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:07:56.883706 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 01:07:56.883717 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:07:56.883724 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:07:56.883849 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 01:07:56.883960 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 01:07:56.884069 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 01:07:56.884219 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Dec 13 01:07:56.884327 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 01:07:56.884435 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Dec 13 01:07:56.884448 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:07:56.884456 kernel: Initialise system trusted keyrings
Dec 13 01:07:56.884464 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 01:07:56.884471 kernel: Key type asymmetric registered
Dec 13 01:07:56.884478 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:07:56.884486 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:07:56.884493 kernel: io scheduler mq-deadline registered
Dec 13 01:07:56.884501 kernel: io scheduler kyber registered
Dec 13 01:07:56.884508 kernel: io scheduler bfq registered
Dec 13 01:07:56.884518 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:07:56.884526 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 01:07:56.884534 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 01:07:56.884541 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 01:07:56.884549 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:07:56.884556 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:07:56.884564 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 01:07:56.884571 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 01:07:56.884579 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 01:07:56.884709 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 01:07:56.884721 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 01:07:56.884840 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 01:07:56.884979 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T01:07:56 UTC (1734052076)
Dec 13 01:07:56.885103 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Dec 13 01:07:56.885114 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 01:07:56.885122 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:07:56.885129 kernel: Segment Routing with IPv6
Dec 13 01:07:56.885140 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:07:56.885148 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:07:56.885155 kernel: Key type dns_resolver registered
Dec 13 01:07:56.885162 kernel: IPI shorthand broadcast: enabled
Dec 13 01:07:56.885170 kernel: sched_clock: Marking stable (554002210, 105179151)->(706423596, -47242235)
Dec 13 01:07:56.885177 kernel: registered taskstats version 1
Dec 13 01:07:56.885184 kernel: Loading compiled-in X.509 certificates
Dec 13 01:07:56.885192 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:07:56.885200 kernel: Key type .fscrypt registered
Dec 13 01:07:56.885209 kernel: Key type fscrypt-provisioning registered
Dec 13 01:07:56.885217 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:07:56.885224 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:07:56.885232 kernel: ima: No architecture policies found
Dec 13 01:07:56.885239 kernel: clk: Disabling unused clocks
Dec 13 01:07:56.885246 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:07:56.885254 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:07:56.885261 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:07:56.885268 kernel: Run /init as init process
Dec 13 01:07:56.885278 kernel:   with arguments:
Dec 13 01:07:56.885285 kernel:     /init
Dec 13 01:07:56.885293 kernel:   with environment:
Dec 13 01:07:56.885301 kernel:     HOME=/
Dec 13 01:07:56.885308 kernel:     TERM=linux
Dec 13 01:07:56.885315 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:07:56.885324 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:07:56.885334 systemd[1]: Detected virtualization kvm.
Dec 13 01:07:56.885345 systemd[1]: Detected architecture x86-64.
Dec 13 01:07:56.885353 systemd[1]: Running in initrd.
Dec 13 01:07:56.885360 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:07:56.885368 systemd[1]: Hostname set to .
Dec 13 01:07:56.885376 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 01:07:56.885384 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:07:56.885392 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:07:56.885400 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:07:56.885411 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:07:56.885431 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:07:56.885441 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:07:56.885450 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:07:56.885460 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:07:56.885470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:07:56.885478 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:07:56.885487 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:07:56.885495 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:07:56.885503 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:07:56.885511 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:07:56.885519 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:07:56.885527 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:07:56.885538 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:07:56.885546 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:07:56.885554 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:07:56.885562 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:07:56.885570 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:07:56.885578 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:07:56.885586 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:07:56.885595 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:07:56.885603 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:07:56.885613 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:07:56.885621 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:07:56.885629 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:07:56.885637 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:07:56.885648 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:07:56.885656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:07:56.885664 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:07:56.885672 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:07:56.885701 systemd-journald[192]: Collecting audit messages is disabled.
Dec 13 01:07:56.885721 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:07:56.885732 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:07:56.885741 systemd-journald[192]: Journal started
Dec 13 01:07:56.885761 systemd-journald[192]: Runtime Journal (/run/log/journal/992cf4c92993408485d9b41db6682fbc) is 6.0M, max 48.4M, 42.3M free.
Dec 13 01:07:56.869528 systemd-modules-load[194]: Inserted module 'overlay'
Dec 13 01:07:56.890511 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:07:56.896099 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:07:56.897119 kernel: Bridge firewalling registered
Dec 13 01:07:56.897133 systemd-modules-load[194]: Inserted module 'br_netfilter'
Dec 13 01:07:56.903215 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:07:56.934211 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:07:56.934937 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:07:56.937715 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:07:56.941301 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:07:56.942772 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:07:56.953686 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:07:56.955244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:07:56.957255 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:07:56.976219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:07:56.977532 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:07:56.980485 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:07:56.996312 dracut-cmdline[232]: dracut-dracut-053
Dec 13 01:07:56.999433 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:07:57.010370 systemd-resolved[224]: Positive Trust Anchors:
Dec 13 01:07:57.010386 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:07:57.010417 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:07:57.012836 systemd-resolved[224]: Defaulting to hostname 'linux'.
Dec 13 01:07:57.013849 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:07:57.019435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:07:57.092282 kernel: SCSI subsystem initialized
Dec 13 01:07:57.101098 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:07:57.111101 kernel: iscsi: registered transport (tcp)
Dec 13 01:07:57.132100 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:07:57.132129 kernel: QLogic iSCSI HBA Driver
Dec 13 01:07:57.183388 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:07:57.195201 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:07:57.223174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:07:57.223274 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:07:57.223290 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:07:57.268123 kernel: raid6: avx2x4 gen() 29717 MB/s
Dec 13 01:07:57.285107 kernel: raid6: avx2x2 gen() 27745 MB/s
Dec 13 01:07:57.302211 kernel: raid6: avx2x1 gen() 22081 MB/s
Dec 13 01:07:57.302224 kernel: raid6: using algorithm avx2x4 gen() 29717 MB/s
Dec 13 01:07:57.320208 kernel: raid6: .... xor() 7166 MB/s, rmw enabled
Dec 13 01:07:57.320220 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 01:07:57.341104 kernel: xor: automatically using best checksumming function avx
Dec 13 01:07:57.495109 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:07:57.507792 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:07:57.517212 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:07:57.529965 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Dec 13 01:07:57.534850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:07:57.545245 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:07:57.558568 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation
Dec 13 01:07:57.590846 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:07:57.605210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:07:57.666519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:07:57.676268 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:07:57.687035 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:07:57.691030 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:07:57.695211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:07:57.696502 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:07:57.704095 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 01:07:57.727873 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:07:57.727888 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 01:07:57.730263 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:07:57.730299 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:07:57.730319 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 01:07:57.730337 kernel: GPT:9289727 != 19775487
Dec 13 01:07:57.730369 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 01:07:57.730391 kernel: GPT:9289727 != 19775487
Dec 13 01:07:57.730407 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 01:07:57.730430 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:07:57.705199 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:07:57.728220 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:07:57.737041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:07:57.738115 kernel: libata version 3.00 loaded.
Dec 13 01:07:57.738336 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:07:57.743276 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:07:57.747720 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:07:57.751797 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/vda3 scanned by (udev-worker) (472)
Dec 13 01:07:57.751813 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 01:07:57.770821 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 01:07:57.770843 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 01:07:57.770990 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 01:07:57.771144 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (469)
Dec 13 01:07:57.771156 kernel: scsi host0: ahci
Dec 13 01:07:57.771305 kernel: scsi host1: ahci
Dec 13 01:07:57.771455 kernel: scsi host2: ahci
Dec 13 01:07:57.771599 kernel: scsi host3: ahci
Dec 13 01:07:57.771737 kernel: scsi host4: ahci
Dec 13 01:07:57.771889 kernel: scsi host5: ahci
Dec 13 01:07:57.772031 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Dec 13 01:07:57.772042 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Dec 13 01:07:57.772052 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Dec 13 01:07:57.772062 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Dec 13 01:07:57.772072 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Dec 13 01:07:57.772130 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Dec 13 01:07:57.749730 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:07:57.753294 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:07:57.763316 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:07:57.782801 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 01:07:57.814621 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:07:57.823820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 01:07:57.828025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 01:07:57.828474 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 01:07:57.833243 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:07:57.845291 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:07:57.846324 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:07:57.859358 disk-uuid[555]: Primary Header is updated.
Dec 13 01:07:57.859358 disk-uuid[555]: Secondary Entries is updated.
Dec 13 01:07:57.859358 disk-uuid[555]: Secondary Header is updated.
Dec 13 01:07:57.864098 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:07:57.869110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:07:57.870388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:07:58.078161 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 01:07:58.078235 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 01:07:58.078246 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 01:07:58.079122 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 01:07:58.080113 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 01:07:58.081109 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 01:07:58.082110 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 01:07:58.082140 kernel: ata3.00: applying bridge limits
Dec 13 01:07:58.083102 kernel: ata3.00: configured for UDMA/100
Dec 13 01:07:58.085107 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 01:07:58.130713 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 01:07:58.142867 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:07:58.142920 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 01:07:58.869835 disk-uuid[558]: The operation has completed successfully.
Dec 13 01:07:58.871523 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 01:07:58.898455 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:07:58.898596 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:07:58.922360 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:07:58.927437 sh[591]: Success
Dec 13 01:07:58.940135 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 01:07:58.969513 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:07:58.983641 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:07:58.986309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:07:59.000591 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:07:59.000623 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:07:59.000634 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:07:59.001606 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:07:59.002342 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:07:59.006294 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:07:59.007474 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:07:59.018211 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:07:59.019143 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:07:59.028496 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:07:59.028523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:07:59.028534 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:07:59.031106 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:07:59.040978 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:07:59.042620 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:07:59.050545 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:07:59.059258 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:07:59.110298 ignition[686]: Ignition 2.19.0
Dec 13 01:07:59.110313 ignition[686]: Stage: fetch-offline
Dec 13 01:07:59.110374 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:07:59.110390 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:07:59.110513 ignition[686]: parsed url from cmdline: ""
Dec 13 01:07:59.110518 ignition[686]: no config URL provided
Dec 13 01:07:59.110523 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:07:59.110532 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:07:59.110562 ignition[686]: op(1): [started] loading QEMU firmware config module
Dec 13 01:07:59.110567 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 01:07:59.119137 ignition[686]: op(1): [finished] loading QEMU firmware config module
Dec 13 01:07:59.133246 ignition[686]: parsing config with SHA512: 05276ad41cc94917b8c9b1b618438428ffa5e6b5431fb3e85740fc6bdc6167ca07ed7441a125a0a0e11debe5a1a2cf03d6f66b44a46b3bd928a390b3f8f696c3
Dec 13 01:07:59.135407 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:07:59.136975 unknown[686]: fetched base config from "system"
Dec 13 01:07:59.137519 ignition[686]: fetch-offline: fetch-offline passed
Dec 13 01:07:59.136983 unknown[686]: fetched user config from "qemu"
Dec 13 01:07:59.137582 ignition[686]: Ignition finished successfully
Dec 13 01:07:59.145269 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:07:59.147523 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:07:59.172804 systemd-networkd[779]: lo: Link UP
Dec 13 01:07:59.172815 systemd-networkd[779]: lo: Gained carrier
Dec 13 01:07:59.174864 systemd-networkd[779]: Enumeration completed
Dec 13 01:07:59.175360 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:07:59.175365 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:07:59.175713 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:07:59.176246 systemd-networkd[779]: eth0: Link UP
Dec 13 01:07:59.176251 systemd-networkd[779]: eth0: Gained carrier
Dec 13 01:07:59.176259 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:07:59.184588 systemd[1]: Reached target network.target - Network.
Dec 13 01:07:59.186320 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 01:07:59.207219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:07:59.211151 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:07:59.222047 ignition[782]: Ignition 2.19.0
Dec 13 01:07:59.222057 ignition[782]: Stage: kargs
Dec 13 01:07:59.222247 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:07:59.222259 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:07:59.226055 ignition[782]: kargs: kargs passed
Dec 13 01:07:59.226112 ignition[782]: Ignition finished successfully
Dec 13 01:07:59.230238 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:07:59.251418 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:07:59.265362 ignition[791]: Ignition 2.19.0
Dec 13 01:07:59.265374 ignition[791]: Stage: disks
Dec 13 01:07:59.265569 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:07:59.265586 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:07:59.266761 ignition[791]: disks: disks passed
Dec 13 01:07:59.266824 ignition[791]: Ignition finished successfully
Dec 13 01:07:59.271913 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:07:59.274155 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:07:59.274610 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:07:59.276759 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:07:59.279101 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:07:59.280971 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:07:59.294233 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:07:59.304788 systemd-resolved[224]: Detected conflict on linux IN A 10.0.0.54
Dec 13 01:07:59.304804 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Dec 13 01:07:59.307510 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 01:07:59.313919 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:07:59.332211 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:07:59.415108 kernel: EXT4-fs (vda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:07:59.415364 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:07:59.416566 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:07:59.430164 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:07:59.431506 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:07:59.433377 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 01:07:59.442626 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (810)
Dec 13 01:07:59.442652 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:07:59.442668 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:07:59.442688 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:07:59.433427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:07:59.446814 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:07:59.433454 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:07:59.439881 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:07:59.443440 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:07:59.448271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:07:59.478514 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:07:59.483144 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:07:59.486970 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:07:59.490675 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:07:59.570805 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:07:59.579160 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:07:59.580693 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:07:59.588121 kernel: BTRFS info (device vda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:07:59.603210 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:07:59.607535 ignition[924]: INFO : Ignition 2.19.0
Dec 13 01:07:59.607535 ignition[924]: INFO : Stage: mount
Dec 13 01:07:59.609210 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:07:59.609210 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:07:59.609210 ignition[924]: INFO : mount: mount passed
Dec 13 01:07:59.609210 ignition[924]: INFO : Ignition finished successfully
Dec 13 01:07:59.615010 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:07:59.626177 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:07:59.959209 systemd-resolved[224]: Detected conflict on linux2 IN A 10.0.0.54
Dec 13 01:07:59.959225 systemd-resolved[224]: Hostname conflict, changing published hostname from 'linux2' to 'linux7'.
Dec 13 01:07:59.999667 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:08:00.009347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:08:00.016613 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (936)
Dec 13 01:08:00.016640 kernel: BTRFS info (device vda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:08:00.016651 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:08:00.018123 kernel: BTRFS info (device vda6): using free space tree
Dec 13 01:08:00.021105 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 01:08:00.022021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:08:00.044092 ignition[953]: INFO : Ignition 2.19.0
Dec 13 01:08:00.044092 ignition[953]: INFO : Stage: files
Dec 13 01:08:00.045719 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:08:00.045719 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 01:08:00.048492 ignition[953]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:08:00.049720 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:08:00.049720 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:08:00.053163 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:08:00.054647 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:08:00.054647 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:08:00.053928 unknown[953]: wrote ssh authorized keys file for user: core
Dec 13 01:08:00.058757 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:08:00.058757 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:08:00.099066 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Dec 13 01:08:00.202510 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:08:00.204692 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:08:00.567322 systemd-networkd[779]: eth0: Gained IPv6LL
Dec 13 01:08:00.585547 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Dec 13 01:08:00.869485 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:08:00.869485 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Dec 13 01:08:00.873184 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:08:00.875307 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:08:00.875307 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Dec 13 01:08:00.875307 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Dec 13 01:08:00.879486 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:08:00.881361 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 01:08:00.881361 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Dec 13 01:08:00.881361 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:08:00.903441 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:08:00.909297 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 01:08:00.911026 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 01:08:00.911026 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:08:00.914175 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:08:00.915853 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:08:00.917655 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:08:00.919486 ignition[953]: INFO : files: files passed
Dec 13 01:08:00.920350 ignition[953]: INFO : Ignition finished successfully
Dec 13 01:08:00.924354 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:08:00.934235 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:08:00.935258 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:08:00.944399 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:08:00.944557 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:08:00.950033 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 01:08:00.954757 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:08:00.954757 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:08:00.959124 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:08:00.961158 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:08:00.962809 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:08:00.975256 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:08:01.001412 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:08:01.001534 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:08:01.003919 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:08:01.005882 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:08:01.007771 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:08:01.008463 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:08:01.026313 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:08:01.027809 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:08:01.041038 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:08:01.041448 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:08:01.043947 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:08:01.046614 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:08:01.046748 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:08:01.050175 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:08:01.050756 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:08:01.051101 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:08:01.054958 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:08:01.057167 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Dec 13 01:08:01.059499 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:08:01.061371 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:08:01.063429 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:08:01.065400 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:08:01.067463 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:08:01.069466 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:08:01.069599 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:08:01.072369 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:08:01.072875 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:08:01.075598 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:08:01.078325 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:08:01.080858 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:08:01.080983 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:08:01.083808 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:08:01.083918 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:08:01.086004 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:08:01.087801 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:08:01.092134 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:08:01.092456 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:08:01.095498 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:08:01.096861 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 01:08:01.096960 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:08:01.098745 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:08:01.098837 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:08:01.100307 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:08:01.100420 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:08:01.102487 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:08:01.102587 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:08:01.116289 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:08:01.116575 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:08:01.116693 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:08:01.119360 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:08:01.120599 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:08:01.120710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:08:01.121013 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:08:01.121124 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:08:01.127419 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:08:01.127528 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Dec 13 01:08:01.138573 ignition[1008]: INFO : Ignition 2.19.0 Dec 13 01:08:01.138573 ignition[1008]: INFO : Stage: umount Dec 13 01:08:01.141696 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:08:01.141696 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 01:08:01.141696 ignition[1008]: INFO : umount: umount passed Dec 13 01:08:01.141696 ignition[1008]: INFO : Ignition finished successfully Dec 13 01:08:01.142588 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:08:01.142715 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:08:01.145152 systemd[1]: Stopped target network.target - Network. Dec 13 01:08:01.146568 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:08:01.146623 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:08:01.148448 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:08:01.148495 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:08:01.150552 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:08:01.150599 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:08:01.152575 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:08:01.152636 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:08:01.154736 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:08:01.156632 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:08:01.159829 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:08:01.166138 systemd-networkd[779]: eth0: DHCPv6 lease lost Dec 13 01:08:01.168394 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:08:01.168528 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Dec 13 01:08:01.170611 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:08:01.170759 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:08:01.173604 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:08:01.173669 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:08:01.184839 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:08:01.185769 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:08:01.185822 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:08:01.188118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:08:01.188165 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:08:01.190129 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:08:01.190175 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:08:01.192573 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:08:01.192620 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:08:01.194852 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:08:01.205351 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:08:01.205497 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:08:01.212764 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:08:01.212935 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:08:01.215189 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:08:01.215237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:08:01.217255 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Dec 13 01:08:01.217293 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:08:01.219225 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:08:01.219273 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:08:01.221633 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:08:01.221679 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:08:01.223617 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:08:01.223663 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:08:01.236322 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:08:01.238644 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:08:01.238743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:08:01.241034 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:08:01.241108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:08:01.243824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:08:01.243965 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:08:01.288496 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:08:01.288671 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:08:01.290668 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:08:01.292287 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:08:01.292347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:08:01.304229 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:08:01.315002 systemd[1]: Switching root. 
Dec 13 01:08:01.350046 systemd-journald[192]: Journal stopped Dec 13 01:08:02.384142 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Dec 13 01:08:02.384198 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:08:02.384215 kernel: SELinux: policy capability open_perms=1 Dec 13 01:08:02.384231 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:08:02.384242 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:08:02.384253 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:08:02.384264 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:08:02.384275 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:08:02.384290 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:08:02.384301 kernel: audit: type=1403 audit(1734052081.654:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:08:02.384318 systemd[1]: Successfully loaded SELinux policy in 40.394ms. Dec 13 01:08:02.384346 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.380ms. Dec 13 01:08:02.384361 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:08:02.384377 systemd[1]: Detected virtualization kvm. Dec 13 01:08:02.384389 systemd[1]: Detected architecture x86-64. Dec 13 01:08:02.384400 systemd[1]: Detected first boot. Dec 13 01:08:02.384412 systemd[1]: Initializing machine ID from VM UUID. Dec 13 01:08:02.384426 zram_generator::config[1053]: No configuration found. Dec 13 01:08:02.384439 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:08:02.384451 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Dec 13 01:08:02.384463 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:08:02.384475 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Dec 13 01:08:02.384487 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:08:02.384499 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:08:02.384511 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:08:02.384526 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:08:02.384538 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:08:02.384550 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:08:02.384562 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:08:02.384574 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:08:02.384586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:08:02.384603 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:08:02.384615 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:08:02.384628 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:08:02.384643 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:08:02.384657 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:08:02.384669 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Dec 13 01:08:02.384681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Dec 13 01:08:02.384693 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:08:02.384704 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:08:02.384729 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:08:02.384744 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:08:02.384755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:08:02.384767 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:08:02.384779 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:08:02.384791 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:08:02.384803 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:08:02.384815 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:08:02.384827 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:08:02.384839 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:08:02.384851 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:08:02.384865 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:08:02.384876 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:08:02.384888 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:08:02.384901 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:08:02.384913 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:08:02.384925 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Dec 13 01:08:02.384936 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Dec 13 01:08:02.384948 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:08:02.384963 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:08:02.384974 systemd[1]: Reached target machines.target - Containers. Dec 13 01:08:02.384986 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:08:02.384998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:08:02.385010 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:08:02.385021 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:08:02.385033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:08:02.385046 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:08:02.385058 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:08:02.385071 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:08:02.385095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:08:02.385107 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:08:02.385118 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:08:02.385130 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:08:02.385142 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:08:02.385154 systemd[1]: Stopped systemd-fsck-usr.service. 
Dec 13 01:08:02.385167 kernel: fuse: init (API version 7.39) Dec 13 01:08:02.385181 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:08:02.385193 kernel: loop: module loaded Dec 13 01:08:02.385204 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:08:02.385216 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:08:02.385227 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:08:02.385239 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:08:02.385251 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:08:02.385263 systemd[1]: Stopped verity-setup.service. Dec 13 01:08:02.385275 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:08:02.385306 systemd-journald[1127]: Collecting audit messages is disabled. Dec 13 01:08:02.385327 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:08:02.385340 systemd-journald[1127]: Journal started Dec 13 01:08:02.385361 systemd-journald[1127]: Runtime Journal (/run/log/journal/992cf4c92993408485d9b41db6682fbc) is 6.0M, max 48.4M, 42.3M free. Dec 13 01:08:02.167401 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:08:02.188700 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 01:08:02.189154 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:08:02.387568 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:08:02.388513 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:08:02.390256 kernel: ACPI: bus type drm_connector registered Dec 13 01:08:02.390686 systemd[1]: Mounted media.mount - External Media Directory. 
Dec 13 01:08:02.391866 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:08:02.393136 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:08:02.394398 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:08:02.395637 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:08:02.397125 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:08:02.398752 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:08:02.398926 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:08:02.400428 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:08:02.400592 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:08:02.402219 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:08:02.402393 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:08:02.403801 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:08:02.403970 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:08:02.405499 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:08:02.405664 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:08:02.407221 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:08:02.407392 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:08:02.408774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:08:02.410368 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:08:02.411904 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Dec 13 01:08:02.427413 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:08:02.438156 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:08:02.440517 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:08:02.441760 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:08:02.441844 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:08:02.443894 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:08:02.446172 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:08:02.449680 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:08:02.450983 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:08:02.454315 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 01:08:02.456994 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:08:02.458236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:08:02.464626 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:08:02.465816 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:08:02.469975 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:08:02.474219 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Dec 13 01:08:02.476400 systemd-journald[1127]: Time spent on flushing to /var/log/journal/992cf4c92993408485d9b41db6682fbc is 31.584ms for 953 entries. Dec 13 01:08:02.476400 systemd-journald[1127]: System Journal (/var/log/journal/992cf4c92993408485d9b41db6682fbc) is 8.0M, max 195.6M, 187.6M free. Dec 13 01:08:02.520939 systemd-journald[1127]: Received client request to flush runtime journal. Dec 13 01:08:02.520981 kernel: loop0: detected capacity change from 0 to 142488 Dec 13 01:08:02.480944 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:08:02.484148 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:08:02.485954 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:08:02.487448 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:08:02.489116 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:08:02.490866 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:08:02.499682 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:08:02.508296 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:08:02.515589 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:08:02.518761 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:08:02.524860 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:08:02.531173 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 01:08:02.539711 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Dec 13 01:08:02.540623 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:08:02.546101 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:08:02.547991 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:08:02.556364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:08:02.568147 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 01:08:02.575981 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Dec 13 01:08:02.576305 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Dec 13 01:08:02.582397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:08:02.595810 kernel: loop2: detected capacity change from 0 to 140768 Dec 13 01:08:02.628116 kernel: loop3: detected capacity change from 0 to 142488 Dec 13 01:08:02.639137 kernel: loop4: detected capacity change from 0 to 211296 Dec 13 01:08:02.648102 kernel: loop5: detected capacity change from 0 to 140768 Dec 13 01:08:02.659006 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Dec 13 01:08:02.659807 (sd-merge)[1192]: Merged extensions into '/usr'. Dec 13 01:08:02.663744 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:08:02.663760 systemd[1]: Reloading... Dec 13 01:08:02.722115 zram_generator::config[1218]: No configuration found. Dec 13 01:08:02.785635 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:08:02.843522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:08:02.892015 systemd[1]: Reloading finished in 227 ms. 
Dec 13 01:08:02.928298 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:08:02.929940 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:08:02.942223 systemd[1]: Starting ensure-sysext.service... Dec 13 01:08:02.944097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:08:02.951765 systemd[1]: Reloading requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:08:02.951780 systemd[1]: Reloading... Dec 13 01:08:02.966932 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:08:02.967314 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:08:02.968308 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:08:02.968605 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Dec 13 01:08:02.968685 systemd-tmpfiles[1256]: ACLs are not supported, ignoring. Dec 13 01:08:02.972170 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:08:02.972181 systemd-tmpfiles[1256]: Skipping /boot Dec 13 01:08:02.986767 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:08:02.986782 systemd-tmpfiles[1256]: Skipping /boot Dec 13 01:08:03.015525 zram_generator::config[1286]: No configuration found. Dec 13 01:08:03.115997 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:08:03.164489 systemd[1]: Reloading finished in 212 ms. Dec 13 01:08:03.181344 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Dec 13 01:08:03.195493 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:08:03.204175 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:08:03.206821 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:08:03.209119 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:08:03.214220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:08:03.217913 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:08:03.220480 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:08:03.224509 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:08:03.224684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:08:03.226354 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:08:03.230155 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:08:03.232528 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:08:03.233942 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:08:03.236984 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:08:03.238170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 01:08:03.239119 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:08:03.239283 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 01:08:03.240986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:08:03.241394 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:08:03.243331 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:08:03.243543 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:08:03.252385 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:08:03.254926 systemd-udevd[1328]: Using default interface naming scheme 'v255'.
Dec 13 01:08:03.260019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:08:03.260483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:08:03.269332 augenrules[1352]: No rules
Dec 13 01:08:03.270419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:08:03.273354 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:08:03.276338 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:08:03.277652 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:08:03.280337 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:08:03.281963 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:08:03.282850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:08:03.285540 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:08:03.287921 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:08:03.289834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:08:03.290033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:08:03.292417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:08:03.292592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:08:03.295339 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:08:03.295523 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:08:03.297490 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:08:03.308646 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:08:03.323120 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:08:03.324952 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:08:03.330502 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:08:03.330651 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:08:03.335832 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:08:03.341111 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1382)
Dec 13 01:08:03.342721 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:08:03.343121 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1382)
Dec 13 01:08:03.348326 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:08:03.356322 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:08:03.357506 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:08:03.359520 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:08:03.366215 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 01:08:03.367356 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:08:03.367382 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:08:03.367965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:08:03.368155 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:08:03.371553 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:08:03.371742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:08:03.373217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:08:03.373408 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:08:03.375019 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:08:03.375217 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:08:03.376668 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 01:08:03.389372 systemd-resolved[1326]: Positive Trust Anchors:
Dec 13 01:08:03.389669 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:08:03.389754 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:08:03.390101 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1379)
Dec 13 01:08:03.391455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:08:03.391533 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:08:03.394118 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Dec 13 01:08:03.395873 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:08:03.397257 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:08:03.426064 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 01:08:03.429286 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 01:08:03.430100 kernel: ACPI: button: Power Button [PWRF]
Dec 13 01:08:03.439263 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:08:03.455181 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 01:08:03.456916 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:08:03.458104 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 01:08:03.462447 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 01:08:03.462631 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 01:08:03.465157 systemd-networkd[1398]: lo: Link UP
Dec 13 01:08:03.465166 systemd-networkd[1398]: lo: Gained carrier
Dec 13 01:08:03.467048 systemd-networkd[1398]: Enumeration completed
Dec 13 01:08:03.467459 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:08:03.467463 systemd-networkd[1398]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:08:03.468439 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:08:03.468899 systemd-networkd[1398]: eth0: Link UP
Dec 13 01:08:03.468951 systemd-networkd[1398]: eth0: Gained carrier
Dec 13 01:08:03.469010 systemd-networkd[1398]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:08:03.469801 systemd[1]: Reached target network.target - Network.
Dec 13 01:08:03.479358 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:08:03.482211 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:08:03.482290 systemd-networkd[1398]: eth0: DHCPv4 address 10.0.0.54/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 01:08:03.493951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:08:03.497628 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 01:08:04.613145 systemd-resolved[1326]: Clock change detected. Flushing caches.
Dec 13 01:08:04.613196 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 01:08:04.613235 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2024-12-13 01:08:04.613090 UTC.
Dec 13 01:08:04.613498 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:08:04.704298 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:08:04.706017 kernel: kvm_amd: TSC scaling supported
Dec 13 01:08:04.706074 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 01:08:04.706087 kernel: kvm_amd: Nested Paging enabled
Dec 13 01:08:04.706645 kernel: kvm_amd: LBR virtualization supported
Dec 13 01:08:04.708061 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 01:08:04.708084 kernel: kvm_amd: Virtual GIF supported
Dec 13 01:08:04.725359 kernel: EDAC MC: Ver: 3.0.0
Dec 13 01:08:04.770786 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:08:04.781630 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:08:04.791197 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:08:04.823760 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:08:04.825401 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:08:04.826536 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:08:04.827690 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:08:04.828943 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:08:04.830400 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:08:04.831560 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:08:04.832795 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:08:04.834022 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:08:04.834048 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:08:04.834933 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:08:04.836794 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:08:04.839572 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:08:04.845921 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:08:04.848404 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:08:04.849981 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:08:04.851147 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:08:04.852119 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:08:04.853084 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:08:04.853111 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:08:04.854129 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:08:04.856155 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:08:04.859347 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:08:04.859706 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:08:04.863674 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:08:04.865051 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:08:04.867777 jq[1434]: false
Dec 13 01:08:04.868129 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:08:04.873775 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:08:04.880502 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found loop3
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found loop4
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found loop5
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found sr0
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda1
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda2
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda3
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found usr
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda4
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda6
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda7
Dec 13 01:08:04.881725 extend-filesystems[1435]: Found vda9
Dec 13 01:08:04.881725 extend-filesystems[1435]: Checking size of /dev/vda9
Dec 13 01:08:04.889530 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:08:04.896057 dbus-daemon[1433]: [system] SELinux support is enabled
Dec 13 01:08:04.896628 extend-filesystems[1435]: Resized partition /dev/vda9
Dec 13 01:08:04.899236 extend-filesystems[1450]: resize2fs 1.47.1 (20-May-2024)
Dec 13 01:08:04.905489 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (1374)
Dec 13 01:08:04.905595 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 01:08:04.913411 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:08:04.914868 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:08:04.915404 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:08:04.920839 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:08:04.922886 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:08:04.924800 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:08:04.927759 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:08:04.931155 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:08:04.931425 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:08:04.931916 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:08:04.936052 jq[1457]: true
Dec 13 01:08:04.932191 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:08:04.937827 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:08:04.938063 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:08:04.952958 jq[1460]: true
Dec 13 01:08:04.957739 update_engine[1456]: I20241213 01:08:04.957681 1456 main.cc:92] Flatcar Update Engine starting
Dec 13 01:08:04.960515 update_engine[1456]: I20241213 01:08:04.959614 1456 update_check_scheduler.cc:74] Next update check in 6m23s
Dec 13 01:08:04.964642 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:08:04.969216 tar[1459]: linux-amd64/helm
Dec 13 01:08:04.972372 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 01:08:04.974550 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:08:04.980527 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:08:04.980549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:08:04.981945 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:08:04.981969 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:08:04.989484 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:08:05.007108 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 01:08:05.007135 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:08:05.011546 extend-filesystems[1450]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 01:08:05.011546 extend-filesystems[1450]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 01:08:05.011546 extend-filesystems[1450]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 01:08:05.025063 extend-filesystems[1435]: Resized filesystem in /dev/vda9 Dec 13 01:08:05.012160 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 01:08:05.012477 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:08:05.012933 systemd-logind[1452]: New seat seat0. Dec 13 01:08:05.027483 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:08:05.034298 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:08:05.059150 bash[1486]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:08:05.061261 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:08:05.063616 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:08:05.196750 containerd[1461]: time="2024-12-13T01:08:05.196635733Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:08:05.224364 containerd[1461]: time="2024-12-13T01:08:05.224125506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226345960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226392638Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226410271Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226594166Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226612119Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226676951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226687781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226891062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226905369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226918083Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:08:05.227702 containerd[1461]: time="2024-12-13T01:08:05.226927751Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227040653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227275093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227404115Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227417800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227510705Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:08:05.228035 containerd[1461]: time="2024-12-13T01:08:05.227561870Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:08:05.233367 containerd[1461]: time="2024-12-13T01:08:05.233343974Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:08:05.233454 containerd[1461]: time="2024-12-13T01:08:05.233440806Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:08:05.233523 containerd[1461]: time="2024-12-13T01:08:05.233509855Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:08:05.233591 containerd[1461]: time="2024-12-13T01:08:05.233578814Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:08:05.233650 containerd[1461]: time="2024-12-13T01:08:05.233629520Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Dec 13 01:08:05.233846 containerd[1461]: time="2024-12-13T01:08:05.233830516Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:08:05.234161 containerd[1461]: time="2024-12-13T01:08:05.234145036Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:08:05.234364 containerd[1461]: time="2024-12-13T01:08:05.234325074Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:08:05.234428 containerd[1461]: time="2024-12-13T01:08:05.234407759Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:08:05.234482 containerd[1461]: time="2024-12-13T01:08:05.234469936Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:08:05.234538 containerd[1461]: time="2024-12-13T01:08:05.234518437Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.234605 containerd[1461]: time="2024-12-13T01:08:05.234592355Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.234661 containerd[1461]: time="2024-12-13T01:08:05.234641187Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.234714 containerd[1461]: time="2024-12-13T01:08:05.234703343Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.234907 containerd[1461]: time="2024-12-13T01:08:05.234750372Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Dec 13 01:08:05.235019 containerd[1461]: time="2024-12-13T01:08:05.234995371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.235090 containerd[1461]: time="2024-12-13T01:08:05.235076423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.235171 containerd[1461]: time="2024-12-13T01:08:05.235156974Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:08:05.235269 containerd[1461]: time="2024-12-13T01:08:05.235254517Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235362 containerd[1461]: time="2024-12-13T01:08:05.235327734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235433 containerd[1461]: time="2024-12-13T01:08:05.235406983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235516 containerd[1461]: time="2024-12-13T01:08:05.235484158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235592 containerd[1461]: time="2024-12-13T01:08:05.235560551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235645 containerd[1461]: time="2024-12-13T01:08:05.235632947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235712 containerd[1461]: time="2024-12-13T01:08:05.235700383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235784 containerd[1461]: time="2024-12-13T01:08:05.235771206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Dec 13 01:08:05.235883 containerd[1461]: time="2024-12-13T01:08:05.235844383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.235964 containerd[1461]: time="2024-12-13T01:08:05.235945353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236046 containerd[1461]: time="2024-12-13T01:08:05.236028288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236116 containerd[1461]: time="2024-12-13T01:08:05.236099822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236195 containerd[1461]: time="2024-12-13T01:08:05.236181786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236275 containerd[1461]: time="2024-12-13T01:08:05.236262467Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:08:05.236393 containerd[1461]: time="2024-12-13T01:08:05.236376521Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236471 containerd[1461]: time="2024-12-13T01:08:05.236457743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.236555 containerd[1461]: time="2024-12-13T01:08:05.236540959Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:08:05.236862 containerd[1461]: time="2024-12-13T01:08:05.236845611Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:08:05.236937 containerd[1461]: time="2024-12-13T01:08:05.236921663Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:08:05.236998 containerd[1461]: time="2024-12-13T01:08:05.236985964Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:08:05.237049 containerd[1461]: time="2024-12-13T01:08:05.237036749Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:08:05.237093 containerd[1461]: time="2024-12-13T01:08:05.237081253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:08:05.237140 containerd[1461]: time="2024-12-13T01:08:05.237129984Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:08:05.237217 containerd[1461]: time="2024-12-13T01:08:05.237204173Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:08:05.237264 containerd[1461]: time="2024-12-13T01:08:05.237253315Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:08:05.237687 containerd[1461]: time="2024-12-13T01:08:05.237635151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:08:05.237853 containerd[1461]: time="2024-12-13T01:08:05.237841298Z" level=info msg="Connect containerd service" Dec 13 01:08:05.237926 containerd[1461]: time="2024-12-13T01:08:05.237914145Z" level=info msg="using legacy CRI server" Dec 13 01:08:05.237989 containerd[1461]: time="2024-12-13T01:08:05.237976692Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:08:05.238102 containerd[1461]: time="2024-12-13T01:08:05.238090105Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:08:05.238731 containerd[1461]: time="2024-12-13T01:08:05.238707082Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:08:05.239079 containerd[1461]: time="2024-12-13T01:08:05.238999630Z" level=info msg="Start subscribing containerd event" Dec 13 01:08:05.239146 containerd[1461]: time="2024-12-13T01:08:05.239122290Z" level=info msg="Start recovering state" Dec 13 01:08:05.239291 containerd[1461]: time="2024-12-13T01:08:05.239274015Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 01:08:05.239419 containerd[1461]: time="2024-12-13T01:08:05.239403057Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:08:05.239477 containerd[1461]: time="2024-12-13T01:08:05.239285927Z" level=info msg="Start event monitor" Dec 13 01:08:05.239534 containerd[1461]: time="2024-12-13T01:08:05.239523944Z" level=info msg="Start snapshots syncer" Dec 13 01:08:05.239585 containerd[1461]: time="2024-12-13T01:08:05.239574949Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:08:05.239648 containerd[1461]: time="2024-12-13T01:08:05.239636855Z" level=info msg="Start streaming server" Dec 13 01:08:05.239746 containerd[1461]: time="2024-12-13T01:08:05.239733887Z" level=info msg="containerd successfully booted in 0.045443s" Dec 13 01:08:05.239883 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:08:05.243857 sshd_keygen[1455]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:08:05.269266 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:08:05.278791 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:08:05.286066 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:08:05.286365 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:08:05.291122 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:08:05.306220 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:08:05.313803 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:08:05.317026 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 01:08:05.318594 systemd[1]: Reached target getty.target - Login Prompts. 
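The CRI plugin configuration dumped at startup above (note `SystemdCgroup:true` under the `runc` runtime options, matching the systemd cgroup driver this image uses) normally originates from `/etc/containerd/config.toml`. A minimal sketch of the corresponding fragment, assuming the containerd 1.x `io.containerd.grpc.v1.cri` plugin section layout (the fragment is illustrative, not read from this system):

```shell
# Print the config.toml fragment that would produce the SystemdCgroup:true
# runc option seen in the CRI config dump above. Section names assume the
# containerd 1.x CRI plugin layout.
frag=$(cat <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
)
echo "$frag"
```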
Dec 13 01:08:05.361509 tar[1459]: linux-amd64/LICENSE Dec 13 01:08:05.361642 tar[1459]: linux-amd64/README.md Dec 13 01:08:05.375419 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Dec 13 01:08:05.904571 systemd-networkd[1398]: eth0: Gained IPv6LL Dec 13 01:08:05.908266 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:08:05.910425 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:08:05.924809 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 01:08:05.928211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:05.931169 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:08:05.949785 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 01:08:05.950081 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 01:08:05.952196 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:08:05.954685 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:08:06.562745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:06.564305 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:08:06.565485 systemd[1]: Startup finished in 684ms (kernel) + 4.964s (initrd) + 3.836s (userspace) = 9.485s. 
Dec 13 01:08:06.578062 (kubelet)[1546]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:07.052321 kubelet[1546]: E1213 01:08:07.052176 1546 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:07.056771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:07.056980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:08:07.057289 systemd[1]: kubelet.service: Consumed 1.008s CPU time. Dec 13 01:08:14.727535 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:08:14.728771 systemd[1]: Started sshd@0-10.0.0.54:22-10.0.0.1:42998.service - OpenSSH per-connection server daemon (10.0.0.1:42998). Dec 13 01:08:14.770980 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 42998 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:14.772915 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:14.781244 systemd-logind[1452]: New session 1 of user core. Dec 13 01:08:14.782535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:08:14.803534 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:08:14.815859 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:08:14.833595 systemd[1]: Starting user@500.service - User Manager for UID 500... 
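The kubelet failure above is the expected state of a node that has not yet joined a cluster: `/var/lib/kubelet/config.yaml` is only written by `kubeadm init` (control plane) or `kubeadm join` (worker), so the kubelet exits with status 1 until one of those runs. A minimal sketch of the check (the helper name is hypothetical; the path is the one from the error message):

```shell
# The kubelet exits with status 1 when its config file is absent, exactly as
# logged above; kubeadm writes <root>/config.yaml during init/join.
check_kubelet_config() {
  # $1: kubelet root directory (normally /var/lib/kubelet)
  if [ -f "$1/config.yaml" ]; then echo present; else echo missing; fi
}

# Demonstrate against an empty directory, mirroring this freshly booted node:
tmp=$(mktemp -d)
state=$(check_kubelet_config "$tmp")
echo "kubelet config: $state"
rmdir "$tmp"
```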
Dec 13 01:08:14.836347 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:08:14.932439 systemd[1565]: Queued start job for default target default.target. Dec 13 01:08:14.943541 systemd[1565]: Created slice app.slice - User Application Slice. Dec 13 01:08:14.943565 systemd[1565]: Reached target paths.target - Paths. Dec 13 01:08:14.943578 systemd[1565]: Reached target timers.target - Timers. Dec 13 01:08:14.945000 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:08:14.955766 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:08:14.955931 systemd[1565]: Reached target sockets.target - Sockets. Dec 13 01:08:14.955950 systemd[1565]: Reached target basic.target - Basic System. Dec 13 01:08:14.955993 systemd[1565]: Reached target default.target - Main User Target. Dec 13 01:08:14.956031 systemd[1565]: Startup finished in 113ms. Dec 13 01:08:14.956285 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:08:14.957893 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:08:15.022639 systemd[1]: Started sshd@1-10.0.0.54:22-10.0.0.1:43004.service - OpenSSH per-connection server daemon (10.0.0.1:43004). Dec 13 01:08:15.063588 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 43004 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:15.065232 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:15.069121 systemd-logind[1452]: New session 2 of user core. Dec 13 01:08:15.075452 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:08:15.130241 sshd[1576]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:15.141155 systemd[1]: sshd@1-10.0.0.54:22-10.0.0.1:43004.service: Deactivated successfully. Dec 13 01:08:15.142908 systemd[1]: session-2.scope: Deactivated successfully. 
Dec 13 01:08:15.144312 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Dec 13 01:08:15.145659 systemd[1]: Started sshd@2-10.0.0.54:22-10.0.0.1:43006.service - OpenSSH per-connection server daemon (10.0.0.1:43006). Dec 13 01:08:15.146627 systemd-logind[1452]: Removed session 2. Dec 13 01:08:15.184132 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 43006 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:15.185651 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:15.189617 systemd-logind[1452]: New session 3 of user core. Dec 13 01:08:15.205455 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:08:15.255283 sshd[1583]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:15.271398 systemd[1]: sshd@2-10.0.0.54:22-10.0.0.1:43006.service: Deactivated successfully. Dec 13 01:08:15.273238 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 01:08:15.274695 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Dec 13 01:08:15.288694 systemd[1]: Started sshd@3-10.0.0.54:22-10.0.0.1:43020.service - OpenSSH per-connection server daemon (10.0.0.1:43020). Dec 13 01:08:15.289839 systemd-logind[1452]: Removed session 3. Dec 13 01:08:15.324974 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 43020 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:15.326872 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:15.330976 systemd-logind[1452]: New session 4 of user core. Dec 13 01:08:15.340558 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:08:15.395734 sshd[1590]: pam_unix(sshd:session): session closed for user core Dec 13 01:08:15.408426 systemd[1]: sshd@3-10.0.0.54:22-10.0.0.1:43020.service: Deactivated successfully. Dec 13 01:08:15.410238 systemd[1]: session-4.scope: Deactivated successfully. 
Dec 13 01:08:15.412183 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:08:15.421715 systemd[1]: Started sshd@4-10.0.0.54:22-10.0.0.1:43022.service - OpenSSH per-connection server daemon (10.0.0.1:43022). Dec 13 01:08:15.422909 systemd-logind[1452]: Removed session 4. Dec 13 01:08:15.456288 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 43022 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY Dec 13 01:08:15.457908 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:08:15.461812 systemd-logind[1452]: New session 5 of user core. Dec 13 01:08:15.470462 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:08:16.130142 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:08:16.130515 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:08:16.442559 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:08:16.442805 (dockerd)[1618]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:08:16.724173 dockerd[1618]: time="2024-12-13T01:08:16.724010666Z" level=info msg="Starting up" Dec 13 01:08:16.845551 dockerd[1618]: time="2024-12-13T01:08:16.845488330Z" level=info msg="Loading containers: start." Dec 13 01:08:16.959356 kernel: Initializing XFRM netlink socket Dec 13 01:08:17.043278 systemd-networkd[1398]: docker0: Link UP Dec 13 01:08:17.162787 dockerd[1618]: time="2024-12-13T01:08:17.162739619Z" level=info msg="Loading containers: done." Dec 13 01:08:17.176687 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2468685739-merged.mount: Deactivated successfully. Dec 13 01:08:17.177691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
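The "Scheduled restart job, restart counter is at 1" line above (and the later restarts roughly ten seconds after each failure) means the kubelet unit asks systemd to retry on failure. On kubeadm-packaged systems this typically comes from a restart policy like the following fragment; the values shown are the common kubeadm defaults, assumed here rather than read from this image:

```shell
# Print a typical kubelet.service restart policy. Restart=always plus
# RestartSec=10 reproduces the ~10s "Scheduled restart job, restart counter
# is at N" loop seen in this log. Assumed defaults, not taken from the log.
unit=$(cat <<'EOF'
[Service]
Restart=always
RestartSec=10
EOF
)
echo "$unit"
```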
Dec 13 01:08:17.187498 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:17.206357 dockerd[1618]: time="2024-12-13T01:08:17.206298708Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:08:17.206454 dockerd[1618]: time="2024-12-13T01:08:17.206430305Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:08:17.206582 dockerd[1618]: time="2024-12-13T01:08:17.206550390Z" level=info msg="Daemon has completed initialization" Dec 13 01:08:17.330362 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:17.334526 (kubelet)[1739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:17.532211 kubelet[1739]: E1213 01:08:17.532135 1739 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:17.539889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:17.540109 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:08:17.853492 dockerd[1618]: time="2024-12-13T01:08:17.853381259Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:08:17.853605 systemd[1]: Started docker.service - Docker Application Container Engine. 
Dec 13 01:08:18.999057 containerd[1461]: time="2024-12-13T01:08:18.999004140Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:08:20.082581 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2129557355.mount: Deactivated successfully. Dec 13 01:08:21.655010 containerd[1461]: time="2024-12-13T01:08:21.654951600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:21.655868 containerd[1461]: time="2024-12-13T01:08:21.655829286Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254" Dec 13 01:08:21.672053 containerd[1461]: time="2024-12-13T01:08:21.671973162Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:21.700146 containerd[1461]: time="2024-12-13T01:08:21.700061557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:21.701454 containerd[1461]: time="2024-12-13T01:08:21.701402672Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.702351784s" Dec 13 01:08:21.701454 containerd[1461]: time="2024-12-13T01:08:21.701451945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:08:21.728847 containerd[1461]: 
time="2024-12-13T01:08:21.728783571Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:08:24.529041 containerd[1461]: time="2024-12-13T01:08:24.528981833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.557061 containerd[1461]: time="2024-12-13T01:08:24.556981582Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732" Dec 13 01:08:24.581901 containerd[1461]: time="2024-12-13T01:08:24.581865639Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.585046 containerd[1461]: time="2024-12-13T01:08:24.584996490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:24.586092 containerd[1461]: time="2024-12-13T01:08:24.586045337Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.857200721s" Dec 13 01:08:24.586092 containerd[1461]: time="2024-12-13T01:08:24.586081525Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:08:24.611003 containerd[1461]: time="2024-12-13T01:08:24.610940474Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 
01:08:26.116490 containerd[1461]: time="2024-12-13T01:08:26.116422440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:26.118130 containerd[1461]: time="2024-12-13T01:08:26.118091220Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822" Dec 13 01:08:26.119587 containerd[1461]: time="2024-12-13T01:08:26.119555847Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:26.122436 containerd[1461]: time="2024-12-13T01:08:26.122375965Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:26.123354 containerd[1461]: time="2024-12-13T01:08:26.123303465Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.512317545s" Dec 13 01:08:26.123399 containerd[1461]: time="2024-12-13T01:08:26.123361584Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:08:26.167167 containerd[1461]: time="2024-12-13T01:08:26.167103205Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:08:27.598769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798407358.mount: Deactivated successfully. 
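Each pull completion above reports its duration ("in 2.702351784s", "in 2.857200721s", "in 1.512317545s"). A small, hypothetical helper for summarizing such lines from a saved journal; the `sed` expression assumes the exact message shape logged above (`msg="Pulled image \"IMAGE\" ... in DURATION"`):

```shell
# Extract "image duration" pairs from containerd "Pulled image" log lines.
# The pattern matches the escaped quotes (\") inside the msg="..." field.
pull_times() {
  sed -n 's/.*Pulled image \\"\([^\\]*\)\\".* in \([0-9.]*s\).*/\1 \2/p'
}

# Demonstrate on a line abridged from the log above:
sample='msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fb\" in 2.702351784s"'
result=$(printf '%s\n' "$sample" | pull_times)
echo "$result"
```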
Dec 13 01:08:27.599734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:08:27.608475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:27.748843 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:27.753672 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:08:27.816663 kubelet[1884]: E1213 01:08:27.816529 1884 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:08:27.822073 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:08:27.822298 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:08:28.443195 containerd[1461]: time="2024-12-13T01:08:28.443117945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:28.444102 containerd[1461]: time="2024-12-13T01:08:28.444030687Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 01:08:28.445222 containerd[1461]: time="2024-12-13T01:08:28.445187947Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:28.447418 containerd[1461]: time="2024-12-13T01:08:28.447381651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:28.447992 containerd[1461]: time="2024-12-13T01:08:28.447947252Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.280803771s" Dec 13 01:08:28.448021 containerd[1461]: time="2024-12-13T01:08:28.447991515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:08:28.492883 containerd[1461]: time="2024-12-13T01:08:28.492831165Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:08:29.032170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount43210526.mount: Deactivated successfully. 
Dec 13 01:08:29.670034 containerd[1461]: time="2024-12-13T01:08:29.669988956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:29.670773 containerd[1461]: time="2024-12-13T01:08:29.670730857Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Dec 13 01:08:29.671984 containerd[1461]: time="2024-12-13T01:08:29.671961636Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:29.674602 containerd[1461]: time="2024-12-13T01:08:29.674567753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:29.675718 containerd[1461]: time="2024-12-13T01:08:29.675690769Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.182818266s" Dec 13 01:08:29.675775 containerd[1461]: time="2024-12-13T01:08:29.675720214Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:08:29.697204 containerd[1461]: time="2024-12-13T01:08:29.697150183Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:08:30.931650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount49517878.mount: Deactivated successfully. 
Dec 13 01:08:30.938565 containerd[1461]: time="2024-12-13T01:08:30.938487223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:30.939429 containerd[1461]: time="2024-12-13T01:08:30.939355481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Dec 13 01:08:30.940883 containerd[1461]: time="2024-12-13T01:08:30.940834345Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:30.946273 containerd[1461]: time="2024-12-13T01:08:30.946221167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:30.947253 containerd[1461]: time="2024-12-13T01:08:30.947204701Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.250015285s" Dec 13 01:08:30.947321 containerd[1461]: time="2024-12-13T01:08:30.947259344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:08:30.973635 containerd[1461]: time="2024-12-13T01:08:30.973587839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:08:31.676254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811523139.mount: Deactivated successfully. 
Dec 13 01:08:33.800881 containerd[1461]: time="2024-12-13T01:08:33.800815530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:33.802071 containerd[1461]: time="2024-12-13T01:08:33.802023937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Dec 13 01:08:33.803821 containerd[1461]: time="2024-12-13T01:08:33.803746076Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:33.806937 containerd[1461]: time="2024-12-13T01:08:33.806901744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:08:33.808215 containerd[1461]: time="2024-12-13T01:08:33.808182526Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.834548841s" Dec 13 01:08:33.808251 containerd[1461]: time="2024-12-13T01:08:33.808219836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:08:36.909267 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:36.925558 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:36.944860 systemd[1]: Reloading requested from client PID 2090 ('systemctl') (unit session-5.scope)... Dec 13 01:08:36.944878 systemd[1]: Reloading... 
Dec 13 01:08:37.032365 zram_generator::config[2129]: No configuration found. Dec 13 01:08:37.206859 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:08:37.283356 systemd[1]: Reloading finished in 338 ms. Dec 13 01:08:37.332606 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:08:37.332717 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:08:37.332993 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:37.335676 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:08:37.500942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:08:37.506471 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:08:37.660682 kubelet[2178]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:08:37.660682 kubelet[2178]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:08:37.660682 kubelet[2178]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 01:08:37.661132 kubelet[2178]: I1213 01:08:37.660695 2178 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:08:38.186242 kubelet[2178]: I1213 01:08:38.186196 2178 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:08:38.186242 kubelet[2178]: I1213 01:08:38.186228 2178 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:08:38.186566 kubelet[2178]: I1213 01:08:38.186543 2178 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:08:38.206196 kubelet[2178]: E1213 01:08:38.206140 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.206770 kubelet[2178]: I1213 01:08:38.206734 2178 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:08:38.219685 kubelet[2178]: I1213 01:08:38.219629 2178 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:08:38.220584 kubelet[2178]: I1213 01:08:38.220547 2178 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:08:38.220828 kubelet[2178]: I1213 01:08:38.220798 2178 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:08:38.221294 kubelet[2178]: I1213 01:08:38.221266 2178 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:08:38.221294 kubelet[2178]: I1213 01:08:38.221293 2178 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:08:38.221492 kubelet[2178]: I1213 
01:08:38.221468 2178 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:38.221622 kubelet[2178]: I1213 01:08:38.221601 2178 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:08:38.221643 kubelet[2178]: I1213 01:08:38.221625 2178 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:08:38.221686 kubelet[2178]: I1213 01:08:38.221671 2178 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:08:38.221716 kubelet[2178]: I1213 01:08:38.221697 2178 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:08:38.222865 kubelet[2178]: W1213 01:08:38.222814 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.222916 kubelet[2178]: W1213 01:08:38.222842 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.222942 kubelet[2178]: E1213 01:08:38.222916 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.222942 kubelet[2178]: E1213 01:08:38.222870 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.223367 kubelet[2178]: I1213 01:08:38.223342 2178 kuberuntime_manager.go:258] "Container runtime initialized" 
containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:08:38.227571 kubelet[2178]: I1213 01:08:38.227373 2178 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:08:38.232125 kubelet[2178]: W1213 01:08:38.232080 2178 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 01:08:38.234148 kubelet[2178]: I1213 01:08:38.233260 2178 server.go:1256] "Started kubelet" Dec 13 01:08:38.234148 kubelet[2178]: I1213 01:08:38.233647 2178 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:08:38.234148 kubelet[2178]: I1213 01:08:38.233681 2178 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:08:38.234148 kubelet[2178]: I1213 01:08:38.234006 2178 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:08:38.234823 kubelet[2178]: I1213 01:08:38.234793 2178 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:08:38.235181 kubelet[2178]: I1213 01:08:38.235162 2178 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:08:38.236238 kubelet[2178]: E1213 01:08:38.236051 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Dec 13 01:08:38.236238 kubelet[2178]: I1213 01:08:38.236094 2178 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:08:38.236238 kubelet[2178]: I1213 01:08:38.236193 2178 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:08:38.236396 kubelet[2178]: I1213 01:08:38.236269 2178 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:08:38.237931 kubelet[2178]: W1213 01:08:38.237434 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.237931 kubelet[2178]: E1213 01:08:38.237482 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.237931 kubelet[2178]: E1213 01:08:38.237805 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="200ms" Dec 13 01:08:38.238452 kubelet[2178]: I1213 01:08:38.238433 2178 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:08:38.239244 kubelet[2178]: E1213 01:08:38.238566 2178 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:08:38.239303 kubelet[2178]: E1213 01:08:38.239171 2178 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.54:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.54:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18109736d14e0e07 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 01:08:38.233222663 +0000 UTC m=+0.722404373,LastTimestamp:2024-12-13 01:08:38.233222663 +0000 UTC m=+0.722404373,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 01:08:38.239303 kubelet[2178]: I1213 01:08:38.239257 2178 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:08:38.240161 kubelet[2178]: I1213 01:08:38.240136 2178 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:08:38.253442 kubelet[2178]: I1213 01:08:38.253124 2178 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:08:38.253442 kubelet[2178]: I1213 01:08:38.253143 2178 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:08:38.253442 kubelet[2178]: I1213 01:08:38.253158 2178 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:08:38.255575 kubelet[2178]: I1213 01:08:38.255540 2178 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:08:38.257221 kubelet[2178]: I1213 01:08:38.257088 2178 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 01:08:38.257221 kubelet[2178]: I1213 01:08:38.257137 2178 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:08:38.257221 kubelet[2178]: I1213 01:08:38.257175 2178 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:08:38.257321 kubelet[2178]: E1213 01:08:38.257244 2178 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:08:38.258494 kubelet[2178]: W1213 01:08:38.257741 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.258494 kubelet[2178]: E1213 01:08:38.257777 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:38.337846 kubelet[2178]: I1213 01:08:38.337795 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:38.338423 kubelet[2178]: E1213 01:08:38.338382 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Dec 13 01:08:38.357685 kubelet[2178]: E1213 01:08:38.357595 2178 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:08:38.438740 kubelet[2178]: E1213 01:08:38.438609 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: 
connect: connection refused" interval="400ms" Dec 13 01:08:38.540206 kubelet[2178]: I1213 01:08:38.540158 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:38.540500 kubelet[2178]: E1213 01:08:38.540469 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Dec 13 01:08:38.558729 kubelet[2178]: E1213 01:08:38.558650 2178 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:08:38.839814 kubelet[2178]: E1213 01:08:38.839746 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="800ms" Dec 13 01:08:38.942361 kubelet[2178]: I1213 01:08:38.942317 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:38.942771 kubelet[2178]: E1213 01:08:38.942731 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Dec 13 01:08:38.948751 kubelet[2178]: I1213 01:08:38.948712 2178 policy_none.go:49] "None policy: Start" Dec 13 01:08:38.949394 kubelet[2178]: I1213 01:08:38.949372 2178 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:08:38.949465 kubelet[2178]: I1213 01:08:38.949401 2178 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:08:38.959807 kubelet[2178]: E1213 01:08:38.959766 2178 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 01:08:39.049647 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 01:08:39.064798 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:08:39.068114 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 01:08:39.077430 kubelet[2178]: I1213 01:08:39.077310 2178 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:08:39.077879 kubelet[2178]: I1213 01:08:39.077623 2178 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:08:39.078542 kubelet[2178]: W1213 01:08:39.078512 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.079153 kubelet[2178]: E1213 01:08:39.078546 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.54:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.079153 kubelet[2178]: E1213 01:08:39.079027 2178 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 01:08:39.163250 kubelet[2178]: W1213 01:08:39.163080 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.163250 kubelet[2178]: E1213 01:08:39.163163 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: 
connection refused Dec 13 01:08:39.368753 kubelet[2178]: W1213 01:08:39.368650 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.368753 kubelet[2178]: E1213 01:08:39.368768 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.546875 kubelet[2178]: W1213 01:08:39.546808 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.546875 kubelet[2178]: E1213 01:08:39.546861 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.54:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:39.640714 kubelet[2178]: E1213 01:08:39.640670 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="1.6s" Dec 13 01:08:39.744781 kubelet[2178]: I1213 01:08:39.744729 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:39.745168 kubelet[2178]: E1213 01:08:39.745133 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 
10.0.0.54:6443: connect: connection refused" node="localhost" Dec 13 01:08:39.760417 kubelet[2178]: I1213 01:08:39.760307 2178 topology_manager.go:215] "Topology Admit Handler" podUID="e998acaaff6507e71a1422b0ee133599" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 01:08:39.762194 kubelet[2178]: I1213 01:08:39.762149 2178 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 01:08:39.763916 kubelet[2178]: I1213 01:08:39.763859 2178 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 01:08:39.771768 systemd[1]: Created slice kubepods-burstable-pode998acaaff6507e71a1422b0ee133599.slice - libcontainer container kubepods-burstable-pode998acaaff6507e71a1422b0ee133599.slice. Dec 13 01:08:39.792495 systemd[1]: Created slice kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice - libcontainer container kubepods-burstable-podc4144e8f85b2123a6afada0c1705bbba.slice. Dec 13 01:08:39.810826 systemd[1]: Created slice kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice - libcontainer container kubepods-burstable-pod4f8e0d694c07e04969646aa3c152c34a.slice. 
Dec 13 01:08:39.845363 kubelet[2178]: I1213 01:08:39.845277 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:39.845363 kubelet[2178]: I1213 01:08:39.845351 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:39.845363 kubelet[2178]: I1213 01:08:39.845376 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost" Dec 13 01:08:39.845875 kubelet[2178]: I1213 01:08:39.845441 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:39.845875 kubelet[2178]: I1213 01:08:39.845485 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 01:08:39.845875 
kubelet[2178]: I1213 01:08:39.845506 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:39.845875 kubelet[2178]: I1213 01:08:39.845525 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:39.845875 kubelet[2178]: I1213 01:08:39.845544 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:39.845999 kubelet[2178]: I1213 01:08:39.845563 2178 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 01:08:40.092195 kubelet[2178]: E1213 01:08:40.092040 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.092985 containerd[1461]: time="2024-12-13T01:08:40.092942927Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e998acaaff6507e71a1422b0ee133599,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:40.108221 kubelet[2178]: E1213 01:08:40.108163 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.108913 containerd[1461]: time="2024-12-13T01:08:40.108852064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:40.115190 kubelet[2178]: E1213 01:08:40.115147 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 01:08:40.115693 containerd[1461]: time="2024-12-13T01:08:40.115638149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 01:08:40.334081 kubelet[2178]: E1213 01:08:40.334010 2178 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.54:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:40.993602 kubelet[2178]: W1213 01:08:40.993564 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:40.993602 kubelet[2178]: E1213 01:08:40.993604 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://10.0.0.54:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:41.242188 kubelet[2178]: E1213 01:08:41.242140 2178 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.54:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.54:6443: connect: connection refused" interval="3.2s" Dec 13 01:08:41.283056 kubelet[2178]: W1213 01:08:41.282987 2178 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:41.283056 kubelet[2178]: E1213 01:08:41.283042 2178 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.54:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.54:6443: connect: connection refused Dec 13 01:08:41.347511 kubelet[2178]: I1213 01:08:41.347463 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 01:08:41.347885 kubelet[2178]: E1213 01:08:41.347860 2178 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.54:6443/api/v1/nodes\": dial tcp 10.0.0.54:6443: connect: connection refused" node="localhost" Dec 13 01:08:41.421868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3381407535.mount: Deactivated successfully. 
Dec 13 01:08:41.426973 containerd[1461]: time="2024-12-13T01:08:41.426919281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:41.428714 containerd[1461]: time="2024-12-13T01:08:41.428647369Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:08:41.429772 containerd[1461]: time="2024-12-13T01:08:41.429732895Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:41.430598 containerd[1461]: time="2024-12-13T01:08:41.430555395Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:41.432047 containerd[1461]: time="2024-12-13T01:08:41.431958560Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:41.432822 containerd[1461]: time="2024-12-13T01:08:41.432770160Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:08:41.433805 containerd[1461]: time="2024-12-13T01:08:41.433717004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 01:08:41.435343 containerd[1461]: time="2024-12-13T01:08:41.435297804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:08:41.436218 
containerd[1461]: time="2024-12-13T01:08:41.436177281Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.320451628s" Dec 13 01:08:41.439302 containerd[1461]: time="2024-12-13T01:08:41.439243051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.330281479s" Dec 13 01:08:41.441454 containerd[1461]: time="2024-12-13T01:08:41.441413271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.348390603s" Dec 13 01:08:41.672323 containerd[1461]: time="2024-12-13T01:08:41.672118365Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:41.672323 containerd[1461]: time="2024-12-13T01:08:41.672254601Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:41.672323 containerd[1461]: time="2024-12-13T01:08:41.672291611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.672682 containerd[1461]: time="2024-12-13T01:08:41.672601545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:41.672682 containerd[1461]: time="2024-12-13T01:08:41.672645999Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:41.673404 containerd[1461]: time="2024-12-13T01:08:41.673360757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.673505 containerd[1461]: time="2024-12-13T01:08:41.673428764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.673824 containerd[1461]: time="2024-12-13T01:08:41.673583226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.689615 containerd[1461]: time="2024-12-13T01:08:41.689496590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:08:41.689615 containerd[1461]: time="2024-12-13T01:08:41.689555871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:08:41.689818 containerd[1461]: time="2024-12-13T01:08:41.689588734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.689818 containerd[1461]: time="2024-12-13T01:08:41.689718268Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:08:41.698641 systemd[1]: Started cri-containerd-738c896db51cb7a33866aacc23f3601cf3bab09d7ca4e421118f9bc7563e8aed.scope - libcontainer container 738c896db51cb7a33866aacc23f3601cf3bab09d7ca4e421118f9bc7563e8aed. 
Dec 13 01:08:41.705753 systemd[1]: Started cri-containerd-21b47358d02b41dc842445bcc8099f7a8d65d9d7282405b81d1c44d569758b32.scope - libcontainer container 21b47358d02b41dc842445bcc8099f7a8d65d9d7282405b81d1c44d569758b32.
Dec 13 01:08:41.765600 systemd[1]: Started cri-containerd-c5246aaae77df1ea5b6686d5c1f1261b52d2f290f29c9784500305844ba854e2.scope - libcontainer container c5246aaae77df1ea5b6686d5c1f1261b52d2f290f29c9784500305844ba854e2.
Dec 13 01:08:41.781974 containerd[1461]: time="2024-12-13T01:08:41.781925094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"738c896db51cb7a33866aacc23f3601cf3bab09d7ca4e421118f9bc7563e8aed\""
Dec 13 01:08:41.783344 kubelet[2178]: E1213 01:08:41.783279 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:41.786436 containerd[1461]: time="2024-12-13T01:08:41.786383226Z" level=info msg="CreateContainer within sandbox \"738c896db51cb7a33866aacc23f3601cf3bab09d7ca4e421118f9bc7563e8aed\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Dec 13 01:08:41.803850 containerd[1461]: time="2024-12-13T01:08:41.803757193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e998acaaff6507e71a1422b0ee133599,Namespace:kube-system,Attempt:0,} returns sandbox id \"21b47358d02b41dc842445bcc8099f7a8d65d9d7282405b81d1c44d569758b32\""
Dec 13 01:08:41.804952 kubelet[2178]: E1213 01:08:41.804928 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:41.808182 containerd[1461]: time="2024-12-13T01:08:41.808140135Z" level=info msg="CreateContainer within sandbox \"21b47358d02b41dc842445bcc8099f7a8d65d9d7282405b81d1c44d569758b32\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Dec 13 01:08:41.816023 containerd[1461]: time="2024-12-13T01:08:41.815919588Z" level=info msg="CreateContainer within sandbox \"738c896db51cb7a33866aacc23f3601cf3bab09d7ca4e421118f9bc7563e8aed\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3f743abc5cc60ab3c457efff2637dcab35c15d35a1083d4f5a4624aadb8310cb\""
Dec 13 01:08:41.816875 containerd[1461]: time="2024-12-13T01:08:41.816539857Z" level=info msg="StartContainer for \"3f743abc5cc60ab3c457efff2637dcab35c15d35a1083d4f5a4624aadb8310cb\""
Dec 13 01:08:41.824130 containerd[1461]: time="2024-12-13T01:08:41.824069849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5246aaae77df1ea5b6686d5c1f1261b52d2f290f29c9784500305844ba854e2\""
Dec 13 01:08:41.825141 kubelet[2178]: E1213 01:08:41.825093 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:41.828478 containerd[1461]: time="2024-12-13T01:08:41.828128670Z" level=info msg="CreateContainer within sandbox \"c5246aaae77df1ea5b6686d5c1f1261b52d2f290f29c9784500305844ba854e2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Dec 13 01:08:41.837784 containerd[1461]: time="2024-12-13T01:08:41.835695673Z" level=info msg="CreateContainer within sandbox \"21b47358d02b41dc842445bcc8099f7a8d65d9d7282405b81d1c44d569758b32\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6a2759ed14d0d430c02d4271745f5e9d8d88ac5f7381cb69fdd8d2332ffca99\""
Dec 13 01:08:41.838549 containerd[1461]: time="2024-12-13T01:08:41.838081300Z" level=info msg="StartContainer for \"d6a2759ed14d0d430c02d4271745f5e9d8d88ac5f7381cb69fdd8d2332ffca99\""
Dec 13 01:08:41.855967 containerd[1461]: time="2024-12-13T01:08:41.855901688Z" level=info msg="CreateContainer within sandbox \"c5246aaae77df1ea5b6686d5c1f1261b52d2f290f29c9784500305844ba854e2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"19da55240ef09c147c9fa1b1dcad2aff6497864128e349447491cc9d11b908ce\""
Dec 13 01:08:41.857073 containerd[1461]: time="2024-12-13T01:08:41.856579165Z" level=info msg="StartContainer for \"19da55240ef09c147c9fa1b1dcad2aff6497864128e349447491cc9d11b908ce\""
Dec 13 01:08:41.858572 systemd[1]: Started cri-containerd-3f743abc5cc60ab3c457efff2637dcab35c15d35a1083d4f5a4624aadb8310cb.scope - libcontainer container 3f743abc5cc60ab3c457efff2637dcab35c15d35a1083d4f5a4624aadb8310cb.
Dec 13 01:08:41.882724 systemd[1]: Started cri-containerd-d6a2759ed14d0d430c02d4271745f5e9d8d88ac5f7381cb69fdd8d2332ffca99.scope - libcontainer container d6a2759ed14d0d430c02d4271745f5e9d8d88ac5f7381cb69fdd8d2332ffca99.
Dec 13 01:08:41.899492 systemd[1]: Started cri-containerd-19da55240ef09c147c9fa1b1dcad2aff6497864128e349447491cc9d11b908ce.scope - libcontainer container 19da55240ef09c147c9fa1b1dcad2aff6497864128e349447491cc9d11b908ce.
Dec 13 01:08:42.064107 containerd[1461]: time="2024-12-13T01:08:42.064054367Z" level=info msg="StartContainer for \"d6a2759ed14d0d430c02d4271745f5e9d8d88ac5f7381cb69fdd8d2332ffca99\" returns successfully"
Dec 13 01:08:42.064107 containerd[1461]: time="2024-12-13T01:08:42.064091678Z" level=info msg="StartContainer for \"3f743abc5cc60ab3c457efff2637dcab35c15d35a1083d4f5a4624aadb8310cb\" returns successfully"
Dec 13 01:08:42.064270 containerd[1461]: time="2024-12-13T01:08:42.064182138Z" level=info msg="StartContainer for \"19da55240ef09c147c9fa1b1dcad2aff6497864128e349447491cc9d11b908ce\" returns successfully"
Dec 13 01:08:42.271691 kubelet[2178]: E1213 01:08:42.271648 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:42.274448 kubelet[2178]: E1213 01:08:42.274427 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:42.275109 kubelet[2178]: E1213 01:08:42.275087 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:43.276664 kubelet[2178]: E1213 01:08:43.276625 2178 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:43.466747 kubelet[2178]: E1213 01:08:43.466707 2178 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:08:43.906506 kubelet[2178]: E1213 01:08:43.906468 2178 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:08:44.338071 kubelet[2178]: E1213 01:08:44.338034 2178 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Dec 13 01:08:44.445319 kubelet[2178]: E1213 01:08:44.445279 2178 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Dec 13 01:08:44.549236 kubelet[2178]: I1213 01:08:44.549203 2178 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:08:44.555955 kubelet[2178]: I1213 01:08:44.555934 2178 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:08:44.561137 kubelet[2178]: E1213 01:08:44.561109 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:44.661816 kubelet[2178]: E1213 01:08:44.661675 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:44.762549 kubelet[2178]: E1213 01:08:44.762487 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:44.863028 kubelet[2178]: E1213 01:08:44.862971 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:44.963911 kubelet[2178]: E1213 01:08:44.963809 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.064356 kubelet[2178]: E1213 01:08:45.064278 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.165166 kubelet[2178]: E1213 01:08:45.165103 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.266284 kubelet[2178]: E1213 01:08:45.266229 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.366806 kubelet[2178]: E1213 01:08:45.366752 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.467390 kubelet[2178]: E1213 01:08:45.467308 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.568058 kubelet[2178]: E1213 01:08:45.567935 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.604623 systemd[1]: Reloading requested from client PID 2460 ('systemctl') (unit session-5.scope)...
Dec 13 01:08:45.604643 systemd[1]: Reloading...
Dec 13 01:08:45.668509 kubelet[2178]: E1213 01:08:45.668458 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.749379 zram_generator::config[2502]: No configuration found.
Dec 13 01:08:45.768646 kubelet[2178]: E1213 01:08:45.768606 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.852484 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:08:45.869156 kubelet[2178]: E1213 01:08:45.869117 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.941102 systemd[1]: Reloading finished in 336 ms.
Dec 13 01:08:45.969761 kubelet[2178]: E1213 01:08:45.969707 2178 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Dec 13 01:08:45.983027 kubelet[2178]: I1213 01:08:45.982866 2178 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:08:45.983025 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:08:46.005788 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 01:08:46.006078 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:08:46.006141 systemd[1]: kubelet.service: Consumed 1.211s CPU time, 116.7M memory peak, 0B memory swap peak.
Dec 13 01:08:46.016541 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:08:46.157451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:08:46.163526 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Dec 13 01:08:46.217118 kubelet[2544]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:08:46.217118 kubelet[2544]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 01:08:46.217118 kubelet[2544]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 01:08:46.217607 kubelet[2544]: I1213 01:08:46.217157 2544 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 01:08:46.222428 kubelet[2544]: I1213 01:08:46.222401 2544 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Dec 13 01:08:46.222428 kubelet[2544]: I1213 01:08:46.222423 2544 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 01:08:46.222624 kubelet[2544]: I1213 01:08:46.222607 2544 server.go:919] "Client rotation is on, will bootstrap in background"
Dec 13 01:08:46.224007 kubelet[2544]: I1213 01:08:46.223974 2544 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 13 01:08:46.226027 kubelet[2544]: I1213 01:08:46.225972 2544 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 01:08:46.237350 kubelet[2544]: I1213 01:08:46.237288 2544 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 01:08:46.237643 kubelet[2544]: I1213 01:08:46.237609 2544 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 01:08:46.237850 kubelet[2544]: I1213 01:08:46.237829 2544 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Dec 13 01:08:46.237958 kubelet[2544]: I1213 01:08:46.237862 2544 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 01:08:46.237958 kubelet[2544]: I1213 01:08:46.237873 2544 container_manager_linux.go:301] "Creating device plugin manager"
Dec 13 01:08:46.237958 kubelet[2544]: I1213 01:08:46.237906 2544 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:08:46.238044 kubelet[2544]: I1213 01:08:46.238022 2544 kubelet.go:396] "Attempting to sync node with API server"
Dec 13 01:08:46.238072 kubelet[2544]: I1213 01:08:46.238056 2544 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 01:08:46.238159 kubelet[2544]: I1213 01:08:46.238118 2544 kubelet.go:312] "Adding apiserver pod source"
Dec 13 01:08:46.238159 kubelet[2544]: I1213 01:08:46.238143 2544 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 01:08:46.239375 kubelet[2544]: I1213 01:08:46.239298 2544 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Dec 13 01:08:46.239646 kubelet[2544]: I1213 01:08:46.239624 2544 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 01:08:46.240324 kubelet[2544]: I1213 01:08:46.240302 2544 server.go:1256] "Started kubelet"
Dec 13 01:08:46.241905 kubelet[2544]: I1213 01:08:46.241877 2544 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 01:08:46.246712 kubelet[2544]: I1213 01:08:46.246684 2544 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 01:08:46.247953 kubelet[2544]: I1213 01:08:46.247926 2544 server.go:461] "Adding debug handlers to kubelet server"
Dec 13 01:08:46.249391 kubelet[2544]: I1213 01:08:46.249361 2544 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 01:08:46.249604 kubelet[2544]: I1213 01:08:46.249582 2544 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 01:08:46.251013 kubelet[2544]: I1213 01:08:46.250993 2544 volume_manager.go:291] "Starting Kubelet Volume Manager"
Dec 13 01:08:46.251909 kubelet[2544]: I1213 01:08:46.251894 2544 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Dec 13 01:08:46.252747 kubelet[2544]: I1213 01:08:46.252731 2544 reconciler_new.go:29] "Reconciler: start to sync state"
Dec 13 01:08:46.259634 kubelet[2544]: I1213 01:08:46.259238 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 01:08:46.260456 kubelet[2544]: I1213 01:08:46.260433 2544 factory.go:221] Registration of the systemd container factory successfully
Dec 13 01:08:46.260576 kubelet[2544]: I1213 01:08:46.260546 2544 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 01:08:46.261531 kubelet[2544]: I1213 01:08:46.261515 2544 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 01:08:46.262664 kubelet[2544]: I1213 01:08:46.262644 2544 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 01:08:46.262756 kubelet[2544]: I1213 01:08:46.262747 2544 kubelet.go:2329] "Starting kubelet main sync loop"
Dec 13 01:08:46.263149 kubelet[2544]: E1213 01:08:46.262860 2544 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 01:08:46.266063 kubelet[2544]: I1213 01:08:46.266048 2544 factory.go:221] Registration of the containerd container factory successfully
Dec 13 01:08:46.268016 kubelet[2544]: E1213 01:08:46.267987 2544 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 01:08:46.301067 kubelet[2544]: I1213 01:08:46.300966 2544 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 01:08:46.301067 kubelet[2544]: I1213 01:08:46.300988 2544 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 01:08:46.301067 kubelet[2544]: I1213 01:08:46.301008 2544 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 01:08:46.301295 kubelet[2544]: I1213 01:08:46.301167 2544 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Dec 13 01:08:46.301295 kubelet[2544]: I1213 01:08:46.301191 2544 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Dec 13 01:08:46.301295 kubelet[2544]: I1213 01:08:46.301200 2544 policy_none.go:49] "None policy: Start"
Dec 13 01:08:46.302080 kubelet[2544]: I1213 01:08:46.302032 2544 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 01:08:46.302080 kubelet[2544]: I1213 01:08:46.302085 2544 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 01:08:46.302403 kubelet[2544]: I1213 01:08:46.302327 2544 state_mem.go:75] "Updated machine memory state"
Dec 13 01:08:46.309040 kubelet[2544]: I1213 01:08:46.308999 2544 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 01:08:46.310349 kubelet[2544]: I1213 01:08:46.309356 2544 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 01:08:46.358287 kubelet[2544]: I1213 01:08:46.358244 2544 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Dec 13 01:08:46.363404 kubelet[2544]: I1213 01:08:46.363371 2544 topology_manager.go:215] "Topology Admit Handler" podUID="e998acaaff6507e71a1422b0ee133599" podNamespace="kube-system" podName="kube-apiserver-localhost"
Dec 13 01:08:46.363484 kubelet[2544]: I1213 01:08:46.363466 2544 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Dec 13 01:08:46.363520 kubelet[2544]: I1213 01:08:46.363506 2544 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost"
Dec 13 01:08:46.392023 kubelet[2544]: I1213 01:08:46.391970 2544 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Dec 13 01:08:46.392178 kubelet[2544]: I1213 01:08:46.392082 2544 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Dec 13 01:08:46.455587 kubelet[2544]: I1213 01:08:46.455293 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:08:46.455587 kubelet[2544]: I1213 01:08:46.455351 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:08:46.455587 kubelet[2544]: I1213 01:08:46.455376 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:08:46.455587 kubelet[2544]: I1213 01:08:46.455395 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:08:46.455587 kubelet[2544]: I1213 01:08:46.455422 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:08:46.456011 kubelet[2544]: I1213 01:08:46.455442 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost"
Dec 13 01:08:46.456011 kubelet[2544]: I1213 01:08:46.455469 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e998acaaff6507e71a1422b0ee133599-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e998acaaff6507e71a1422b0ee133599\") " pod="kube-system/kube-apiserver-localhost"
Dec 13 01:08:46.456011 kubelet[2544]: I1213 01:08:46.455488 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:08:46.456011 kubelet[2544]: I1213 01:08:46.455510 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost"
Dec 13 01:08:46.691211 kubelet[2544]: E1213 01:08:46.691164 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:46.691970 kubelet[2544]: E1213 01:08:46.691408 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:46.691970 kubelet[2544]: E1213 01:08:46.691935 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:47.239610 kubelet[2544]: I1213 01:08:47.239548 2544 apiserver.go:52] "Watching apiserver"
Dec 13 01:08:47.278995 kubelet[2544]: E1213 01:08:47.278628 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:47.278995 kubelet[2544]: E1213 01:08:47.278905 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:47.310645 kubelet[2544]: E1213 01:08:47.310022 2544 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Dec 13 01:08:47.310645 kubelet[2544]: E1213 01:08:47.310576 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:47.353240 kubelet[2544]: I1213 01:08:47.353167 2544 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Dec 13 01:08:47.441913 kubelet[2544]: I1213 01:08:47.441851 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.441771375 podStartE2EDuration="1.441771375s" podCreationTimestamp="2024-12-13 01:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:47.370296053 +0000 UTC m=+1.202176405" watchObservedRunningTime="2024-12-13 01:08:47.441771375 +0000 UTC m=+1.273651707"
Dec 13 01:08:47.509235 kubelet[2544]: I1213 01:08:47.508824 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.508788949 podStartE2EDuration="1.508788949s" podCreationTimestamp="2024-12-13 01:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:47.508579384 +0000 UTC m=+1.340459716" watchObservedRunningTime="2024-12-13 01:08:47.508788949 +0000 UTC m=+1.340669271"
Dec 13 01:08:47.509235 kubelet[2544]: I1213 01:08:47.508892 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5088781359999999 podStartE2EDuration="1.508878136s" podCreationTimestamp="2024-12-13 01:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:08:47.441921918 +0000 UTC m=+1.273802270" watchObservedRunningTime="2024-12-13 01:08:47.508878136 +0000 UTC m=+1.340758468"
Dec 13 01:08:47.665563 sudo[1600]: pam_unix(sudo:session): session closed for user root
Dec 13 01:08:47.669813 sshd[1597]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:47.674838 systemd[1]: sshd@4-10.0.0.54:22-10.0.0.1:43022.service: Deactivated successfully.
Dec 13 01:08:47.677016 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 01:08:47.677226 systemd[1]: session-5.scope: Consumed 4.765s CPU time, 189.6M memory peak, 0B memory swap peak.
Dec 13 01:08:47.677925 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit.
Dec 13 01:08:47.679051 systemd-logind[1452]: Removed session 5.
Dec 13 01:08:48.279593 kubelet[2544]: E1213 01:08:48.279551 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:49.281355 kubelet[2544]: E1213 01:08:49.281305 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:50.083487 update_engine[1456]: I20241213 01:08:50.083382 1456 update_attempter.cc:509] Updating boot flags...
Dec 13 01:08:50.112387 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2619)
Dec 13 01:08:50.149365 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2619)
Dec 13 01:08:50.184393 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 41 scanned by (udev-worker) (2619)
Dec 13 01:08:53.107191 kubelet[2544]: E1213 01:08:53.107152 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:53.287064 kubelet[2544]: E1213 01:08:53.287034 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:55.515975 kubelet[2544]: E1213 01:08:55.515920 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:56.291580 kubelet[2544]: E1213 01:08:56.291543 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:08:58.436084 kubelet[2544]: E1213 01:08:58.436043 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:00.463325 kubelet[2544]: I1213 01:09:00.463288 2544 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 01:09:00.463822 kubelet[2544]: I1213 01:09:00.463756 2544 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 01:09:00.463855 containerd[1461]: time="2024-12-13T01:09:00.463590690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 01:09:00.951635 kubelet[2544]: I1213 01:09:00.951038 2544 topology_manager.go:215] "Topology Admit Handler" podUID="2f18b0b5-d936-4442-a4c7-5532a149e527" podNamespace="kube-system" podName="kube-proxy-9bmjv"
Dec 13 01:09:00.955999 kubelet[2544]: I1213 01:09:00.955932 2544 topology_manager.go:215] "Topology Admit Handler" podUID="2aa77e08-433f-46d3-9638-6b0850c5a749" podNamespace="kube-flannel" podName="kube-flannel-ds-5tqhd"
Dec 13 01:09:00.964045 systemd[1]: Created slice kubepods-besteffort-pod2f18b0b5_d936_4442_a4c7_5532a149e527.slice - libcontainer container kubepods-besteffort-pod2f18b0b5_d936_4442_a4c7_5532a149e527.slice.
Dec 13 01:09:00.982926 systemd[1]: Created slice kubepods-burstable-pod2aa77e08_433f_46d3_9638_6b0850c5a749.slice - libcontainer container kubepods-burstable-pod2aa77e08_433f_46d3_9638_6b0850c5a749.slice.
Dec 13 01:09:01.046806 kubelet[2544]: I1213 01:09:01.046772 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/2aa77e08-433f-46d3-9638-6b0850c5a749-cni\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.046806 kubelet[2544]: I1213 01:09:01.046808 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2aa77e08-433f-46d3-9638-6b0850c5a749-run\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.046969 kubelet[2544]: I1213 01:09:01.046831 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/2aa77e08-433f-46d3-9638-6b0850c5a749-cni-plugin\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.046969 kubelet[2544]: I1213 01:09:01.046864 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/2aa77e08-433f-46d3-9638-6b0850c5a749-flannel-cfg\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.046969 kubelet[2544]: I1213 01:09:01.046884 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f18b0b5-d936-4442-a4c7-5532a149e527-kube-proxy\") pod \"kube-proxy-9bmjv\" (UID: \"2f18b0b5-d936-4442-a4c7-5532a149e527\") " pod="kube-system/kube-proxy-9bmjv"
Dec 13 01:09:01.048070 kubelet[2544]: I1213 01:09:01.048045 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f18b0b5-d936-4442-a4c7-5532a149e527-xtables-lock\") pod \"kube-proxy-9bmjv\" (UID: \"2f18b0b5-d936-4442-a4c7-5532a149e527\") " pod="kube-system/kube-proxy-9bmjv"
Dec 13 01:09:01.048174 kubelet[2544]: I1213 01:09:01.048131 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr75p\" (UniqueName: \"kubernetes.io/projected/2f18b0b5-d936-4442-a4c7-5532a149e527-kube-api-access-lr75p\") pod \"kube-proxy-9bmjv\" (UID: \"2f18b0b5-d936-4442-a4c7-5532a149e527\") " pod="kube-system/kube-proxy-9bmjv"
Dec 13 01:09:01.048204 kubelet[2544]: I1213 01:09:01.048186 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aa77e08-433f-46d3-9638-6b0850c5a749-xtables-lock\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.048226 kubelet[2544]: I1213 01:09:01.048211 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6bb4\" (UniqueName: \"kubernetes.io/projected/2aa77e08-433f-46d3-9638-6b0850c5a749-kube-api-access-r6bb4\") pod \"kube-flannel-ds-5tqhd\" (UID: \"2aa77e08-433f-46d3-9638-6b0850c5a749\") " pod="kube-flannel/kube-flannel-ds-5tqhd"
Dec 13 01:09:01.048256 kubelet[2544]: I1213 01:09:01.048230 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f18b0b5-d936-4442-a4c7-5532a149e527-lib-modules\") pod \"kube-proxy-9bmjv\" (UID: \"2f18b0b5-d936-4442-a4c7-5532a149e527\") " pod="kube-system/kube-proxy-9bmjv"
Dec 13 01:09:01.282313 kubelet[2544]: E1213 01:09:01.282289 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:01.282748 containerd[1461]: time="2024-12-13T01:09:01.282686339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bmjv,Uid:2f18b0b5-d936-4442-a4c7-5532a149e527,Namespace:kube-system,Attempt:0,}"
Dec 13 01:09:01.285372 kubelet[2544]: E1213 01:09:01.285350 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:01.285715 containerd[1461]: time="2024-12-13T01:09:01.285682095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5tqhd,Uid:2aa77e08-433f-46d3-9638-6b0850c5a749,Namespace:kube-flannel,Attempt:0,}"
Dec 13 01:09:01.316668 containerd[1461]: time="2024-12-13T01:09:01.316431481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:01.316668 containerd[1461]: time="2024-12-13T01:09:01.316485762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:01.316668 containerd[1461]: time="2024-12-13T01:09:01.316499318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:01.316668 containerd[1461]: time="2024-12-13T01:09:01.316582094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:01.318130 containerd[1461]: time="2024-12-13T01:09:01.318045162Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:09:01.318130 containerd[1461]: time="2024-12-13T01:09:01.318100566Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:09:01.318130 containerd[1461]: time="2024-12-13T01:09:01.318114903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:01.318319 containerd[1461]: time="2024-12-13T01:09:01.318193159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:09:01.337481 systemd[1]: Started cri-containerd-caff8cbbc6324cf44784e7ebeb8b6f04ad723d3c5876f69ea6731769c5184c94.scope - libcontainer container caff8cbbc6324cf44784e7ebeb8b6f04ad723d3c5876f69ea6731769c5184c94. Dec 13 01:09:01.342478 systemd[1]: Started cri-containerd-ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe.scope - libcontainer container ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe. 
Dec 13 01:09:01.371541 containerd[1461]: time="2024-12-13T01:09:01.371500538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9bmjv,Uid:2f18b0b5-d936-4442-a4c7-5532a149e527,Namespace:kube-system,Attempt:0,} returns sandbox id \"caff8cbbc6324cf44784e7ebeb8b6f04ad723d3c5876f69ea6731769c5184c94\""
Dec 13 01:09:01.372396 kubelet[2544]: E1213 01:09:01.372375 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:01.375444 containerd[1461]: time="2024-12-13T01:09:01.375413928Z" level=info msg="CreateContainer within sandbox \"caff8cbbc6324cf44784e7ebeb8b6f04ad723d3c5876f69ea6731769c5184c94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 01:09:01.387778 containerd[1461]: time="2024-12-13T01:09:01.387742591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-5tqhd,Uid:2aa77e08-433f-46d3-9638-6b0850c5a749,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\""
Dec 13 01:09:01.389562 kubelet[2544]: E1213 01:09:01.389534 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:01.390432 containerd[1461]: time="2024-12-13T01:09:01.390409920Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\""
Dec 13 01:09:01.396668 containerd[1461]: time="2024-12-13T01:09:01.396625783Z" level=info msg="CreateContainer within sandbox \"caff8cbbc6324cf44784e7ebeb8b6f04ad723d3c5876f69ea6731769c5184c94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c1a68d7934ae9b5e32bc4c99f1a6d1790debb363bc15064959cc78541fd19117\""
Dec 13 01:09:01.397122 containerd[1461]: time="2024-12-13T01:09:01.397104612Z" level=info msg="StartContainer for \"c1a68d7934ae9b5e32bc4c99f1a6d1790debb363bc15064959cc78541fd19117\""
Dec 13 01:09:01.431562 systemd[1]: Started cri-containerd-c1a68d7934ae9b5e32bc4c99f1a6d1790debb363bc15064959cc78541fd19117.scope - libcontainer container c1a68d7934ae9b5e32bc4c99f1a6d1790debb363bc15064959cc78541fd19117.
Dec 13 01:09:01.462139 containerd[1461]: time="2024-12-13T01:09:01.462089742Z" level=info msg="StartContainer for \"c1a68d7934ae9b5e32bc4c99f1a6d1790debb363bc15064959cc78541fd19117\" returns successfully"
Dec 13 01:09:02.303731 kubelet[2544]: E1213 01:09:02.303670 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:02.313040 kubelet[2544]: I1213 01:09:02.312708 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9bmjv" podStartSLOduration=2.312653282 podStartE2EDuration="2.312653282s" podCreationTimestamp="2024-12-13 01:09:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:09:02.312349712 +0000 UTC m=+16.144230044" watchObservedRunningTime="2024-12-13 01:09:02.312653282 +0000 UTC m=+16.144533614"
Dec 13 01:09:03.085369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3925214593.mount: Deactivated successfully.
Dec 13 01:09:03.121634 containerd[1461]: time="2024-12-13T01:09:03.121585599Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:03.122385 containerd[1461]: time="2024-12-13T01:09:03.122312915Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937"
Dec 13 01:09:03.123546 containerd[1461]: time="2024-12-13T01:09:03.123511686Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:03.125817 containerd[1461]: time="2024-12-13T01:09:03.125780085Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:03.126558 containerd[1461]: time="2024-12-13T01:09:03.126519824Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.736061193s"
Dec 13 01:09:03.126585 containerd[1461]: time="2024-12-13T01:09:03.126561011Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\""
Dec 13 01:09:03.128252 containerd[1461]: time="2024-12-13T01:09:03.128221639Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}"
Dec 13 01:09:03.140796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829383798.mount: Deactivated successfully.
Dec 13 01:09:03.141529 containerd[1461]: time="2024-12-13T01:09:03.141085774Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c\""
Dec 13 01:09:03.141881 containerd[1461]: time="2024-12-13T01:09:03.141683636Z" level=info msg="StartContainer for \"f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c\""
Dec 13 01:09:03.167020 systemd[1]: run-containerd-runc-k8s.io-f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c-runc.fD3FqS.mount: Deactivated successfully.
Dec 13 01:09:03.180472 systemd[1]: Started cri-containerd-f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c.scope - libcontainer container f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c.
Dec 13 01:09:03.205958 containerd[1461]: time="2024-12-13T01:09:03.205908779Z" level=info msg="StartContainer for \"f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c\" returns successfully"
Dec 13 01:09:03.206233 systemd[1]: cri-containerd-f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c.scope: Deactivated successfully.
Dec 13 01:09:03.226888 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c-rootfs.mount: Deactivated successfully.
Dec 13 01:09:03.272679 containerd[1461]: time="2024-12-13T01:09:03.270049272Z" level=info msg="shim disconnected" id=f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c namespace=k8s.io
Dec 13 01:09:03.272679 containerd[1461]: time="2024-12-13T01:09:03.272674712Z" level=warning msg="cleaning up after shim disconnected" id=f9bc77102c56109affe51387c2fc06a822fd5475877a2f42a8a0fec8f532a70c namespace=k8s.io
Dec 13 01:09:03.272679 containerd[1461]: time="2024-12-13T01:09:03.272687746Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:09:03.306240 kubelet[2544]: E1213 01:09:03.306208 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:03.307026 containerd[1461]: time="2024-12-13T01:09:03.306992819Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\""
Dec 13 01:09:05.301773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655845497.mount: Deactivated successfully.
Dec 13 01:09:05.836588 containerd[1461]: time="2024-12-13T01:09:05.836546042Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:05.837254 containerd[1461]: time="2024-12-13T01:09:05.837227743Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358"
Dec 13 01:09:05.838316 containerd[1461]: time="2024-12-13T01:09:05.838290358Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:05.840823 containerd[1461]: time="2024-12-13T01:09:05.840783638Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:09:05.841867 containerd[1461]: time="2024-12-13T01:09:05.841839741Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.534809432s"
Dec 13 01:09:05.841913 containerd[1461]: time="2024-12-13T01:09:05.841867964Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\""
Dec 13 01:09:05.848476 containerd[1461]: time="2024-12-13T01:09:05.848444450Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Dec 13 01:09:05.861080 containerd[1461]: time="2024-12-13T01:09:05.861043584Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e\""
Dec 13 01:09:05.861501 containerd[1461]: time="2024-12-13T01:09:05.861466558Z" level=info msg="StartContainer for \"422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e\""
Dec 13 01:09:05.888467 systemd[1]: Started cri-containerd-422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e.scope - libcontainer container 422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e.
Dec 13 01:09:05.913206 systemd[1]: cri-containerd-422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e.scope: Deactivated successfully.
Dec 13 01:09:05.913414 containerd[1461]: time="2024-12-13T01:09:05.913374980Z" level=info msg="StartContainer for \"422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e\" returns successfully"
Dec 13 01:09:05.980468 kubelet[2544]: I1213 01:09:05.980424 2544 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Dec 13 01:09:06.149748 containerd[1461]: time="2024-12-13T01:09:06.149099754Z" level=info msg="shim disconnected" id=422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e namespace=k8s.io
Dec 13 01:09:06.149748 containerd[1461]: time="2024-12-13T01:09:06.149166619Z" level=warning msg="cleaning up after shim disconnected" id=422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e namespace=k8s.io
Dec 13 01:09:06.149748 containerd[1461]: time="2024-12-13T01:09:06.149176488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Dec 13 01:09:06.149917 kubelet[2544]: I1213 01:09:06.149713 2544 topology_manager.go:215] "Topology Admit Handler" podUID="a17e3075-050a-47a5-a3de-58d62b9abfc5" podNamespace="kube-system" podName="coredns-76f75df574-tgp6k"
Dec 13 01:09:06.149917 kubelet[2544]: I1213 01:09:06.149888 2544 topology_manager.go:215] "Topology Admit Handler" podUID="716a9e94-1932-481f-899c-cc1bcc0a88e5" podNamespace="kube-system" podName="coredns-76f75df574-b48hf"
Dec 13 01:09:06.160097 systemd[1]: Created slice kubepods-burstable-pod716a9e94_1932_481f_899c_cc1bcc0a88e5.slice - libcontainer container kubepods-burstable-pod716a9e94_1932_481f_899c_cc1bcc0a88e5.slice.
Dec 13 01:09:06.165360 systemd[1]: Created slice kubepods-burstable-poda17e3075_050a_47a5_a3de_58d62b9abfc5.slice - libcontainer container kubepods-burstable-poda17e3075_050a_47a5_a3de_58d62b9abfc5.slice.
Dec 13 01:09:06.188004 kubelet[2544]: I1213 01:09:06.187960 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17e3075-050a-47a5-a3de-58d62b9abfc5-config-volume\") pod \"coredns-76f75df574-tgp6k\" (UID: \"a17e3075-050a-47a5-a3de-58d62b9abfc5\") " pod="kube-system/coredns-76f75df574-tgp6k"
Dec 13 01:09:06.188004 kubelet[2544]: I1213 01:09:06.188005 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftw4d\" (UniqueName: \"kubernetes.io/projected/716a9e94-1932-481f-899c-cc1bcc0a88e5-kube-api-access-ftw4d\") pod \"coredns-76f75df574-b48hf\" (UID: \"716a9e94-1932-481f-899c-cc1bcc0a88e5\") " pod="kube-system/coredns-76f75df574-b48hf"
Dec 13 01:09:06.188184 kubelet[2544]: I1213 01:09:06.188072 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn4jr\" (UniqueName: \"kubernetes.io/projected/a17e3075-050a-47a5-a3de-58d62b9abfc5-kube-api-access-kn4jr\") pod \"coredns-76f75df574-tgp6k\" (UID: \"a17e3075-050a-47a5-a3de-58d62b9abfc5\") " pod="kube-system/coredns-76f75df574-tgp6k"
Dec 13 01:09:06.188184 kubelet[2544]: I1213 01:09:06.188101 2544 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/716a9e94-1932-481f-899c-cc1bcc0a88e5-config-volume\") pod \"coredns-76f75df574-b48hf\" (UID: \"716a9e94-1932-481f-899c-cc1bcc0a88e5\") " pod="kube-system/coredns-76f75df574-b48hf"
Dec 13 01:09:06.224431 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-422d1b3e291b8f1fae730f4a11143380a59cbe60741e72bdb7ae3078ccb0d02e-rootfs.mount: Deactivated successfully.
Dec 13 01:09:06.317100 kubelet[2544]: E1213 01:09:06.317074 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:06.318663 containerd[1461]: time="2024-12-13T01:09:06.318610047Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}"
Dec 13 01:09:06.336445 containerd[1461]: time="2024-12-13T01:09:06.336398811Z" level=info msg="CreateContainer within sandbox \"ee87ab21d813cc0634e30681995c3c511bdb57eb676a560043fb3ebdd82204fe\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"7552640fafeef709a1cc12fae331eefee5726e017bfd02d344c8a2a49d373209\""
Dec 13 01:09:06.336963 containerd[1461]: time="2024-12-13T01:09:06.336938163Z" level=info msg="StartContainer for \"7552640fafeef709a1cc12fae331eefee5726e017bfd02d344c8a2a49d373209\""
Dec 13 01:09:06.366488 systemd[1]: Started cri-containerd-7552640fafeef709a1cc12fae331eefee5726e017bfd02d344c8a2a49d373209.scope - libcontainer container 7552640fafeef709a1cc12fae331eefee5726e017bfd02d344c8a2a49d373209.
Dec 13 01:09:06.395006 containerd[1461]: time="2024-12-13T01:09:06.394945558Z" level=info msg="StartContainer for \"7552640fafeef709a1cc12fae331eefee5726e017bfd02d344c8a2a49d373209\" returns successfully"
Dec 13 01:09:06.466654 kubelet[2544]: E1213 01:09:06.466491 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:06.467323 containerd[1461]: time="2024-12-13T01:09:06.467111471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b48hf,Uid:716a9e94-1932-481f-899c-cc1bcc0a88e5,Namespace:kube-system,Attempt:0,}"
Dec 13 01:09:06.468887 kubelet[2544]: E1213 01:09:06.468864 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:06.469364 containerd[1461]: time="2024-12-13T01:09:06.469308906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tgp6k,Uid:a17e3075-050a-47a5-a3de-58d62b9abfc5,Namespace:kube-system,Attempt:0,}"
Dec 13 01:09:06.500254 containerd[1461]: time="2024-12-13T01:09:06.500203673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tgp6k,Uid:a17e3075-050a-47a5-a3de-58d62b9abfc5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c08ab380cf6759e7f58e766a075634310597a5d9c294d4a5586bfb7f0030ff5a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:09:06.500564 kubelet[2544]: E1213 01:09:06.500518 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08ab380cf6759e7f58e766a075634310597a5d9c294d4a5586bfb7f0030ff5a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:09:06.500624 kubelet[2544]: E1213 01:09:06.500605 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08ab380cf6759e7f58e766a075634310597a5d9c294d4a5586bfb7f0030ff5a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-tgp6k"
Dec 13 01:09:06.500652 kubelet[2544]: E1213 01:09:06.500626 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c08ab380cf6759e7f58e766a075634310597a5d9c294d4a5586bfb7f0030ff5a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-tgp6k"
Dec 13 01:09:06.500709 kubelet[2544]: E1213 01:09:06.500695 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tgp6k_kube-system(a17e3075-050a-47a5-a3de-58d62b9abfc5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tgp6k_kube-system(a17e3075-050a-47a5-a3de-58d62b9abfc5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c08ab380cf6759e7f58e766a075634310597a5d9c294d4a5586bfb7f0030ff5a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-tgp6k" podUID="a17e3075-050a-47a5-a3de-58d62b9abfc5"
Dec 13 01:09:06.501165 containerd[1461]: time="2024-12-13T01:09:06.501122458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b48hf,Uid:716a9e94-1932-481f-899c-cc1bcc0a88e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"901ed20cc9fcbfd7507699be9f10f8b11f339fd9746649883fdbaefaf10b21ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:09:06.501284 kubelet[2544]: E1213 01:09:06.501267 2544 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ed20cc9fcbfd7507699be9f10f8b11f339fd9746649883fdbaefaf10b21ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory"
Dec 13 01:09:06.501325 kubelet[2544]: E1213 01:09:06.501294 2544 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ed20cc9fcbfd7507699be9f10f8b11f339fd9746649883fdbaefaf10b21ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-b48hf"
Dec 13 01:09:06.501418 kubelet[2544]: E1213 01:09:06.501309 2544 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"901ed20cc9fcbfd7507699be9f10f8b11f339fd9746649883fdbaefaf10b21ac\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-b48hf"
Dec 13 01:09:06.501471 kubelet[2544]: E1213 01:09:06.501446 2544 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-b48hf_kube-system(716a9e94-1932-481f-899c-cc1bcc0a88e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-b48hf_kube-system(716a9e94-1932-481f-899c-cc1bcc0a88e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"901ed20cc9fcbfd7507699be9f10f8b11f339fd9746649883fdbaefaf10b21ac\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-b48hf" podUID="716a9e94-1932-481f-899c-cc1bcc0a88e5"
Dec 13 01:09:07.320174 kubelet[2544]: E1213 01:09:07.320141 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:07.327866 kubelet[2544]: I1213 01:09:07.327825 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-5tqhd" podStartSLOduration=2.875608155 podStartE2EDuration="7.32779065s" podCreationTimestamp="2024-12-13 01:09:00 +0000 UTC" firstStartedPulling="2024-12-13 01:09:01.389891957 +0000 UTC m=+15.221772289" lastFinishedPulling="2024-12-13 01:09:05.842074452 +0000 UTC m=+19.673954784" observedRunningTime="2024-12-13 01:09:07.327137574 +0000 UTC m=+21.159017906" watchObservedRunningTime="2024-12-13 01:09:07.32779065 +0000 UTC m=+21.159670972"
Dec 13 01:09:07.458411 systemd-networkd[1398]: flannel.1: Link UP
Dec 13 01:09:07.458737 systemd-networkd[1398]: flannel.1: Gained carrier
Dec 13 01:09:08.321230 kubelet[2544]: E1213 01:09:08.321195 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:09.008527 systemd-networkd[1398]: flannel.1: Gained IPv6LL
Dec 13 01:09:14.377813 systemd[1]: Started sshd@5-10.0.0.54:22-10.0.0.1:59224.service - OpenSSH per-connection server daemon (10.0.0.1:59224).
Dec 13 01:09:14.417968 sshd[3218]: Accepted publickey for core from 10.0.0.1 port 59224 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:14.419447 sshd[3218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:14.423440 systemd-logind[1452]: New session 6 of user core.
Dec 13 01:09:14.430498 systemd[1]: Started session-6.scope - Session 6 of User core.
Dec 13 01:09:14.547652 sshd[3218]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:14.552629 systemd[1]: sshd@5-10.0.0.54:22-10.0.0.1:59224.service: Deactivated successfully.
Dec 13 01:09:14.554698 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 01:09:14.555287 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Dec 13 01:09:14.556266 systemd-logind[1452]: Removed session 6.
Dec 13 01:09:18.264116 kubelet[2544]: E1213 01:09:18.264058 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:18.264620 containerd[1461]: time="2024-12-13T01:09:18.264513682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b48hf,Uid:716a9e94-1932-481f-899c-cc1bcc0a88e5,Namespace:kube-system,Attempt:0,}"
Dec 13 01:09:18.291220 systemd-networkd[1398]: cni0: Link UP
Dec 13 01:09:18.291228 systemd-networkd[1398]: cni0: Gained carrier
Dec 13 01:09:18.294650 systemd-networkd[1398]: cni0: Lost carrier
Dec 13 01:09:18.299385 systemd-networkd[1398]: veth7f6a21df: Link UP
Dec 13 01:09:18.300658 kernel: cni0: port 1(veth7f6a21df) entered blocking state
Dec 13 01:09:18.300720 kernel: cni0: port 1(veth7f6a21df) entered disabled state
Dec 13 01:09:18.301458 kernel: veth7f6a21df: entered allmulticast mode
Dec 13 01:09:18.302461 kernel: veth7f6a21df: entered promiscuous mode
Dec 13 01:09:18.304156 kernel: cni0: port 1(veth7f6a21df) entered blocking state
Dec 13 01:09:18.304196 kernel: cni0: port 1(veth7f6a21df) entered forwarding state
Dec 13 01:09:18.305426 kernel: cni0: port 1(veth7f6a21df) entered disabled state
Dec 13 01:09:18.310948 kernel: cni0: port 1(veth7f6a21df) entered blocking state
Dec 13 01:09:18.311003 kernel: cni0: port 1(veth7f6a21df) entered forwarding state
Dec 13 01:09:18.311032 systemd-networkd[1398]: veth7f6a21df: Gained carrier
Dec 13 01:09:18.311504 systemd-networkd[1398]: cni0: Gained carrier
Dec 13 01:09:18.318548 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"}
Dec 13 01:09:18.318548 containerd[1461]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:09:18.338164 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:09:18.337506945Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:09:18.338348 containerd[1461]: time="2024-12-13T01:09:18.338146554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:09:18.338348 containerd[1461]: time="2024-12-13T01:09:18.338160089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:09:18.338348 containerd[1461]: time="2024-12-13T01:09:18.338236112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:09:18.374556 systemd[1]: Started cri-containerd-975368dabeb22094d48fcc9bd990ecdbeb2de60333f63aa8cb385564eb8d1ccb.scope - libcontainer container 975368dabeb22094d48fcc9bd990ecdbeb2de60333f63aa8cb385564eb8d1ccb.
Dec 13 01:09:18.386781 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:09:18.409795 containerd[1461]: time="2024-12-13T01:09:18.409761000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-b48hf,Uid:716a9e94-1932-481f-899c-cc1bcc0a88e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"975368dabeb22094d48fcc9bd990ecdbeb2de60333f63aa8cb385564eb8d1ccb\""
Dec 13 01:09:18.410436 kubelet[2544]: E1213 01:09:18.410388 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:18.414387 containerd[1461]: time="2024-12-13T01:09:18.414351863Z" level=info msg="CreateContainer within sandbox \"975368dabeb22094d48fcc9bd990ecdbeb2de60333f63aa8cb385564eb8d1ccb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:09:18.429220 containerd[1461]: time="2024-12-13T01:09:18.429160097Z" level=info msg="CreateContainer within sandbox \"975368dabeb22094d48fcc9bd990ecdbeb2de60333f63aa8cb385564eb8d1ccb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ca87352348f4564d9675aacfadef411bf6451de9e5b39ff270be138a48c2cb9a\""
Dec 13 01:09:18.429710 containerd[1461]: time="2024-12-13T01:09:18.429681044Z" level=info msg="StartContainer for \"ca87352348f4564d9675aacfadef411bf6451de9e5b39ff270be138a48c2cb9a\""
Dec 13 01:09:18.455534 systemd[1]: Started cri-containerd-ca87352348f4564d9675aacfadef411bf6451de9e5b39ff270be138a48c2cb9a.scope - libcontainer container ca87352348f4564d9675aacfadef411bf6451de9e5b39ff270be138a48c2cb9a.
Dec 13 01:09:18.551590 containerd[1461]: time="2024-12-13T01:09:18.551470729Z" level=info msg="StartContainer for \"ca87352348f4564d9675aacfadef411bf6451de9e5b39ff270be138a48c2cb9a\" returns successfully"
Dec 13 01:09:19.274476 kubelet[2544]: E1213 01:09:19.274420 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:19.279851 containerd[1461]: time="2024-12-13T01:09:19.279808402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tgp6k,Uid:a17e3075-050a-47a5-a3de-58d62b9abfc5,Namespace:kube-system,Attempt:0,}"
Dec 13 01:09:19.280071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2740111003.mount: Deactivated successfully.
Dec 13 01:09:19.359643 kubelet[2544]: E1213 01:09:19.359608 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:19.415391 kubelet[2544]: I1213 01:09:19.415359 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-b48hf" podStartSLOduration=18.415268979 podStartE2EDuration="18.415268979s" podCreationTimestamp="2024-12-13 01:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:09:19.415002259 +0000 UTC m=+33.246882591" watchObservedRunningTime="2024-12-13 01:09:19.415268979 +0000 UTC m=+33.247149311"
Dec 13 01:09:19.481626 systemd-networkd[1398]: vethe834b5c1: Link UP
Dec 13 01:09:19.483871 kernel: cni0: port 2(vethe834b5c1) entered blocking state
Dec 13 01:09:19.483922 kernel: cni0: port 2(vethe834b5c1) entered disabled state
Dec 13 01:09:19.483951 kernel: vethe834b5c1: entered allmulticast mode
Dec 13 01:09:19.485371 kernel: vethe834b5c1: entered promiscuous mode
Dec 13 01:09:19.490832 kernel: cni0: port 2(vethe834b5c1) entered blocking state
Dec 13 01:09:19.490920 kernel: cni0: port 2(vethe834b5c1) entered forwarding state
Dec 13 01:09:19.490874 systemd-networkd[1398]: vethe834b5c1: Gained carrier
Dec 13 01:09:19.497985 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"}
Dec 13 01:09:19.497985 containerd[1461]: delegateAdd: netconf sent to delegate plugin:
Dec 13 01:09:19.518785 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-12-13T01:09:19.518470122Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 01:09:19.518785 containerd[1461]: time="2024-12-13T01:09:19.518525636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 01:09:19.518785 containerd[1461]: time="2024-12-13T01:09:19.518552186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:09:19.518785 containerd[1461]: time="2024-12-13T01:09:19.518678644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 01:09:19.544497 systemd[1]: Started cri-containerd-d09e707d5469e90bff973a1e219588776ba10d83de1adf3536e6c4345f11493d.scope - libcontainer container d09e707d5469e90bff973a1e219588776ba10d83de1adf3536e6c4345f11493d.
Dec 13 01:09:19.554773 systemd[1]: Started sshd@6-10.0.0.54:22-10.0.0.1:56982.service - OpenSSH per-connection server daemon (10.0.0.1:56982).
Dec 13 01:09:19.558601 systemd-resolved[1326]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Dec 13 01:09:19.584570 containerd[1461]: time="2024-12-13T01:09:19.584530834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tgp6k,Uid:a17e3075-050a-47a5-a3de-58d62b9abfc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d09e707d5469e90bff973a1e219588776ba10d83de1adf3536e6c4345f11493d\""
Dec 13 01:09:19.585316 kubelet[2544]: E1213 01:09:19.585290 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:19.587432 containerd[1461]: time="2024-12-13T01:09:19.587307864Z" level=info msg="CreateContainer within sandbox \"d09e707d5469e90bff973a1e219588776ba10d83de1adf3536e6c4345f11493d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Dec 13 01:09:19.597192 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 56982 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:19.598902 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:19.602511 containerd[1461]: time="2024-12-13T01:09:19.602465472Z" level=info msg="CreateContainer within sandbox \"d09e707d5469e90bff973a1e219588776ba10d83de1adf3536e6c4345f11493d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e78399a58ac2dfda86d4fdb485eb48be3dd6041b1c00b7490f51f9551e0047c7\""
Dec 13 01:09:19.603206 systemd-logind[1452]: New session 7 of user core.
Dec 13 01:09:19.603838 containerd[1461]: time="2024-12-13T01:09:19.603805817Z" level=info msg="StartContainer for \"e78399a58ac2dfda86d4fdb485eb48be3dd6041b1c00b7490f51f9551e0047c7\""
Dec 13 01:09:19.612634 systemd[1]: Started session-7.scope - Session 7 of User core.
Dec 13 01:09:19.635545 systemd[1]: Started cri-containerd-e78399a58ac2dfda86d4fdb485eb48be3dd6041b1c00b7490f51f9551e0047c7.scope - libcontainer container e78399a58ac2dfda86d4fdb485eb48be3dd6041b1c00b7490f51f9551e0047c7.
Dec 13 01:09:19.671364 containerd[1461]: time="2024-12-13T01:09:19.668880909Z" level=info msg="StartContainer for \"e78399a58ac2dfda86d4fdb485eb48be3dd6041b1c00b7490f51f9551e0047c7\" returns successfully"
Dec 13 01:09:19.733833 sshd[3443]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:19.738190 systemd[1]: sshd@6-10.0.0.54:22-10.0.0.1:56982.service: Deactivated successfully.
Dec 13 01:09:19.740184 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 01:09:19.740855 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Dec 13 01:09:19.741787 systemd-logind[1452]: Removed session 7.
Dec 13 01:09:20.018950 systemd-networkd[1398]: cni0: Gained IPv6LL
Dec 13 01:09:20.208488 systemd-networkd[1398]: veth7f6a21df: Gained IPv6LL
Dec 13 01:09:20.363366 kubelet[2544]: E1213 01:09:20.363151 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:20.363366 kubelet[2544]: E1213 01:09:20.363185 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:20.374511 kubelet[2544]: I1213 01:09:20.374465 2544 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tgp6k" podStartSLOduration=19.374422689 podStartE2EDuration="19.374422689s" podCreationTimestamp="2024-12-13 01:09:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:09:20.37425778 +0000 UTC m=+34.206138112" watchObservedRunningTime="2024-12-13 01:09:20.374422689 +0000 UTC m=+34.206303022"
Dec 13 01:09:20.656507 systemd-networkd[1398]: vethe834b5c1: Gained IPv6LL
Dec 13 01:09:21.364806 kubelet[2544]: E1213 01:09:21.364763 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:24.744901 systemd[1]: Started sshd@7-10.0.0.54:22-10.0.0.1:56984.service - OpenSSH per-connection server daemon (10.0.0.1:56984).
Dec 13 01:09:24.783096 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 56984 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:24.784850 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:24.788591 systemd-logind[1452]: New session 8 of user core.
Dec 13 01:09:24.798473 systemd[1]: Started session-8.scope - Session 8 of User core.
Dec 13 01:09:24.923457 sshd[3528]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:24.934234 systemd[1]: sshd@7-10.0.0.54:22-10.0.0.1:56984.service: Deactivated successfully.
Dec 13 01:09:24.936165 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 01:09:24.938056 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Dec 13 01:09:24.946577 systemd[1]: Started sshd@8-10.0.0.54:22-10.0.0.1:57000.service - OpenSSH per-connection server daemon (10.0.0.1:57000).
Dec 13 01:09:24.947400 systemd-logind[1452]: Removed session 8.
Dec 13 01:09:24.980569 sshd[3544]: Accepted publickey for core from 10.0.0.1 port 57000 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:24.982084 sshd[3544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:24.986041 systemd-logind[1452]: New session 9 of user core.
Dec 13 01:09:24.993474 systemd[1]: Started session-9.scope - Session 9 of User core.
Dec 13 01:09:25.230569 sshd[3544]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:25.238222 systemd[1]: sshd@8-10.0.0.54:22-10.0.0.1:57000.service: Deactivated successfully.
Dec 13 01:09:25.240413 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 01:09:25.242651 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Dec 13 01:09:25.251707 systemd[1]: Started sshd@9-10.0.0.54:22-10.0.0.1:57010.service - OpenSSH per-connection server daemon (10.0.0.1:57010).
Dec 13 01:09:25.252644 systemd-logind[1452]: Removed session 9.
Dec 13 01:09:25.286288 sshd[3556]: Accepted publickey for core from 10.0.0.1 port 57010 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:25.287954 sshd[3556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:25.291675 systemd-logind[1452]: New session 10 of user core.
Dec 13 01:09:25.301472 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:09:25.527958 sshd[3556]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:25.532226 systemd[1]: sshd@9-10.0.0.54:22-10.0.0.1:57010.service: Deactivated successfully.
Dec 13 01:09:25.534411 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:09:25.535148 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:09:25.536083 systemd-logind[1452]: Removed session 10.
Dec 13 01:09:26.470240 kubelet[2544]: E1213 01:09:26.470188 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:27.373194 kubelet[2544]: E1213 01:09:27.373146 2544 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 01:09:30.542569 systemd[1]: Started sshd@10-10.0.0.54:22-10.0.0.1:56522.service - OpenSSH per-connection server daemon (10.0.0.1:56522).
Dec 13 01:09:30.580190 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 56522 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:30.581636 sshd[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:30.585211 systemd-logind[1452]: New session 11 of user core.
Dec 13 01:09:30.595463 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:09:30.702163 sshd[3596]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:30.713016 systemd[1]: sshd@10-10.0.0.54:22-10.0.0.1:56522.service: Deactivated successfully.
Dec 13 01:09:30.714848 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:09:30.716724 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:09:30.727595 systemd[1]: Started sshd@11-10.0.0.54:22-10.0.0.1:56534.service - OpenSSH per-connection server daemon (10.0.0.1:56534).
Dec 13 01:09:30.728509 systemd-logind[1452]: Removed session 11.
Dec 13 01:09:30.761583 sshd[3610]: Accepted publickey for core from 10.0.0.1 port 56534 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:30.763050 sshd[3610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:30.766816 systemd-logind[1452]: New session 12 of user core.
Dec 13 01:09:30.776463 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:09:30.947857 sshd[3610]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:30.957490 systemd[1]: sshd@11-10.0.0.54:22-10.0.0.1:56534.service: Deactivated successfully.
Dec 13 01:09:30.959370 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:09:30.960999 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:09:30.966550 systemd[1]: Started sshd@12-10.0.0.54:22-10.0.0.1:56542.service - OpenSSH per-connection server daemon (10.0.0.1:56542).
Dec 13 01:09:30.967487 systemd-logind[1452]: Removed session 12.
Dec 13 01:09:31.002479 sshd[3623]: Accepted publickey for core from 10.0.0.1 port 56542 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:31.004246 sshd[3623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:31.009021 systemd-logind[1452]: New session 13 of user core.
Dec 13 01:09:31.016569 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:09:32.207813 sshd[3623]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:32.214925 systemd[1]: sshd@12-10.0.0.54:22-10.0.0.1:56542.service: Deactivated successfully.
Dec 13 01:09:32.217163 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:09:32.220519 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:09:32.226912 systemd[1]: Started sshd@13-10.0.0.54:22-10.0.0.1:56546.service - OpenSSH per-connection server daemon (10.0.0.1:56546).
Dec 13 01:09:32.227875 systemd-logind[1452]: Removed session 13.
Dec 13 01:09:32.262187 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 56546 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:32.263832 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:32.268213 systemd-logind[1452]: New session 14 of user core.
Dec 13 01:09:32.275514 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:09:32.489577 sshd[3647]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:32.498613 systemd[1]: sshd@13-10.0.0.54:22-10.0.0.1:56546.service: Deactivated successfully.
Dec 13 01:09:32.500899 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:09:32.502797 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:09:32.510637 systemd[1]: Started sshd@14-10.0.0.54:22-10.0.0.1:56550.service - OpenSSH per-connection server daemon (10.0.0.1:56550).
Dec 13 01:09:32.511669 systemd-logind[1452]: Removed session 14.
Dec 13 01:09:32.547427 sshd[3659]: Accepted publickey for core from 10.0.0.1 port 56550 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:32.549415 sshd[3659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:32.553847 systemd-logind[1452]: New session 15 of user core.
Dec 13 01:09:32.559558 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:09:32.663294 sshd[3659]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:32.668062 systemd[1]: sshd@14-10.0.0.54:22-10.0.0.1:56550.service: Deactivated successfully.
Dec 13 01:09:32.670171 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:09:32.670868 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:09:32.671688 systemd-logind[1452]: Removed session 15.
Dec 13 01:09:37.675680 systemd[1]: Started sshd@15-10.0.0.54:22-10.0.0.1:56566.service - OpenSSH per-connection server daemon (10.0.0.1:56566).
Dec 13 01:09:37.715551 sshd[3715]: Accepted publickey for core from 10.0.0.1 port 56566 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:37.717234 sshd[3715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:37.721147 systemd-logind[1452]: New session 16 of user core.
Dec 13 01:09:37.726460 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:09:37.832534 sshd[3715]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:37.836646 systemd[1]: sshd@15-10.0.0.54:22-10.0.0.1:56566.service: Deactivated successfully.
Dec 13 01:09:37.838699 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:09:37.839376 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:09:37.840366 systemd-logind[1452]: Removed session 16.
Dec 13 01:09:42.844672 systemd[1]: Started sshd@16-10.0.0.54:22-10.0.0.1:41682.service - OpenSSH per-connection server daemon (10.0.0.1:41682).
Dec 13 01:09:42.885883 sshd[3754]: Accepted publickey for core from 10.0.0.1 port 41682 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:42.887622 sshd[3754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:42.892299 systemd-logind[1452]: New session 17 of user core.
Dec 13 01:09:42.900632 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:09:43.006733 sshd[3754]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:43.011392 systemd[1]: sshd@16-10.0.0.54:22-10.0.0.1:41682.service: Deactivated successfully.
Dec 13 01:09:43.013908 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:09:43.014719 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:09:43.015832 systemd-logind[1452]: Removed session 17.
Dec 13 01:09:48.022989 systemd[1]: Started sshd@17-10.0.0.54:22-10.0.0.1:44004.service - OpenSSH per-connection server daemon (10.0.0.1:44004).
Dec 13 01:09:48.061532 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 44004 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:48.063042 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:48.066973 systemd-logind[1452]: New session 18 of user core.
Dec 13 01:09:48.076472 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:09:48.182270 sshd[3791]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:48.185989 systemd[1]: sshd@17-10.0.0.54:22-10.0.0.1:44004.service: Deactivated successfully.
Dec 13 01:09:48.188032 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:09:48.188663 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:09:48.189619 systemd-logind[1452]: Removed session 18.
Dec 13 01:09:53.193249 systemd[1]: Started sshd@18-10.0.0.54:22-10.0.0.1:44008.service - OpenSSH per-connection server daemon (10.0.0.1:44008).
Dec 13 01:09:53.230692 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 44008 ssh2: RSA SHA256:DNwV47LjbUU5AUWMweQvyJx41+5RNMCo3Oh+Vcjv2YY
Dec 13 01:09:53.232485 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:09:53.236294 systemd-logind[1452]: New session 19 of user core.
Dec 13 01:09:53.242477 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:09:53.341513 sshd[3826]: pam_unix(sshd:session): session closed for user core
Dec 13 01:09:53.345015 systemd[1]: sshd@18-10.0.0.54:22-10.0.0.1:44008.service: Deactivated successfully.
Dec 13 01:09:53.346771 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:09:53.347374 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:09:53.348174 systemd-logind[1452]: Removed session 19.