Jan 29 12:00:19.938435 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:00:19.938457 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:00:19.938470 kernel: BIOS-provided physical RAM map: Jan 29 12:00:19.938477 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 12:00:19.938485 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 12:00:19.938493 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 12:00:19.938502 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 29 12:00:19.938510 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 29 12:00:19.938518 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 29 12:00:19.938529 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 29 12:00:19.938537 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 12:00:19.938544 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 12:00:19.938552 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 29 12:00:19.938560 kernel: NX (Execute Disable) protection: active Jan 29 12:00:19.938570 kernel: APIC: Static calls initialized Jan 29 12:00:19.938581 kernel: SMBIOS 2.8 present. 
Jan 29 12:00:19.938590 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 29 12:00:19.938599 kernel: Hypervisor detected: KVM Jan 29 12:00:19.938607 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 12:00:19.938615 kernel: kvm-clock: using sched offset of 2263186976 cycles Jan 29 12:00:19.938622 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 12:00:19.938629 kernel: tsc: Detected 2794.750 MHz processor Jan 29 12:00:19.938637 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:00:19.938644 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:00:19.938651 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 29 12:00:19.938661 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 12:00:19.938668 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:00:19.938675 kernel: Using GB pages for direct mapping Jan 29 12:00:19.938682 kernel: ACPI: Early table checksum verification disabled Jan 29 12:00:19.938688 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 29 12:00:19.938695 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938702 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938709 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938718 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 29 12:00:19.938725 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938732 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938739 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938746 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Jan 29 12:00:19.938752 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 29 12:00:19.938759 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 29 12:00:19.938770 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 29 12:00:19.938779 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 29 12:00:19.938786 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 29 12:00:19.938793 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 29 12:00:19.938801 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 29 12:00:19.938811 kernel: No NUMA configuration found Jan 29 12:00:19.938821 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 29 12:00:19.938831 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 29 12:00:19.938845 kernel: Zone ranges: Jan 29 12:00:19.938855 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:00:19.938865 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 29 12:00:19.938872 kernel: Normal empty Jan 29 12:00:19.938880 kernel: Movable zone start for each node Jan 29 12:00:19.938887 kernel: Early memory node ranges Jan 29 12:00:19.938894 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 12:00:19.938901 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 29 12:00:19.938908 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jan 29 12:00:19.938918 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:00:19.938925 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 12:00:19.938932 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 29 12:00:19.938940 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 12:00:19.938947 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 12:00:19.938954 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:00:19.938961 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 12:00:19.938968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 12:00:19.938975 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:00:19.938985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 12:00:19.938992 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 12:00:19.938999 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:00:19.939006 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 12:00:19.939013 kernel: TSC deadline timer available Jan 29 12:00:19.939021 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 29 12:00:19.939028 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 12:00:19.939035 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 29 12:00:19.939042 kernel: kvm-guest: setup PV sched yield Jan 29 12:00:19.939049 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 29 12:00:19.939059 kernel: Booting paravirtualized kernel on KVM Jan 29 12:00:19.939066 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:00:19.939073 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 29 12:00:19.939081 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 29 12:00:19.939088 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 29 12:00:19.939095 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 29 12:00:19.939102 kernel: kvm-guest: PV spinlocks enabled Jan 29 12:00:19.939109 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:00:19.939117 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:00:19.939128 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:00:19.939135 kernel: random: crng init done Jan 29 12:00:19.939142 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:00:19.939149 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 12:00:19.939156 kernel: Fallback order for Node 0: 0 Jan 29 12:00:19.939176 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Jan 29 12:00:19.939183 kernel: Policy zone: DMA32 Jan 29 12:00:19.939191 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:00:19.939202 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 29 12:00:19.939209 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 29 12:00:19.939216 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:00:19.939223 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:00:19.939230 kernel: Dynamic Preempt: voluntary Jan 29 12:00:19.939238 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:00:19.939260 kernel: rcu: RCU event tracing is enabled. Jan 29 12:00:19.939268 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 29 12:00:19.939275 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:00:19.939285 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:00:19.939292 kernel: Tracing variant of Tasks RCU enabled. Jan 29 12:00:19.939299 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 29 12:00:19.939307 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 29 12:00:19.939314 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 29 12:00:19.939321 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:00:19.939328 kernel: Console: colour VGA+ 80x25 Jan 29 12:00:19.939335 kernel: printk: console [ttyS0] enabled Jan 29 12:00:19.939342 kernel: ACPI: Core revision 20230628 Jan 29 12:00:19.939352 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 12:00:19.939359 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:00:19.939366 kernel: x2apic enabled Jan 29 12:00:19.939374 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 12:00:19.939381 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 29 12:00:19.939388 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 29 12:00:19.939398 kernel: kvm-guest: setup PV IPIs Jan 29 12:00:19.939417 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 12:00:19.939425 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 29 12:00:19.939433 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jan 29 12:00:19.939440 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 29 12:00:19.939447 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 29 12:00:19.939457 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 29 12:00:19.939465 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:00:19.939472 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:00:19.939480 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:00:19.939490 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:00:19.939498 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 29 12:00:19.939505 kernel: RETBleed: Mitigation: untrained return thunk Jan 29 12:00:19.939513 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 12:00:19.939520 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 12:00:19.939528 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 29 12:00:19.939536 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 29 12:00:19.939543 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 29 12:00:19.939551 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:00:19.939561 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:00:19.939568 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:00:19.939576 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:00:19.939583 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jan 29 12:00:19.939591 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:00:19.939598 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:00:19.939606 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:00:19.939613 kernel: landlock: Up and running. Jan 29 12:00:19.939620 kernel: SELinux: Initializing. Jan 29 12:00:19.939630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:00:19.939638 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 29 12:00:19.939645 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 29 12:00:19.939653 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:00:19.939661 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:00:19.939668 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 29 12:00:19.939676 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 29 12:00:19.939683 kernel: ... version: 0 Jan 29 12:00:19.939693 kernel: ... bit width: 48 Jan 29 12:00:19.939700 kernel: ... generic registers: 6 Jan 29 12:00:19.939708 kernel: ... value mask: 0000ffffffffffff Jan 29 12:00:19.939715 kernel: ... max period: 00007fffffffffff Jan 29 12:00:19.939723 kernel: ... fixed-purpose events: 0 Jan 29 12:00:19.939730 kernel: ... event mask: 000000000000003f Jan 29 12:00:19.939737 kernel: signal: max sigframe size: 1776 Jan 29 12:00:19.939745 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:00:19.939752 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:00:19.939760 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:00:19.939770 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:00:19.939777 kernel: .... 
node #0, CPUs: #1 #2 #3 Jan 29 12:00:19.939784 kernel: smp: Brought up 1 node, 4 CPUs Jan 29 12:00:19.939792 kernel: smpboot: Max logical packages: 1 Jan 29 12:00:19.939799 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jan 29 12:00:19.939807 kernel: devtmpfs: initialized Jan 29 12:00:19.939814 kernel: x86/mm: Memory block size: 128MB Jan 29 12:00:19.939822 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:00:19.939831 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 29 12:00:19.939844 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:00:19.939855 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:00:19.939865 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:00:19.939876 kernel: audit: type=2000 audit(1738152018.937:1): state=initialized audit_enabled=0 res=1 Jan 29 12:00:19.939886 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:00:19.939897 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:00:19.939906 kernel: cpuidle: using governor menu Jan 29 12:00:19.939913 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:00:19.939921 kernel: dca service started, version 1.12.1 Jan 29 12:00:19.939932 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 29 12:00:19.939939 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 29 12:00:19.939947 kernel: PCI: Using configuration type 1 for base access Jan 29 12:00:19.939955 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 12:00:19.939962 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:00:19.939970 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:00:19.939977 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:00:19.939985 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:00:19.939992 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:00:19.940002 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:00:19.940009 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:00:19.940017 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:00:19.940024 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:00:19.940032 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:00:19.940039 kernel: ACPI: Interpreter enabled Jan 29 12:00:19.940046 kernel: ACPI: PM: (supports S0 S3 S5) Jan 29 12:00:19.940054 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:00:19.940061 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:00:19.940071 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 12:00:19.940078 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 29 12:00:19.940086 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 12:00:19.940329 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 12:00:19.940466 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 29 12:00:19.940587 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 29 12:00:19.940597 kernel: PCI host bridge to bus 0000:00 Jan 29 12:00:19.940725 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 12:00:19.940864 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 29 12:00:19.941005 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Jan 29 12:00:19.941142 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 29 12:00:19.941289 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 12:00:19.941434 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 29 12:00:19.941578 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 12:00:19.941744 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 29 12:00:19.941878 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 29 12:00:19.942021 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 29 12:00:19.942194 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 29 12:00:19.942345 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 29 12:00:19.942491 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 12:00:19.942644 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 29 12:00:19.942776 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 29 12:00:19.942907 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 29 12:00:19.943036 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 29 12:00:19.943190 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 29 12:00:19.943337 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 12:00:19.943471 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 29 12:00:19.943605 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 29 12:00:19.943744 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 12:00:19.943874 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 29 12:00:19.944002 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 29 12:00:19.944130 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit 
pref] Jan 29 12:00:19.944285 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 29 12:00:19.944428 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 29 12:00:19.944565 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 29 12:00:19.944693 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 25390 usecs Jan 29 12:00:19.944837 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 29 12:00:19.944966 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 29 12:00:19.945093 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 29 12:00:19.945285 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 29 12:00:19.945415 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 29 12:00:19.945431 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 12:00:19.945440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 12:00:19.945449 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 12:00:19.945458 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 12:00:19.945467 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 29 12:00:19.945476 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 29 12:00:19.945485 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 29 12:00:19.945494 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 29 12:00:19.945506 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 29 12:00:19.945515 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 29 12:00:19.945524 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 29 12:00:19.945533 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 29 12:00:19.945542 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 29 12:00:19.945550 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 29 12:00:19.945559 kernel: 
ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 29 12:00:19.945568 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 29 12:00:19.945577 kernel: iommu: Default domain type: Translated Jan 29 12:00:19.945589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:00:19.945598 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:00:19.945607 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 12:00:19.945615 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 12:00:19.945624 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 29 12:00:19.945750 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 29 12:00:19.945877 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 29 12:00:19.946013 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 12:00:19.946028 kernel: vgaarb: loaded Jan 29 12:00:19.946042 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 12:00:19.946053 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 12:00:19.946063 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 12:00:19.946073 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:00:19.946083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:00:19.946093 kernel: pnp: PnP ACPI init Jan 29 12:00:19.946279 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 29 12:00:19.946296 kernel: pnp: PnP ACPI: found 6 devices Jan 29 12:00:19.946312 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:00:19.946322 kernel: NET: Registered PF_INET protocol family Jan 29 12:00:19.946333 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:00:19.946343 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 29 12:00:19.946352 kernel: Table-perturb hash table entries: 65536 (order: 6, 
262144 bytes, linear) Jan 29 12:00:19.946362 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 12:00:19.946373 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 29 12:00:19.946383 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 29 12:00:19.946393 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:00:19.946407 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 29 12:00:19.946417 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:00:19.946428 kernel: NET: Registered PF_XDP protocol family Jan 29 12:00:19.946572 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 12:00:19.946820 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 12:00:19.946960 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 12:00:19.947079 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 29 12:00:19.947205 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 29 12:00:19.947339 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 29 12:00:19.947350 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:00:19.947357 kernel: Initialise system trusted keyrings Jan 29 12:00:19.947365 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 29 12:00:19.947373 kernel: Key type asymmetric registered Jan 29 12:00:19.947380 kernel: Asymmetric key parser 'x509' registered Jan 29 12:00:19.947388 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:00:19.947395 kernel: io scheduler mq-deadline registered Jan 29 12:00:19.947403 kernel: io scheduler kyber registered Jan 29 12:00:19.947414 kernel: io scheduler bfq registered Jan 29 12:00:19.947422 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:00:19.947430 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 29 
12:00:19.947438 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 29 12:00:19.947445 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 29 12:00:19.947453 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:00:19.947461 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:00:19.947468 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 12:00:19.947476 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 12:00:19.947486 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 12:00:19.947494 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 12:00:19.947619 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 29 12:00:19.947734 kernel: rtc_cmos 00:04: registered as rtc0 Jan 29 12:00:19.947868 kernel: rtc_cmos 00:04: setting system clock to 2025-01-29T12:00:19 UTC (1738152019) Jan 29 12:00:19.947992 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 29 12:00:19.948002 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 29 12:00:19.948010 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:00:19.948022 kernel: Segment Routing with IPv6 Jan 29 12:00:19.948030 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:00:19.948037 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:00:19.948045 kernel: Key type dns_resolver registered Jan 29 12:00:19.948052 kernel: IPI shorthand broadcast: enabled Jan 29 12:00:19.948060 kernel: sched_clock: Marking stable (718003973, 148672601)->(888120165, -21443591) Jan 29 12:00:19.948071 kernel: registered taskstats version 1 Jan 29 12:00:19.948082 kernel: Loading compiled-in X.509 certificates Jan 29 12:00:19.948093 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:00:19.948107 kernel: Key type .fscrypt registered Jan 29 12:00:19.948117 kernel: Key type 
fscrypt-provisioning registered Jan 29 12:00:19.948124 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:00:19.948133 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:00:19.948143 kernel: ima: No architecture policies found Jan 29 12:00:19.948154 kernel: clk: Disabling unused clocks Jan 29 12:00:19.948177 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:00:19.948185 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:00:19.948193 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:00:19.948204 kernel: Run /init as init process Jan 29 12:00:19.948211 kernel: with arguments: Jan 29 12:00:19.948219 kernel: /init Jan 29 12:00:19.948226 kernel: with environment: Jan 29 12:00:19.948234 kernel: HOME=/ Jan 29 12:00:19.948241 kernel: TERM=linux Jan 29 12:00:19.948257 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:00:19.948268 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:00:19.948280 systemd[1]: Detected virtualization kvm. Jan 29 12:00:19.948289 systemd[1]: Detected architecture x86-64. Jan 29 12:00:19.948297 systemd[1]: Running in initrd. Jan 29 12:00:19.948305 systemd[1]: No hostname configured, using default hostname. Jan 29 12:00:19.948313 systemd[1]: Hostname set to . Jan 29 12:00:19.948321 systemd[1]: Initializing machine ID from VM UUID. Jan 29 12:00:19.948329 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:00:19.948337 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:00:19.948349 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Jan 29 12:00:19.948369 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:00:19.948379 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:00:19.948388 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:00:19.948397 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:00:19.948409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:00:19.948418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:00:19.948426 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:00:19.948435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:00:19.948443 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:00:19.948451 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:00:19.948459 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:00:19.948468 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:00:19.948479 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:00:19.948487 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:00:19.948496 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:00:19.948504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:00:19.948512 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:00:19.948521 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:00:19.948529 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 29 12:00:19.948540 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:00:19.948554 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:00:19.948566 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:00:19.948578 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:00:19.948589 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:00:19.948601 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:00:19.948612 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:00:19.948624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:00:19.948635 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:00:19.948645 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:00:19.948657 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:00:19.948669 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:00:19.948707 systemd-journald[191]: Collecting audit messages is disabled.
Jan 29 12:00:19.948735 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:00:19.948747 systemd-journald[191]: Journal started
Jan 29 12:00:19.948775 systemd-journald[191]: Runtime Journal (/run/log/journal/88b2888d6f33490a866f0a9cf9585a49) is 6.0M, max 48.4M, 42.3M free.
Jan 29 12:00:19.933640 systemd-modules-load[193]: Inserted module 'overlay'
Jan 29 12:00:19.977743 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:00:19.977773 kernel: Bridge firewalling registered
Jan 29 12:00:19.977795 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:00:19.963414 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jan 29 12:00:19.977962 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:00:19.979481 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:00:19.990580 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:00:19.992908 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:00:19.994357 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:00:19.998312 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:00:20.010235 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:00:20.014951 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:00:20.017336 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:00:20.020084 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:00:20.034361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:00:20.038316 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:00:20.048323 dracut-cmdline[227]: dracut-dracut-053
Jan 29 12:00:20.051320 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 29 12:00:20.092036 systemd-resolved[229]: Positive Trust Anchors:
Jan 29 12:00:20.092055 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:00:20.092100 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:00:20.097919 systemd-resolved[229]: Defaulting to hostname 'linux'.
Jan 29 12:00:20.099026 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:00:20.105284 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:00:20.141197 kernel: SCSI subsystem initialized
Jan 29 12:00:20.153194 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:00:20.167195 kernel: iscsi: registered transport (tcp)
Jan 29 12:00:20.189194 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:00:20.189221 kernel: QLogic iSCSI HBA Driver
Jan 29 12:00:20.241613 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:00:20.254302 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:00:20.278401 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:00:20.278442 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:00:20.279468 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:00:20.321193 kernel: raid6: avx2x4 gen() 30559 MB/s
Jan 29 12:00:20.338216 kernel: raid6: avx2x2 gen() 31230 MB/s
Jan 29 12:00:20.355329 kernel: raid6: avx2x1 gen() 25535 MB/s
Jan 29 12:00:20.355402 kernel: raid6: using algorithm avx2x2 gen() 31230 MB/s
Jan 29 12:00:20.373286 kernel: raid6: .... xor() 19715 MB/s, rmw enabled
Jan 29 12:00:20.373320 kernel: raid6: using avx2x2 recovery algorithm
Jan 29 12:00:20.393187 kernel: xor: automatically using best checksumming function avx
Jan 29 12:00:20.544206 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:00:20.558120 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:00:20.566457 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:00:20.577796 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Jan 29 12:00:20.582493 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:00:20.585191 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:00:20.615966 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation
Jan 29 12:00:20.648099 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:00:20.657370 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:00:20.718224 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:00:20.727335 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:00:20.737562 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:00:20.739064 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:00:20.741501 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:00:20.744113 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:00:20.758774 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 29 12:00:20.784799 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 12:00:20.784958 kernel: cryptd: max_cpu_qlen set to 1000
Jan 29 12:00:20.784970 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 29 12:00:20.784980 kernel: AES CTR mode by8 optimization enabled
Jan 29 12:00:20.784990 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:00:20.785007 kernel: GPT:9289727 != 19775487
Jan 29 12:00:20.785017 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:00:20.785027 kernel: GPT:9289727 != 19775487
Jan 29 12:00:20.785036 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:00:20.785046 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:00:20.757379 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:00:20.786398 kernel: libata version 3.00 loaded.
Jan 29 12:00:20.770966 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:00:20.790556 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:00:20.792097 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:00:20.795378 kernel: ahci 0000:00:1f.2: version 3.0
Jan 29 12:00:20.822343 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 29 12:00:20.822363 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jan 29 12:00:20.822513 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 29 12:00:20.822655 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (460)
Jan 29 12:00:20.822667 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (471)
Jan 29 12:00:20.822677 kernel: scsi host0: ahci
Jan 29 12:00:20.822843 kernel: scsi host1: ahci
Jan 29 12:00:20.822989 kernel: scsi host2: ahci
Jan 29 12:00:20.823133 kernel: scsi host3: ahci
Jan 29 12:00:20.823308 kernel: scsi host4: ahci
Jan 29 12:00:20.823459 kernel: scsi host5: ahci
Jan 29 12:00:20.823601 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jan 29 12:00:20.823617 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jan 29 12:00:20.823627 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jan 29 12:00:20.823637 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jan 29 12:00:20.823648 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jan 29 12:00:20.823658 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jan 29 12:00:20.796464 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:00:20.798967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:00:20.799131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:00:20.805331 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:00:20.811872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:00:20.834456 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:00:20.861578 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:00:20.873217 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:00:20.880592 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:00:20.885497 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:00:20.886754 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:00:20.902332 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:00:20.904425 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:00:20.912525 disk-uuid[560]: Primary Header is updated.
Jan 29 12:00:20.912525 disk-uuid[560]: Secondary Entries is updated.
Jan 29 12:00:20.912525 disk-uuid[560]: Secondary Header is updated.
Jan 29 12:00:20.917200 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:00:20.923190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:00:20.928101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:00:21.138190 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 29 12:00:21.138295 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 29 12:00:21.139209 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 29 12:00:21.140185 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 29 12:00:21.141193 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 29 12:00:21.141209 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jan 29 12:00:21.142193 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 29 12:00:21.143402 kernel: ata3.00: applying bridge limits
Jan 29 12:00:21.143417 kernel: ata3.00: configured for UDMA/100
Jan 29 12:00:21.144191 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 29 12:00:21.194199 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 29 12:00:21.206838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 29 12:00:21.206856 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jan 29 12:00:21.924213 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:00:21.924501 disk-uuid[561]: The operation has completed successfully.
Jan 29 12:00:21.951218 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:00:21.951343 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:00:21.977318 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:00:21.982477 sh[593]: Success
Jan 29 12:00:21.994234 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jan 29 12:00:22.025225 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:00:22.037700 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:00:22.040505 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:00:22.056096 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 29 12:00:22.056132 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:00:22.056147 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:00:22.056173 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:00:22.056860 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:00:22.061878 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:00:22.063077 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:00:22.068297 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:00:22.070274 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:00:22.080964 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:00:22.080989 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:00:22.080999 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:00:22.084215 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:00:22.093755 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:00:22.095501 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:00:22.106056 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:00:22.112358 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:00:22.165665 ignition[688]: Ignition 2.19.0
Jan 29 12:00:22.165676 ignition[688]: Stage: fetch-offline
Jan 29 12:00:22.165713 ignition[688]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:22.165723 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:22.165813 ignition[688]: parsed url from cmdline: ""
Jan 29 12:00:22.165817 ignition[688]: no config URL provided
Jan 29 12:00:22.165822 ignition[688]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:00:22.165831 ignition[688]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:00:22.165856 ignition[688]: op(1): [started] loading QEMU firmware config module
Jan 29 12:00:22.165862 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 12:00:22.173086 ignition[688]: op(1): [finished] loading QEMU firmware config module
Jan 29 12:00:22.186768 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:00:22.191299 ignition[688]: parsing config with SHA512: aa8274454812567b2fef4d7bc23ee37dead67b8128387e0aaafd0a41de5b071cd43ac5bfc21958b4d53ffe6d11019909aea43493aaee30411a37330d8c6a993f
Jan 29 12:00:22.197436 unknown[688]: fetched base config from "system"
Jan 29 12:00:22.197446 unknown[688]: fetched user config from "qemu"
Jan 29 12:00:22.197814 ignition[688]: fetch-offline: fetch-offline passed
Jan 29 12:00:22.198518 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:00:22.197874 ignition[688]: Ignition finished successfully
Jan 29 12:00:22.200490 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:00:22.219580 systemd-networkd[781]: lo: Link UP
Jan 29 12:00:22.219592 systemd-networkd[781]: lo: Gained carrier
Jan 29 12:00:22.221154 systemd-networkd[781]: Enumeration completed
Jan 29 12:00:22.221366 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:00:22.221754 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:00:22.221760 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:00:22.223240 systemd[1]: Reached target network.target - Network.
Jan 29 12:00:22.223550 systemd-networkd[781]: eth0: Link UP
Jan 29 12:00:22.223554 systemd-networkd[781]: eth0: Gained carrier
Jan 29 12:00:22.223561 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:00:22.225142 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 12:00:22.232315 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:00:22.239264 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:00:22.245440 ignition[784]: Ignition 2.19.0
Jan 29 12:00:22.245452 ignition[784]: Stage: kargs
Jan 29 12:00:22.245619 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:22.245630 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:22.246434 ignition[784]: kargs: kargs passed
Jan 29 12:00:22.246472 ignition[784]: Ignition finished successfully
Jan 29 12:00:22.249497 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:00:22.261304 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:00:22.278289 ignition[794]: Ignition 2.19.0
Jan 29 12:00:22.278305 ignition[794]: Stage: disks
Jan 29 12:00:22.278524 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:22.278539 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:22.279757 ignition[794]: disks: disks passed
Jan 29 12:00:22.279815 ignition[794]: Ignition finished successfully
Jan 29 12:00:22.282417 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:00:22.284343 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:00:22.286149 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:00:22.288489 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:00:22.290674 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:00:22.292647 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:00:22.306344 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:00:22.320795 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:00:22.327559 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:00:22.340389 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:00:22.434204 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 29 12:00:22.434608 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:00:22.436241 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:00:22.452302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:00:22.455157 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:00:22.455962 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:00:22.461634 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (812)
Jan 29 12:00:22.456002 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:00:22.466959 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:00:22.466979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:00:22.466993 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:00:22.456026 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:00:22.468995 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:00:22.471230 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:00:22.479530 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:00:22.481511 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:00:22.521869 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:00:22.527504 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:00:22.533005 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:00:22.537851 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:00:22.628064 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:00:22.640285 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:00:22.641965 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:00:22.649226 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:00:22.667340 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:00:22.671378 ignition[926]: INFO : Ignition 2.19.0
Jan 29 12:00:22.671378 ignition[926]: INFO : Stage: mount
Jan 29 12:00:22.673206 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:22.673206 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:22.673206 ignition[926]: INFO : mount: mount passed
Jan 29 12:00:22.673206 ignition[926]: INFO : Ignition finished successfully
Jan 29 12:00:22.679538 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:00:22.689284 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:00:23.054476 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:00:23.067515 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:00:23.076059 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (939)
Jan 29 12:00:23.076094 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 29 12:00:23.076110 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jan 29 12:00:23.076989 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:00:23.081200 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:00:23.082395 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:00:23.115850 ignition[956]: INFO : Ignition 2.19.0
Jan 29 12:00:23.115850 ignition[956]: INFO : Stage: files
Jan 29 12:00:23.117860 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:23.117860 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:23.117860 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Jan 29 12:00:23.122817 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 29 12:00:23.122817 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:00:23.128638 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:00:23.130572 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 29 12:00:23.132653 unknown[956]: wrote ssh authorized keys file for user: core
Jan 29 12:00:23.133978 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:00:23.135700 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:00:23.135700 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 29 12:00:23.135700 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:00:23.135700 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 29 12:00:23.292974 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 29 12:00:23.358527 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 29 12:00:23.361004 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 29 12:00:23.363193 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:00:23.365277 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:00:23.367501 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 29 12:00:23.369578 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:00:23.371771 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 29 12:00:23.373883 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:00:23.376138 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 29 12:00:23.378524 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:00:23.380843 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:00:23.383021 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:00:23.386200 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:00:23.389257 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:00:23.391891 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 29 12:00:23.859340 systemd-networkd[781]: eth0: Gained IPv6LL
Jan 29 12:00:23.885965 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 29 12:00:24.292510 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 29 12:00:24.292510 ignition[956]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 29 12:00:24.296613 ignition[956]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:00:24.325917 ignition[956]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:00:24.330664 ignition[956]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:00:24.332266 ignition[956]: INFO : files: files passed
Jan 29 12:00:24.332266 ignition[956]: INFO : Ignition finished successfully
Jan 29 12:00:24.343704 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:00:24.353585 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:00:24.356730 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:00:24.359709 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:00:24.360746 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:00:24.366411 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 12:00:24.370368 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:00:24.372250 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:00:24.375534 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:00:24.373409 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:00:24.375629 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:00:24.383289 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:00:24.407970 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:00:24.408097 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:00:24.410470 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:00:24.412689 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:00:24.414745 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:00:24.421314 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:00:24.433411 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:00:24.449321 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:00:24.457625 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:00:24.458915 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:00:24.461122 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:00:24.463201 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:00:24.463312 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:00:24.465450 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:00:24.467189 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:00:24.469242 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:00:24.471264 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:00:24.473296 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:00:24.475464 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:00:24.477575 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:00:24.479884 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:00:24.481862 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:00:24.484076 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:00:24.485845 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:00:24.485951 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:00:24.488132 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:00:24.489739 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:00:24.491825 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:00:24.491926 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:00:24.494004 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:00:24.494108 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:00:24.496320 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:00:24.496425 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:00:24.498440 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:00:24.500340 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:00:24.500457 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:00:24.502951 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:00:24.504759 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:00:24.506829 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:00:24.506915 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:00:24.508925 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:00:24.509010 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:00:24.511101 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:00:24.511228 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:00:24.513171 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:00:24.513273 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:00:24.523329 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:00:24.525350 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:00:24.525483 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:00:24.528312 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:00:24.529356 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:00:24.529581 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:00:24.531804 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:00:24.532034 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:00:24.538675 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:00:24.538839 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:00:24.543519 ignition[1010]: INFO : Ignition 2.19.0
Jan 29 12:00:24.543519 ignition[1010]: INFO : Stage: umount
Jan 29 12:00:24.543519 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:00:24.543519 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:00:24.543519 ignition[1010]: INFO : umount: umount passed
Jan 29 12:00:24.543519 ignition[1010]: INFO : Ignition finished successfully
Jan 29 12:00:24.545457 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:00:24.545583 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:00:24.547555 systemd[1]: Stopped target network.target - Network.
Jan 29 12:00:24.549484 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:00:24.549540 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:00:24.551350 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:00:24.551399 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:00:24.553256 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:00:24.553302 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:00:24.555420 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:00:24.555471 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:00:24.557676 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:00:24.559759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:00:24.562217 systemd-networkd[781]: eth0: DHCPv6 lease lost
Jan 29 12:00:24.562734 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:00:24.564518 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:00:24.564644 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:00:24.566338 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:00:24.566376 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:00:24.578271 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:00:24.579563 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:00:24.579629 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:00:24.582036 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:00:24.584598 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:00:24.584736 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:00:24.589017 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:00:24.589117 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:00:24.590950 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:00:24.591006 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:00:24.593376 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:00:24.593424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:00:24.598152 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:00:24.598302 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:00:24.600863 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:00:24.601026 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:00:24.603490 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:00:24.603536 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:00:24.605311 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:00:24.605347 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:00:24.607542 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:00:24.607590 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:00:24.609584 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:00:24.609630 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:00:24.611666 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:00:24.611711 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:00:24.625312 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:00:24.627190 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:00:24.627255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:00:24.629778 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:00:24.629825 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:00:24.632657 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:00:24.632757 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:00:24.741520 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:00:24.741650 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:00:24.744120 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:00:24.746443 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:00:24.746495 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:00:24.761313 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:00:24.767445 systemd[1]: Switching root.
Jan 29 12:00:24.798238 systemd-journald[191]: Journal stopped
Jan 29 12:00:25.957154 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:00:25.958970 kernel: SELinux: policy capability network_peer_controls=1
Jan 29 12:00:25.959065 kernel: SELinux: policy capability open_perms=1
Jan 29 12:00:25.959079 kernel: SELinux: policy capability extended_socket_class=1
Jan 29 12:00:25.959092 kernel: SELinux: policy capability always_check_network=0
Jan 29 12:00:25.959105 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 29 12:00:25.959129 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 29 12:00:25.959142 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 29 12:00:25.959159 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 29 12:00:25.959186 kernel: audit: type=1403 audit(1738152025.217:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:00:25.959219 systemd[1]: Successfully loaded SELinux policy in 41.094ms.
Jan 29 12:00:25.959243 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.760ms.
Jan 29 12:00:25.959260 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:00:25.959274 systemd[1]: Detected virtualization kvm.
Jan 29 12:00:25.959288 systemd[1]: Detected architecture x86-64.
Jan 29 12:00:25.959301 systemd[1]: Detected first boot.
Jan 29 12:00:25.959321 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:00:25.959335 zram_generator::config[1072]: No configuration found.
Jan 29 12:00:25.959355 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:00:25.959371 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:00:25.959385 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 12:00:25.959402 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:00:25.959417 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:00:25.959430 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:00:25.959444 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:00:25.959458 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:00:25.959472 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:00:25.959486 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:00:25.959502 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:00:25.959517 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:00:25.959531 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:00:25.959545 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:00:25.959559 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:00:25.959573 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:00:25.959587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:00:25.959601 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 29 12:00:25.959615 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:00:25.959631 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:00:25.959645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:00:25.959658 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:00:25.959672 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:00:25.959686 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:00:25.959701 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:00:25.959716 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:00:25.959730 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:00:25.959748 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:00:25.959765 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:00:25.959778 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:00:25.959792 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:00:25.959805 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:00:25.959819 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:00:25.959833 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:00:25.959847 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:00:25.959861 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:25.959901 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:00:25.959916 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:00:25.959930 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:00:25.959944 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:00:25.959958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:00:25.959972 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:00:25.959986 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:00:25.960000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:00:25.960016 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:00:25.960032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:00:25.960046 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:00:25.960060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:00:25.960075 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:00:25.960089 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 29 12:00:25.960104 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 29 12:00:25.960126 kernel: fuse: init (API version 7.39)
Jan 29 12:00:25.960142 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:00:25.960156 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:00:25.960197 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:00:25.960212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:00:25.960226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:00:25.960241 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:25.960254 kernel: ACPI: bus type drm_connector registered
Jan 29 12:00:25.960268 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:00:25.960281 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:00:25.960333 systemd-journald[1157]: Collecting audit messages is disabled.
Jan 29 12:00:25.960359 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:00:25.960373 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:00:25.960387 systemd-journald[1157]: Journal started
Jan 29 12:00:25.960413 systemd-journald[1157]: Runtime Journal (/run/log/journal/88b2888d6f33490a866f0a9cf9585a49) is 6.0M, max 48.4M, 42.3M free.
Jan 29 12:00:25.961669 kernel: loop: module loaded
Jan 29 12:00:25.961696 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:00:25.965014 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:00:25.966306 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:00:25.967701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:00:25.969390 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:00:25.971428 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:00:25.971650 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:00:25.973284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:00:25.973493 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:00:25.975057 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:00:25.975286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:00:25.976678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:00:25.976894 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:00:25.978488 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:00:25.978697 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:00:25.980225 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:00:25.980458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:00:25.982159 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:00:25.983758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:00:25.985551 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:00:26.000997 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:00:26.011267 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:00:26.013614 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:00:26.015069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:00:26.018342 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:00:26.021336 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:00:26.022669 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:00:26.025488 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:00:26.026875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:00:26.028077 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:00:26.038973 systemd-journald[1157]: Time spent on flushing to /var/log/journal/88b2888d6f33490a866f0a9cf9585a49 is 12.721ms for 939 entries.
Jan 29 12:00:26.038973 systemd-journald[1157]: System Journal (/var/log/journal/88b2888d6f33490a866f0a9cf9585a49) is 8.0M, max 195.6M, 187.6M free.
Jan 29 12:00:26.318384 systemd-journald[1157]: Received client request to flush runtime journal.
Jan 29 12:00:26.033278 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:00:26.038702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:00:26.041317 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:00:26.058696 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:00:26.070572 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:00:26.080884 udevadm[1216]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 12:00:26.092603 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:00:26.098035 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 12:00:26.098049 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Jan 29 12:00:26.104658 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:00:26.119365 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:00:26.153843 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:00:26.170349 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:00:26.172062 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:00:26.173347 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:00:26.187516 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 12:00:26.187531 systemd-tmpfiles[1224]: ACLs are not supported, ignoring.
Jan 29 12:00:26.192980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:00:26.320747 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:00:26.837345 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:00:26.847530 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:00:26.879671 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Jan 29 12:00:26.898851 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:00:26.909419 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:00:26.925394 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:00:26.939894 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 29 12:00:26.956235 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1246)
Jan 29 12:00:26.994782 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 12:00:27.014617 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 29 12:00:27.022196 kernel: ACPI: button: Power Button [PWRF]
Jan 29 12:00:27.026081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:00:27.038206 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 29 12:00:27.042396 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 29 12:00:27.042614 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 29 12:00:27.042814 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 29 12:00:27.063203 kernel: mousedev: PS/2 mouse device common for all mice
Jan 29 12:00:27.075465 systemd-networkd[1242]: lo: Link UP
Jan 29 12:00:27.075478 systemd-networkd[1242]: lo: Gained carrier
Jan 29 12:00:27.077253 systemd-networkd[1242]: Enumeration completed
Jan 29 12:00:27.077395 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:00:27.077663 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:00:27.077674 systemd-networkd[1242]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:00:27.078569 systemd-networkd[1242]: eth0: Link UP
Jan 29 12:00:27.078580 systemd-networkd[1242]: eth0: Gained carrier
Jan 29 12:00:27.078591 systemd-networkd[1242]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:00:27.096416 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 12:00:27.101415 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:00:27.140369 systemd-networkd[1242]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:00:27.166799 kernel: kvm_amd: TSC scaling supported
Jan 29 12:00:27.166872 kernel: kvm_amd: Nested Virtualization enabled
Jan 29 12:00:27.166905 kernel: kvm_amd: Nested Paging enabled
Jan 29 12:00:27.167349 kernel: kvm_amd: LBR virtualization supported
Jan 29 12:00:27.168643 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 29 12:00:27.168657 kernel: kvm_amd: Virtual GIF supported
Jan 29 12:00:27.199280 kernel: EDAC MC: Ver: 3.0.0
Jan 29 12:00:27.234760 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 12:00:27.247041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:00:27.263375 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 12:00:27.271834 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:00:27.315501 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 12:00:27.324309 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:00:27.345502 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 12:00:27.353689 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:00:27.390130 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 12:00:27.391813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:00:27.393371 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:00:27.393405 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:00:27.394624 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:00:27.396913 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:00:27.413391 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:00:27.418215 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:00:27.419541 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:00:27.420443 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:00:27.423425 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:00:27.427238 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:00:27.429448 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:00:27.440725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:00:27.465240 kernel: loop0: detected capacity change from 0 to 142488
Jan 29 12:00:27.489188 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:00:27.515447 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:00:27.516312 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:00:27.522198 kernel: loop1: detected capacity change from 0 to 140768
Jan 29 12:00:27.550215 kernel: loop2: detected capacity change from 0 to 210664
Jan 29 12:00:27.589205 kernel: loop3: detected capacity change from 0 to 142488
Jan 29 12:00:27.601218 kernel: loop4: detected capacity change from 0 to 140768
Jan 29 12:00:27.613192 kernel: loop5: detected capacity change from 0 to 210664
Jan 29 12:00:27.618483 (sd-merge)[1307]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 12:00:27.619239 (sd-merge)[1307]: Merged extensions into '/usr'.
Jan 29 12:00:27.623848 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:00:27.624022 systemd[1]: Reloading...
Jan 29 12:00:27.679441 zram_generator::config[1333]: No configuration found.
Jan 29 12:00:27.738202 ldconfig[1291]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:00:27.824532 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:00:27.899296 systemd[1]: Reloading finished in 274 ms.
Jan 29 12:00:27.918443 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:00:27.920012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:00:27.932335 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:00:27.934397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:00:27.940216 systemd[1]: Reloading requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:00:27.940231 systemd[1]: Reloading...
Jan 29 12:00:27.959608 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:00:27.960309 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:00:27.961690 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:00:27.961992 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 29 12:00:27.962092 systemd-tmpfiles[1380]: ACLs are not supported, ignoring.
Jan 29 12:00:27.970854 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:00:27.970873 systemd-tmpfiles[1380]: Skipping /boot
Jan 29 12:00:27.983391 zram_generator::config[1409]: No configuration found.
Jan 29 12:00:27.984981 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:00:27.985001 systemd-tmpfiles[1380]: Skipping /boot
Jan 29 12:00:28.100046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:00:28.164425 systemd[1]: Reloading finished in 223 ms.
Jan 29 12:00:28.182759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:00:28.200957 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:00:28.203787 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:00:28.206916 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:00:28.211350 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:00:28.218649 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:00:28.224485 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.224654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:00:28.225947 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:00:28.230421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:00:28.234409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:00:28.236955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:00:28.237085 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.238692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:00:28.238947 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:00:28.243314 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:00:28.243593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:00:28.246744 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:00:28.247187 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:00:28.249489 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 12:00:28.258011 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.258352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:00:28.266443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:00:28.266555 augenrules[1489]: No rules
Jan 29 12:00:28.270748 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:00:28.274406 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:00:28.275716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:00:28.278392 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 12:00:28.280334 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.282969 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:00:28.286127 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:00:28.287986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:00:28.288349 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:00:28.289964 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:00:28.290198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:00:28.291891 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 12:00:28.293696 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:00:28.293909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:00:28.303418 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.303691 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:00:28.309437 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:00:28.311843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:00:28.313925 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:00:28.318419 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:00:28.319595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:00:28.319780 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:00:28.319911 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 29 12:00:28.322565 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 12:00:28.323086 systemd-resolved[1459]: Positive Trust Anchors:
Jan 29 12:00:28.323424 systemd-resolved[1459]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:00:28.323517 systemd-resolved[1459]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:00:28.324237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:00:28.324447 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:00:28.326232 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:00:28.326433 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:00:28.327992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:00:28.328214 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:00:28.330304 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:00:28.330591 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:00:28.330867 systemd-resolved[1459]: Defaulting to hostname 'linux'.
Jan 29 12:00:28.333996 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:00:28.335889 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:00:28.340909 systemd[1]: Reached target network.target - Network.
Jan 29 12:00:28.341833 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:00:28.343009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:00:28.343082 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:00:28.353323 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 12:00:28.414871 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 12:00:28.416461 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:00:28.417610 systemd-timesyncd[1526]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 12:00:28.417639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 12:00:28.417651 systemd-timesyncd[1526]: Initial clock synchronization to Wed 2025-01-29 12:00:28.616375 UTC.
Jan 29 12:00:28.418887 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 12:00:28.420186 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 12:00:28.421523 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 12:00:28.421565 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:00:28.422457 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 12:00:28.423614 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 12:00:28.424852 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 12:00:28.426092 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:00:28.427636 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 12:00:28.430728 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 12:00:28.433146 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 12:00:28.443281 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 12:00:28.444441 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:00:28.445441 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:00:28.446552 systemd[1]: System is tainted: cgroupsv1
Jan 29 12:00:28.446589 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:00:28.446613 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:00:28.447877 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 12:00:28.450037 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 12:00:28.452078 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 12:00:28.454310 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 12:00:28.454735 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 12:00:28.458324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 12:00:28.462532 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 29 12:00:28.466121 jq[1532]: false
Jan 29 12:00:28.467305 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 12:00:28.485674 extend-filesystems[1533]: Found loop3
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found loop4
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found loop5
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found sr0
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda1
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda2
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda3
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found usr
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda4
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda6
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda7
Jan 29 12:00:28.489111 extend-filesystems[1533]: Found vda9
Jan 29 12:00:28.489111 extend-filesystems[1533]: Checking size of /dev/vda9
Jan 29 12:00:28.488594 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 12:00:28.490331 dbus-daemon[1531]: [system] SELinux support is enabled
Jan 29 12:00:28.524062 extend-filesystems[1533]: Resized partition /dev/vda9
Jan 29 12:00:28.496348 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 12:00:28.526159 extend-filesystems[1557]: resize2fs 1.47.1 (20-May-2024)
Jan 29 12:00:28.529544 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 12:00:28.498004 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 12:00:28.500860 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 12:00:28.503495 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 12:00:28.530064 jq[1558]: true
Jan 29 12:00:28.505950 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 12:00:28.520687 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 12:00:28.521074 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 12:00:28.521485 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 12:00:28.521846 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 12:00:28.525469 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 12:00:28.528414 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 12:00:28.538782 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1255)
Jan 29 12:00:28.552534 update_engine[1556]: I20250129 12:00:28.552451 1556 main.cc:92] Flatcar Update Engine starting
Jan 29 12:00:28.559390 update_engine[1556]: I20250129 12:00:28.554630 1556 update_check_scheduler.cc:74] Next update check in 8m52s
Jan 29 12:00:28.558551 (ntainerd)[1565]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 12:00:28.568757 jq[1564]: true
Jan 29 12:00:28.576200 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 12:00:28.585924 tar[1561]: linux-amd64/helm
Jan 29 12:00:28.592740 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 12:00:28.596339 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 12:00:28.596365 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 12:00:28.598304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 12:00:28.598321 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 12:00:28.600553 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 12:00:28.602931 systemd-logind[1551]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 29 12:00:28.602950 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 29 12:00:28.604619 systemd-logind[1551]: New seat seat0.
Jan 29 12:00:28.609451 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 12:00:28.611899 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 12:00:28.614681 extend-filesystems[1557]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 12:00:28.614681 extend-filesystems[1557]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 12:00:28.614681 extend-filesystems[1557]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 12:00:28.624879 extend-filesystems[1533]: Resized filesystem in /dev/vda9
Jan 29 12:00:28.625077 bash[1591]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 12:00:28.617790 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 12:00:28.618207 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 12:00:28.624087 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 12:00:28.633609 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 12:00:28.636185 sshd_keygen[1566]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 12:00:28.666915 locksmithd[1592]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 12:00:28.667031 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 12:00:28.723430 systemd-networkd[1242]: eth0: Gained IPv6LL
Jan 29 12:00:28.783526 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 12:00:28.786058 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 12:00:28.789513 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 12:00:28.793247 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 12:00:28.798429 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:00:28.803137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 12:00:28.806021 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 12:00:28.806382 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 12:00:28.831721 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 12:00:28.876062 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 12:00:28.900622 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 12:00:28.900987 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 12:00:28.903840 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 12:00:28.914608 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 12:00:28.918963 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 12:00:28.929458 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 29 12:00:28.930787 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 12:00:29.037667 containerd[1565]: time="2025-01-29T12:00:29.037539778Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 12:00:29.067022 containerd[1565]: time="2025-01-29T12:00:29.066963701Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.070842 containerd[1565]: time="2025-01-29T12:00:29.070455433Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:00:29.070842 containerd[1565]: time="2025-01-29T12:00:29.070513672Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 12:00:29.070842 containerd[1565]: time="2025-01-29T12:00:29.070541179Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 12:00:29.070842 containerd[1565]: time="2025-01-29T12:00:29.070779229Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 12:00:29.070842 containerd[1565]: time="2025-01-29T12:00:29.070800339Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071012 containerd[1565]: time="2025-01-29T12:00:29.070887071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071012 containerd[1565]: time="2025-01-29T12:00:29.070902637Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071305 containerd[1565]: time="2025-01-29T12:00:29.071277823Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071330 containerd[1565]: time="2025-01-29T12:00:29.071302845Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071330 containerd[1565]: time="2025-01-29T12:00:29.071321707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071373 containerd[1565]: time="2025-01-29T12:00:29.071336575Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071473 containerd[1565]: time="2025-01-29T12:00:29.071455189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.071804 containerd[1565]: time="2025-01-29T12:00:29.071770247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:00:29.072018 containerd[1565]: time="2025-01-29T12:00:29.071993808Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:00:29.072040 containerd[1565]: time="2025-01-29T12:00:29.072021448Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 12:00:29.072160 containerd[1565]: time="2025-01-29T12:00:29.072137967Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 12:00:29.072246 containerd[1565]: time="2025-01-29T12:00:29.072226661Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 12:00:29.078956 containerd[1565]: time="2025-01-29T12:00:29.078920183Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 12:00:29.079006 containerd[1565]: time="2025-01-29T12:00:29.078974694Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 12:00:29.079006 containerd[1565]: time="2025-01-29T12:00:29.078991564Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 12:00:29.079042 containerd[1565]: time="2025-01-29T12:00:29.079013948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 12:00:29.079042 containerd[1565]: time="2025-01-29T12:00:29.079028046Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 12:00:29.079233 containerd[1565]: time="2025-01-29T12:00:29.079210526Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 12:00:29.081387 containerd[1565]: time="2025-01-29T12:00:29.081357545Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081519171Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081543259Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081561494Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081579350Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081596343Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081613275Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081632938Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081652826Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081669408Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081682561Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081693733Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081714782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081727945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.081752 containerd[1565]: time="2025-01-29T12:00:29.081740637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081753225Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081765146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081785815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081801925Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081814842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081828129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081842812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081854271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081867403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081879704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081899059Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081922500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081935366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082062 containerd[1565]: time="2025-01-29T12:00:29.081951250Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.081998092Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082014335Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082039276Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082054401Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082065356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082097391Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082107690Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 12:00:29.082325 containerd[1565]: time="2025-01-29T12:00:29.082119940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 12:00:29.082519 containerd[1565]: time="2025-01-29T12:00:29.082447790Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:00:29.082519 containerd[1565]: time="2025-01-29T12:00:29.082512376Z" level=info msg="Connect containerd service" Jan 29 12:00:29.082713 containerd[1565]: time="2025-01-29T12:00:29.082644358Z" level=info msg="using legacy CRI server" Jan 29 12:00:29.082713 containerd[1565]: time="2025-01-29T12:00:29.082707782Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:00:29.083016 containerd[1565]: time="2025-01-29T12:00:29.082946139Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:00:29.083823 containerd[1565]: time="2025-01-29T12:00:29.083785665Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:00:29.084210 containerd[1565]: time="2025-01-29T12:00:29.083969018Z" level=info msg="Start subscribing containerd event" Jan 29 12:00:29.084210 containerd[1565]: time="2025-01-29T12:00:29.084036950Z" level=info msg="Start recovering state" Jan 29 12:00:29.084210 containerd[1565]: time="2025-01-29T12:00:29.084135490Z" level=info msg="Start event monitor" Jan 29 12:00:29.084310 containerd[1565]: time="2025-01-29T12:00:29.084296776Z" 
level=info msg="Start snapshots syncer" Jan 29 12:00:29.084389 containerd[1565]: time="2025-01-29T12:00:29.084375777Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:00:29.084461 containerd[1565]: time="2025-01-29T12:00:29.084414836Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:00:29.084547 containerd[1565]: time="2025-01-29T12:00:29.084518993Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:00:29.084573 containerd[1565]: time="2025-01-29T12:00:29.084424159Z" level=info msg="Start streaming server" Jan 29 12:00:29.084686 containerd[1565]: time="2025-01-29T12:00:29.084656304Z" level=info msg="containerd successfully booted in 0.053040s" Jan 29 12:00:29.084815 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:00:29.226617 tar[1561]: linux-amd64/LICENSE Jan 29 12:00:29.226617 tar[1561]: linux-amd64/README.md Jan 29 12:00:29.249689 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:00:29.704007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:29.705721 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:00:29.707198 systemd[1]: Startup finished in 6.369s (kernel) + 4.528s (userspace) = 10.898s. 
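A quick sanity check on the startup figures reported above: systemd prints 6.369 s (kernel) + 4.528 s (userspace) = 10.898 s, and the one-millisecond mismatch in the sum is expected because each component is rounded to milliseconds independently. A minimal arithmetic check (numbers taken directly from the log line):

```python
# Reported systemd startup times (seconds) from the log entry above.
kernel_s, userspace_s, total_s = 6.369, 4.528, 10.898

# The printed total need not equal the printed components exactly,
# since each figure is independently rounded to the millisecond.
diff_ms = abs((kernel_s + userspace_s) - total_s) * 1000
print(f"component sum differs from total by {diff_ms:.1f} ms")
```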
Jan 29 12:00:29.728734 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:30.257492 kubelet[1666]: E0129 12:00:30.257378 1666 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:30.262265 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:30.262530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:37.493725 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:00:37.500387 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:60128.service - OpenSSH per-connection server daemon (10.0.0.1:60128). Jan 29 12:00:37.536782 sshd[1680]: Accepted publickey for core from 10.0.0.1 port 60128 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:00:37.538850 sshd[1680]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:37.549955 systemd-logind[1551]: New session 1 of user core. Jan 29 12:00:37.551350 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:00:37.566417 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:00:37.580781 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:00:37.588652 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:00:37.592921 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:00:37.726517 systemd[1686]: Queued start job for default target default.target. 
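The kubelet failure above is a missing-config crash: the unit exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet (on a kubeadm-managed node that file is normally written by `kubeadm init` or `kubeadm join`, which have not run at this point in the boot). A small sketch of pulling the failing path out of such a log entry — the regex is my own and not part of any tool, and the log string is abridged from the line above:

```python
import re

# Abridged copy of the kubelet error entry from the log above.
log = ('kubelet[1666]: E0129 12:00:30.257378 1666 run.go:74] "command failed" '
       'err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, '
       'error: open /var/lib/kubelet/config.yaml: no such file or directory"')

# Grab the first "path: <value>," field; lazy match stops at the comma.
m = re.search(r'path: (\S+?),', log)
print(m.group(1))  # /var/lib/kubelet/config.yaml
```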
Jan 29 12:00:37.726978 systemd[1686]: Created slice app.slice - User Application Slice. Jan 29 12:00:37.727007 systemd[1686]: Reached target paths.target - Paths. Jan 29 12:00:37.727023 systemd[1686]: Reached target timers.target - Timers. Jan 29 12:00:37.738279 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 12:00:37.745777 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:00:37.745861 systemd[1686]: Reached target sockets.target - Sockets. Jan 29 12:00:37.745884 systemd[1686]: Reached target basic.target - Basic System. Jan 29 12:00:37.745930 systemd[1686]: Reached target default.target - Main User Target. Jan 29 12:00:37.745970 systemd[1686]: Startup finished in 145ms. Jan 29 12:00:37.746792 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:00:37.748637 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:00:37.807443 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:60138.service - OpenSSH per-connection server daemon (10.0.0.1:60138). Jan 29 12:00:37.838443 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 60138 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:00:37.840467 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:37.845747 systemd-logind[1551]: New session 2 of user core. Jan 29 12:00:37.855710 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:00:37.909901 sshd[1698]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:37.922515 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:60140.service - OpenSSH per-connection server daemon (10.0.0.1:60140). Jan 29 12:00:37.923013 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:60138.service: Deactivated successfully. Jan 29 12:00:37.925351 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Jan 29 12:00:37.927251 systemd[1]: session-2.scope: Deactivated successfully. 
Jan 29 12:00:37.928408 systemd-logind[1551]: Removed session 2. Jan 29 12:00:37.954790 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 60140 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:00:37.956815 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:37.961431 systemd-logind[1551]: New session 3 of user core. Jan 29 12:00:37.971437 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:00:38.021991 sshd[1703]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:38.031462 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:60150.service - OpenSSH per-connection server daemon (10.0.0.1:60150). Jan 29 12:00:38.032053 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:60140.service: Deactivated successfully. Jan 29 12:00:38.034636 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Jan 29 12:00:38.035910 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 12:00:38.036981 systemd-logind[1551]: Removed session 3. Jan 29 12:00:38.059220 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 60150 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:00:38.060914 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:38.065378 systemd-logind[1551]: New session 4 of user core. Jan 29 12:00:38.075436 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:00:38.131091 sshd[1711]: pam_unix(sshd:session): session closed for user core Jan 29 12:00:38.139456 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:60160.service - OpenSSH per-connection server daemon (10.0.0.1:60160). Jan 29 12:00:38.139942 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:60150.service: Deactivated successfully. Jan 29 12:00:38.142420 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:00:38.143236 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 29 12:00:38.144541 systemd-logind[1551]: Removed session 4. Jan 29 12:00:38.169974 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 60160 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:00:38.171681 sshd[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:00:38.176162 systemd-logind[1551]: New session 5 of user core. Jan 29 12:00:38.184417 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:00:38.243885 sudo[1726]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:00:38.244305 sudo[1726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:00:38.546391 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:00:38.546637 (dockerd)[1744]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:00:38.851122 dockerd[1744]: time="2025-01-29T12:00:38.850959994Z" level=info msg="Starting up" Jan 29 12:00:40.265134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:00:40.276363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:40.402020 dockerd[1744]: time="2025-01-29T12:00:40.401701254Z" level=info msg="Loading containers: start." Jan 29 12:00:40.418544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:00:40.422839 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:40.571301 kubelet[1779]: E0129 12:00:40.571111 1779 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:40.578058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:40.578372 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:00:40.668206 kernel: Initializing XFRM netlink socket Jan 29 12:00:40.744782 systemd-networkd[1242]: docker0: Link UP Jan 29 12:00:40.769598 dockerd[1744]: time="2025-01-29T12:00:40.769565155Z" level=info msg="Loading containers: done." Jan 29 12:00:40.786058 dockerd[1744]: time="2025-01-29T12:00:40.786011721Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:00:40.786238 dockerd[1744]: time="2025-01-29T12:00:40.786124161Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:00:40.786311 dockerd[1744]: time="2025-01-29T12:00:40.786283796Z" level=info msg="Daemon has completed initialization" Jan 29 12:00:40.820667 dockerd[1744]: time="2025-01-29T12:00:40.820610305Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:00:40.820808 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 29 12:00:41.523147 containerd[1565]: time="2025-01-29T12:00:41.523093652Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:00:42.133573 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1833741111.mount: Deactivated successfully. Jan 29 12:00:43.106512 containerd[1565]: time="2025-01-29T12:00:43.106441360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:43.107232 containerd[1565]: time="2025-01-29T12:00:43.107185461Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 29 12:00:43.108406 containerd[1565]: time="2025-01-29T12:00:43.108367684Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:43.111305 containerd[1565]: time="2025-01-29T12:00:43.111270911Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:43.112132 containerd[1565]: time="2025-01-29T12:00:43.112097552Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.588944599s" Jan 29 12:00:43.112165 containerd[1565]: time="2025-01-29T12:00:43.112137107Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:00:43.133924 containerd[1565]: 
time="2025-01-29T12:00:43.133884416Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:00:44.576474 containerd[1565]: time="2025-01-29T12:00:44.576393341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:44.577267 containerd[1565]: time="2025-01-29T12:00:44.577207685Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 29 12:00:44.578496 containerd[1565]: time="2025-01-29T12:00:44.578457125Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:44.581409 containerd[1565]: time="2025-01-29T12:00:44.581360576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:44.582603 containerd[1565]: time="2025-01-29T12:00:44.582566471Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.448636512s" Jan 29 12:00:44.582655 containerd[1565]: time="2025-01-29T12:00:44.582605263Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:00:44.605301 containerd[1565]: time="2025-01-29T12:00:44.605250797Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 
12:00:45.699785 containerd[1565]: time="2025-01-29T12:00:45.699704675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.700587 containerd[1565]: time="2025-01-29T12:00:45.700538042Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 29 12:00:45.702282 containerd[1565]: time="2025-01-29T12:00:45.702249107Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.705487 containerd[1565]: time="2025-01-29T12:00:45.705437538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:45.706844 containerd[1565]: time="2025-01-29T12:00:45.706790375Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.101490738s" Jan 29 12:00:45.706921 containerd[1565]: time="2025-01-29T12:00:45.706843902Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:00:45.730308 containerd[1565]: time="2025-01-29T12:00:45.730269631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:00:48.108868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3829012424.mount: Deactivated successfully. 
Jan 29 12:00:48.812970 containerd[1565]: time="2025-01-29T12:00:48.812866247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:48.813897 containerd[1565]: time="2025-01-29T12:00:48.813854836Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 29 12:00:48.815176 containerd[1565]: time="2025-01-29T12:00:48.815123621Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:48.817990 containerd[1565]: time="2025-01-29T12:00:48.817955717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:48.818634 containerd[1565]: time="2025-01-29T12:00:48.818602928Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 3.088294338s" Jan 29 12:00:48.818634 containerd[1565]: time="2025-01-29T12:00:48.818631388Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:00:48.844882 containerd[1565]: time="2025-01-29T12:00:48.844834426Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:00:49.436972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1071376055.mount: Deactivated successfully. 
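The pull entries above report both a size and a wall-clock duration, so effective pull throughput can be derived directly. A minimal sketch using the kube-proxy:v1.30.9 figures from the log (29057356 bytes in 3.088294338 s); the MiB/s conversion is my own framing, not something containerd reports:

```python
# Figures reported by containerd for the kube-proxy:v1.30.9 pull above.
size_bytes = 29_057_356      # repo size from the "Pulled image" entry
duration_s = 3.088294338     # pull duration from the same entry

# Effective throughput in MiB/s (binary megabytes).
mib_per_s = size_bytes / duration_s / (1024 * 1024)
print(f"{mib_per_s:.2f} MiB/s")
```

The same calculation applies to the other pulls in this log (e.g. etcd: 57236178 bytes in 2.956027428 s), which is a quick way to spot registry or network slowdowns across images.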
Jan 29 12:00:50.828705 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:00:50.838321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:50.986920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:50.991858 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:00:51.033551 kubelet[2075]: E0129 12:00:51.033442 2075 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:00:51.038327 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:00:51.038642 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
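The "Scheduled restart job, restart counter is at 2" entry shows systemd re-queuing kubelet roughly ten seconds after each exit, which is consistent with a `Restart=` policy with `RestartSec=10` in the unit file — an inference from the timestamps, not something stated in the log. A small sketch computing the gap from the two entries above:

```python
from datetime import datetime

# Timestamps copied from the log: kubelet's main process exit (counter 1 failure)
# and the subsequent scheduled restart (counter 2).
fail = datetime.strptime("12:00:40.578058", "%H:%M:%S.%f")
restart = datetime.strptime("12:00:50.828705", "%H:%M:%S.%f")

gap = (restart - fail).total_seconds()
print(f"restart scheduled {gap:.2f}s after exit")
```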
Jan 29 12:00:52.083752 containerd[1565]: time="2025-01-29T12:00:52.083678861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:52.126611 containerd[1565]: time="2025-01-29T12:00:52.126537193Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 29 12:00:52.177326 containerd[1565]: time="2025-01-29T12:00:52.177256063Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:52.234364 containerd[1565]: time="2025-01-29T12:00:52.234304945Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:52.235861 containerd[1565]: time="2025-01-29T12:00:52.235810387Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.390923561s" Jan 29 12:00:52.235926 containerd[1565]: time="2025-01-29T12:00:52.235861773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:00:52.257349 containerd[1565]: time="2025-01-29T12:00:52.257304854Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:00:53.068056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514499633.mount: Deactivated successfully. 
Jan 29 12:00:53.073619 containerd[1565]: time="2025-01-29T12:00:53.073571499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.074459 containerd[1565]: time="2025-01-29T12:00:53.074365838Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 29 12:00:53.075723 containerd[1565]: time="2025-01-29T12:00:53.075675118Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.078171 containerd[1565]: time="2025-01-29T12:00:53.078143258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:53.078939 containerd[1565]: time="2025-01-29T12:00:53.078905765Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 821.555314ms" Jan 29 12:00:53.079018 containerd[1565]: time="2025-01-29T12:00:53.078943553Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:00:53.109572 containerd[1565]: time="2025-01-29T12:00:53.109510309Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:00:54.584798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203689265.mount: Deactivated successfully. 
Jan 29 12:00:56.059188 containerd[1565]: time="2025-01-29T12:00:56.059097470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:56.059889 containerd[1565]: time="2025-01-29T12:00:56.059832854Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 29 12:00:56.061072 containerd[1565]: time="2025-01-29T12:00:56.061036702Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:56.063999 containerd[1565]: time="2025-01-29T12:00:56.063957567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:00:56.065684 containerd[1565]: time="2025-01-29T12:00:56.065601657Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.956027428s" Jan 29 12:00:56.065684 containerd[1565]: time="2025-01-29T12:00:56.065682794Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:00:58.638311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:58.658403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:58.675829 systemd[1]: Reloading requested from client PID 2230 ('systemctl') (unit session-5.scope)... Jan 29 12:00:58.675851 systemd[1]: Reloading... 
Jan 29 12:00:58.745246 zram_generator::config[2272]: No configuration found. Jan 29 12:00:58.936933 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:00:59.013471 systemd[1]: Reloading finished in 337 ms. Jan 29 12:00:59.067491 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:59.070518 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:00:59.070875 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:59.072628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:00:59.220311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:00:59.226724 (kubelet)[2332]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:00:59.720686 kubelet[2332]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:00:59.720686 kubelet[2332]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:00:59.720686 kubelet[2332]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
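The deprecation warnings above say `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file passed via `--config`. A hedged sketch of the equivalent KubeletConfiguration fields — the field names follow the upstream kubelet config API for this kubelet version, but the values shown here are illustrative placeholders, not taken from this node's actual flags:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Replaces --container-runtime-endpoint (value assumed: containerd's default socket).
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# Replaces --volume-plugin-dir (value assumed, adjust to the node's layout).
volumePluginDir: /var/lib/kubelet/volumeplugins
```

The third warning, for `--pod-infra-container-image`, notes that the image garbage collector will instead take the sandbox image from the CRI runtime, so that setting belongs in the container runtime's configuration rather than the kubelet's.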
Jan 29 12:00:59.721845 kubelet[2332]: I0129 12:00:59.721806 2332 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:01:00.048355 kubelet[2332]: I0129 12:01:00.048226 2332 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:01:00.048355 kubelet[2332]: I0129 12:01:00.048275 2332 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:01:00.048585 kubelet[2332]: I0129 12:01:00.048557 2332 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:01:00.065495 kubelet[2332]: I0129 12:01:00.065439 2332 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:01:00.066184 kubelet[2332]: E0129 12:01:00.066099 2332 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.076349 kubelet[2332]: I0129 12:01:00.076287 2332 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:01:00.078350 kubelet[2332]: I0129 12:01:00.078297 2332 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:01:00.078577 kubelet[2332]: I0129 12:01:00.078336 2332 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:01:00.079037 kubelet[2332]: I0129 12:01:00.078998 2332 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 
12:01:00.079037 kubelet[2332]: I0129 12:01:00.079020 2332 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:01:00.079224 kubelet[2332]: I0129 12:01:00.079203 2332 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:00.079844 kubelet[2332]: I0129 12:01:00.079813 2332 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:01:00.079844 kubelet[2332]: I0129 12:01:00.079831 2332 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:01:00.079904 kubelet[2332]: I0129 12:01:00.079857 2332 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:01:00.079904 kubelet[2332]: I0129 12:01:00.079877 2332 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:01:00.081464 kubelet[2332]: W0129 12:01:00.081216 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.081464 kubelet[2332]: E0129 12:01:00.081300 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.082426 kubelet[2332]: W0129 12:01:00.082387 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.082496 kubelet[2332]: E0129 12:01:00.082457 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: 
connection refused Jan 29 12:01:00.084108 kubelet[2332]: I0129 12:01:00.084082 2332 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:01:00.085321 kubelet[2332]: I0129 12:01:00.085299 2332 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:01:00.085392 kubelet[2332]: W0129 12:01:00.085369 2332 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 12:01:00.086179 kubelet[2332]: I0129 12:01:00.086018 2332 server.go:1264] "Started kubelet" Jan 29 12:01:00.087075 kubelet[2332]: I0129 12:01:00.086938 2332 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:01:00.087343 kubelet[2332]: I0129 12:01:00.087311 2332 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:01:00.087391 kubelet[2332]: I0129 12:01:00.087382 2332 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:01:00.087444 kubelet[2332]: I0129 12:01:00.087418 2332 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:01:00.088330 kubelet[2332]: I0129 12:01:00.088307 2332 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:01:00.095720 kubelet[2332]: I0129 12:01:00.095693 2332 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:01:00.096390 kubelet[2332]: I0129 12:01:00.096061 2332 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:01:00.096390 kubelet[2332]: I0129 12:01:00.096156 2332 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:01:00.096561 kubelet[2332]: E0129 12:01:00.096500 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 12:01:00.096881 kubelet[2332]: W0129 12:01:00.096812 
2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.096881 kubelet[2332]: E0129 12:01:00.096866 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.097511 kubelet[2332]: E0129 12:01:00.097471 2332 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:01:00.097622 kubelet[2332]: E0129 12:01:00.097590 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms" Jan 29 12:01:00.098039 kubelet[2332]: E0129 12:01:00.097922 2332 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f2816eb0a012e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:01:00.085993774 +0000 UTC m=+0.854422255,LastTimestamp:2025-01-29 12:01:00.085993774 +0000 UTC m=+0.854422255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 
29 12:01:00.099491 kubelet[2332]: I0129 12:01:00.098028 2332 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:01:00.099491 kubelet[2332]: I0129 12:01:00.098222 2332 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:01:00.099491 kubelet[2332]: I0129 12:01:00.099085 2332 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:01:00.112067 kubelet[2332]: I0129 12:01:00.111985 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:01:00.113641 kubelet[2332]: I0129 12:01:00.113613 2332 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:01:00.114179 kubelet[2332]: I0129 12:01:00.113745 2332 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:01:00.114179 kubelet[2332]: I0129 12:01:00.113792 2332 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:01:00.114179 kubelet[2332]: E0129 12:01:00.113852 2332 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:01:00.115117 kubelet[2332]: W0129 12:01:00.115067 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.115117 kubelet[2332]: E0129 12:01:00.115109 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:00.123821 kubelet[2332]: I0129 12:01:00.123766 
2332 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:01:00.123821 kubelet[2332]: I0129 12:01:00.123798 2332 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:01:00.123821 kubelet[2332]: I0129 12:01:00.123826 2332 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:01:00.198355 kubelet[2332]: I0129 12:01:00.198305 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:01:00.198846 kubelet[2332]: E0129 12:01:00.198799 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 12:01:00.204730 kubelet[2332]: I0129 12:01:00.204689 2332 policy_none.go:49] "None policy: Start" Jan 29 12:01:00.205526 kubelet[2332]: I0129 12:01:00.205492 2332 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:01:00.205581 kubelet[2332]: I0129 12:01:00.205537 2332 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:01:00.214694 kubelet[2332]: E0129 12:01:00.214653 2332 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:01:00.271258 kubelet[2332]: I0129 12:01:00.271220 2332 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:01:00.271516 kubelet[2332]: I0129 12:01:00.271467 2332 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:01:00.272191 kubelet[2332]: I0129 12:01:00.271610 2332 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:01:00.273137 kubelet[2332]: E0129 12:01:00.273103 2332 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 12:01:00.298959 kubelet[2332]: E0129 12:01:00.298854 2332 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms" Jan 29 12:01:00.400828 kubelet[2332]: I0129 12:01:00.400798 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:01:00.401152 kubelet[2332]: E0129 12:01:00.401132 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 12:01:00.415446 kubelet[2332]: I0129 12:01:00.415353 2332 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 29 12:01:00.416639 kubelet[2332]: I0129 12:01:00.416609 2332 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 29 12:01:00.417361 kubelet[2332]: I0129 12:01:00.417345 2332 topology_manager.go:215] "Topology Admit Handler" podUID="cda33c7518c1e65b2ab9c24236d21c44" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 29 12:01:00.499541 kubelet[2332]: I0129 12:01:00.499473 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 29 12:01:00.499541 kubelet[2332]: I0129 12:01:00.499533 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" 
(UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:01:00.499752 kubelet[2332]: I0129 12:01:00.499563 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:01:00.499752 kubelet[2332]: I0129 12:01:00.499587 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:01:00.499752 kubelet[2332]: I0129 12:01:00.499625 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:01:00.499752 kubelet[2332]: I0129 12:01:00.499644 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:01:00.499752 kubelet[2332]: I0129 12:01:00.499664 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-k8s-certs\") pod \"kube-apiserver-localhost\" 
(UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:01:00.499939 kubelet[2332]: I0129 12:01:00.499685 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost" Jan 29 12:01:00.499939 kubelet[2332]: I0129 12:01:00.499703 2332 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 12:01:00.699591 kubelet[2332]: E0129 12:01:00.699528 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms" Jan 29 12:01:00.722188 kubelet[2332]: E0129 12:01:00.722105 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:00.722755 kubelet[2332]: E0129 12:01:00.722660 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:00.723025 containerd[1565]: time="2025-01-29T12:01:00.722972177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:00.723391 containerd[1565]: 
time="2025-01-29T12:01:00.723189029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:00.724566 kubelet[2332]: E0129 12:01:00.724548 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:00.724920 containerd[1565]: time="2025-01-29T12:01:00.724891213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda33c7518c1e65b2ab9c24236d21c44,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:00.802252 kubelet[2332]: I0129 12:01:00.802212 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:01:00.802655 kubelet[2332]: E0129 12:01:00.802604 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 12:01:01.006731 kubelet[2332]: W0129 12:01:01.006599 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.006731 kubelet[2332]: E0129 12:01:01.006658 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.115:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.230909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1482045399.mount: Deactivated successfully. 
Jan 29 12:01:01.240556 containerd[1565]: time="2025-01-29T12:01:01.240479438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:01.241599 containerd[1565]: time="2025-01-29T12:01:01.241559147Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:01.242693 containerd[1565]: time="2025-01-29T12:01:01.242647705Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 12:01:01.243629 containerd[1565]: time="2025-01-29T12:01:01.243591431Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:01.244631 containerd[1565]: time="2025-01-29T12:01:01.244596377Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:01.245653 containerd[1565]: time="2025-01-29T12:01:01.245615195Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:01.246692 containerd[1565]: time="2025-01-29T12:01:01.246617446Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:01:01.249603 containerd[1565]: time="2025-01-29T12:01:01.249553143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:01:01.251076 
containerd[1565]: time="2025-01-29T12:01:01.251032980Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.96027ms" Jan 29 12:01:01.252687 containerd[1565]: time="2025-01-29T12:01:01.252658169Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 527.691051ms" Jan 29 12:01:01.253432 containerd[1565]: time="2025-01-29T12:01:01.253389278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.115752ms" Jan 29 12:01:01.466467 kubelet[2332]: W0129 12:01:01.466373 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.466467 kubelet[2332]: E0129 12:01:01.466459 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.115:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.501203 kubelet[2332]: E0129 12:01:01.501104 2332 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Jan 29 12:01:01.537081 kubelet[2332]: W0129 12:01:01.537011 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.537081 kubelet[2332]: E0129 12:01:01.537056 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.115:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.593212 kubelet[2332]: W0129 12:01:01.593112 2332 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.593212 kubelet[2332]: E0129 12:01:01.593202 2332 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.115:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:01.636131 kubelet[2332]: I0129 12:01:01.636083 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 29 12:01:01.636558 kubelet[2332]: E0129 12:01:01.636531 2332 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Jan 29 12:01:02.168510 kubelet[2332]: E0129 12:01:02.168474 2332 certificate_manager.go:562] 
kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.115:6443: connect: connection refused Jan 29 12:01:02.206872 containerd[1565]: time="2025-01-29T12:01:02.205488499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:02.206872 containerd[1565]: time="2025-01-29T12:01:02.206194091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:02.206872 containerd[1565]: time="2025-01-29T12:01:02.206225473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:02.206872 containerd[1565]: time="2025-01-29T12:01:02.206344072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:02.210854 containerd[1565]: time="2025-01-29T12:01:02.210623775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:02.210854 containerd[1565]: time="2025-01-29T12:01:02.210680183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:02.210854 containerd[1565]: time="2025-01-29T12:01:02.210694576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:02.210854 containerd[1565]: time="2025-01-29T12:01:02.210768955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:02.231524 containerd[1565]: time="2025-01-29T12:01:02.231410938Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:01:02.231763 containerd[1565]: time="2025-01-29T12:01:02.231731577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:01:02.231862 containerd[1565]: time="2025-01-29T12:01:02.231838139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:02.232066 containerd[1565]: time="2025-01-29T12:01:02.232025113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:01:02.323130 containerd[1565]: time="2025-01-29T12:01:02.323084701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:cda33c7518c1e65b2ab9c24236d21c44,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e1e92ea0bf62c28792d666589726a4d2c7e4cf7562665538f190d5b22b8d9bd\""
Jan 29 12:01:02.324653 kubelet[2332]: E0129 12:01:02.324604 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:02.328300 containerd[1565]: time="2025-01-29T12:01:02.328262013Z" level=info msg="CreateContainer within sandbox \"3e1e92ea0bf62c28792d666589726a4d2c7e4cf7562665538f190d5b22b8d9bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 29 12:01:02.336009 containerd[1565]: time="2025-01-29T12:01:02.335070610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa768cfa496cfc9e3116a993aa375fe975b16fd6ce1f9fb592debd15611c4a39\""
Jan 29 12:01:02.336148 kubelet[2332]: E0129 12:01:02.336082 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:02.337679 containerd[1565]: time="2025-01-29T12:01:02.337650501Z" level=info msg="CreateContainer within sandbox \"aa768cfa496cfc9e3116a993aa375fe975b16fd6ce1f9fb592debd15611c4a39\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 29 12:01:02.357059 containerd[1565]: time="2025-01-29T12:01:02.356996782Z" level=info msg="CreateContainer within sandbox \"3e1e92ea0bf62c28792d666589726a4d2c7e4cf7562665538f190d5b22b8d9bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5e9d31c2ea92be267d55f4b332debad4878a48755052e177f72befbd20ee2d87\""
Jan 29 12:01:02.357663 containerd[1565]: time="2025-01-29T12:01:02.357628537Z" level=info msg="StartContainer for \"5e9d31c2ea92be267d55f4b332debad4878a48755052e177f72befbd20ee2d87\""
Jan 29 12:01:02.360963 containerd[1565]: time="2025-01-29T12:01:02.360914131Z" level=info msg="CreateContainer within sandbox \"aa768cfa496cfc9e3116a993aa375fe975b16fd6ce1f9fb592debd15611c4a39\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a7d04952de67e8ae0813e66b63c86adfd3d266b140f156e553d7f89ed127a270\""
Jan 29 12:01:02.361430 containerd[1565]: time="2025-01-29T12:01:02.361400586Z" level=info msg="StartContainer for \"a7d04952de67e8ae0813e66b63c86adfd3d266b140f156e553d7f89ed127a270\""
Jan 29 12:01:02.380742 containerd[1565]: time="2025-01-29T12:01:02.380536649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"440b6c80a9e649d1e3feb027dbd1f1ca12d67b42fcc82a9e421078864148e205\""
Jan 29 12:01:02.381701 kubelet[2332]: E0129 12:01:02.381676 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:02.383771 containerd[1565]: time="2025-01-29T12:01:02.383720462Z" level=info msg="CreateContainer within sandbox \"440b6c80a9e649d1e3feb027dbd1f1ca12d67b42fcc82a9e421078864148e205\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 29 12:01:02.410328 containerd[1565]: time="2025-01-29T12:01:02.410274256Z" level=info msg="CreateContainer within sandbox \"440b6c80a9e649d1e3feb027dbd1f1ca12d67b42fcc82a9e421078864148e205\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a321507bda77d43923e9a8ace492adcc5789a78e560ea30e1250c32888845af8\""
Jan 29 12:01:02.410959 containerd[1565]: time="2025-01-29T12:01:02.410932571Z" level=info msg="StartContainer for \"a321507bda77d43923e9a8ace492adcc5789a78e560ea30e1250c32888845af8\""
Jan 29 12:01:02.439311 containerd[1565]: time="2025-01-29T12:01:02.439138680Z" level=info msg="StartContainer for \"a7d04952de67e8ae0813e66b63c86adfd3d266b140f156e553d7f89ed127a270\" returns successfully"
Jan 29 12:01:02.446529 containerd[1565]: time="2025-01-29T12:01:02.446401349Z" level=info msg="StartContainer for \"5e9d31c2ea92be267d55f4b332debad4878a48755052e177f72befbd20ee2d87\" returns successfully"
Jan 29 12:01:02.487114 containerd[1565]: time="2025-01-29T12:01:02.486052587Z" level=info msg="StartContainer for \"a321507bda77d43923e9a8ace492adcc5789a78e560ea30e1250c32888845af8\" returns successfully"
Jan 29 12:01:03.125033 kubelet[2332]: E0129 12:01:03.124909 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:03.128003 kubelet[2332]: E0129 12:01:03.127902 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:03.131186 kubelet[2332]: E0129 12:01:03.131049 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:03.239333 kubelet[2332]: I0129 12:01:03.239302 2332 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 12:01:03.399939 kubelet[2332]: E0129 12:01:03.399811 2332 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 29 12:01:03.536507 kubelet[2332]: E0129 12:01:03.536386 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f2816eb0a012e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:01:00.085993774 +0000 UTC m=+0.854422255,LastTimestamp:2025-01-29 12:01:00.085993774 +0000 UTC m=+0.854422255,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 12:01:03.592997 kubelet[2332]: I0129 12:01:03.592851 2332 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 12:01:03.595870 kubelet[2332]: E0129 12:01:03.595784 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f2816ebb8d1b5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:01:00.097450421 +0000 UTC m=+0.865878902,LastTimestamp:2025-01-29 12:01:00.097450421 +0000 UTC m=+0.865878902,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 12:01:03.659576 kubelet[2332]: E0129 12:01:03.659430 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:03.759927 kubelet[2332]: E0129 12:01:03.759828 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:03.763012 kubelet[2332]: E0129 12:01:03.762895 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f2816ed3d21c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node localhost status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:01:00.122898881 +0000 UTC m=+0.891327363,LastTimestamp:2025-01-29 12:01:00.122898881 +0000 UTC m=+0.891327363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 12:01:03.860893 kubelet[2332]: E0129 12:01:03.860834 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:03.963438 kubelet[2332]: E0129 12:01:03.962703 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:03.963438 kubelet[2332]: E0129 12:01:03.962821 2332 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f2816ed3d53d5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node localhost status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 12:01:00.122911701 +0000 UTC m=+0.891340172,LastTimestamp:2025-01-29 12:01:00.122911701 +0000 UTC m=+0.891340172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 29 12:01:04.063181 kubelet[2332]: E0129 12:01:04.063092 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.132755 kubelet[2332]: E0129 12:01:04.132719 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:04.132962 kubelet[2332]: E0129 12:01:04.132932 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:04.163728 kubelet[2332]: E0129 12:01:04.163680 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.264285 kubelet[2332]: E0129 12:01:04.264124 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.365034 kubelet[2332]: E0129 12:01:04.364965 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.466176 kubelet[2332]: E0129 12:01:04.466101 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.566927 kubelet[2332]: E0129 12:01:04.566794 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:04.654243 kubelet[2332]: E0129 12:01:04.653622 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:04.668039 kubelet[2332]: E0129 12:01:04.667842 2332 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 29 12:01:05.084143 kubelet[2332]: I0129 12:01:05.084091 2332 apiserver.go:52] "Watching apiserver"
Jan 29 12:01:05.096666 kubelet[2332]: I0129 12:01:05.096597 2332 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 12:01:05.145926 kubelet[2332]: E0129 12:01:05.145851 2332 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:05.660545 systemd[1]: Reloading requested from client PID 2613 ('systemctl') (unit session-5.scope)...
Jan 29 12:01:05.660564 systemd[1]: Reloading...
Jan 29 12:01:05.750196 zram_generator::config[2658]: No configuration found.
Jan 29 12:01:05.858846 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:01:05.942497 systemd[1]: Reloading finished in 281 ms.
Jan 29 12:01:05.977650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:01:05.992746 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 12:01:05.993270 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:01:06.003468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:01:06.151393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:01:06.157968 (kubelet)[2707]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:01:06.203282 kubelet[2707]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:01:06.204180 kubelet[2707]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:01:06.204180 kubelet[2707]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:01:06.204180 kubelet[2707]: I0129 12:01:06.203700 2707 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:01:06.209874 kubelet[2707]: I0129 12:01:06.209828 2707 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 29 12:01:06.209874 kubelet[2707]: I0129 12:01:06.209861 2707 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:01:06.210225 kubelet[2707]: I0129 12:01:06.210097 2707 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 29 12:01:06.212464 kubelet[2707]: I0129 12:01:06.212426 2707 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 29 12:01:06.215588 kubelet[2707]: I0129 12:01:06.215547 2707 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:01:06.224376 kubelet[2707]: I0129 12:01:06.224332 2707 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 29 12:01:06.225029 kubelet[2707]: I0129 12:01:06.224983 2707 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:01:06.225252 kubelet[2707]: I0129 12:01:06.225019 2707 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 29 12:01:06.225376 kubelet[2707]: I0129 12:01:06.225270 2707 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:01:06.225376 kubelet[2707]: I0129 12:01:06.225312 2707 container_manager_linux.go:301] "Creating device plugin manager"
Jan 29 12:01:06.225376 kubelet[2707]: I0129 12:01:06.225366 2707 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:01:06.225513 kubelet[2707]: I0129 12:01:06.225490 2707 kubelet.go:400] "Attempting to sync node with API server"
Jan 29 12:01:06.225513 kubelet[2707]: I0129 12:01:06.225507 2707 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:01:06.225584 kubelet[2707]: I0129 12:01:06.225533 2707 kubelet.go:312] "Adding apiserver pod source"
Jan 29 12:01:06.225584 kubelet[2707]: I0129 12:01:06.225556 2707 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:01:06.227033 kubelet[2707]: I0129 12:01:06.227012 2707 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:01:06.229186 kubelet[2707]: I0129 12:01:06.227368 2707 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:01:06.229186 kubelet[2707]: I0129 12:01:06.227954 2707 server.go:1264] "Started kubelet"
Jan 29 12:01:06.229186 kubelet[2707]: I0129 12:01:06.228414 2707 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:01:06.229626 kubelet[2707]: I0129 12:01:06.229602 2707 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:01:06.230420 kubelet[2707]: I0129 12:01:06.230394 2707 server.go:455] "Adding debug handlers to kubelet server"
Jan 29 12:01:06.234310 kubelet[2707]: I0129 12:01:06.234255 2707 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:01:06.234509 kubelet[2707]: I0129 12:01:06.234487 2707 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:01:06.238648 kubelet[2707]: I0129 12:01:06.238313 2707 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 29 12:01:06.238648 kubelet[2707]: I0129 12:01:06.238477 2707 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:01:06.238648 kubelet[2707]: I0129 12:01:06.238623 2707 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:01:06.241106 kubelet[2707]: I0129 12:01:06.241048 2707 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:01:06.243249 kubelet[2707]: I0129 12:01:06.243225 2707 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:01:06.243249 kubelet[2707]: I0129 12:01:06.243246 2707 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:01:06.251797 kubelet[2707]: I0129 12:01:06.251751 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:01:06.253278 kubelet[2707]: I0129 12:01:06.253237 2707 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:01:06.253321 kubelet[2707]: I0129 12:01:06.253298 2707 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 29 12:01:06.253321 kubelet[2707]: I0129 12:01:06.253316 2707 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 29 12:01:06.253430 kubelet[2707]: E0129 12:01:06.253382 2707 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 29 12:01:06.295726 kubelet[2707]: I0129 12:01:06.295679 2707 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 29 12:01:06.295726 kubelet[2707]: I0129 12:01:06.295698 2707 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 29 12:01:06.295726 kubelet[2707]: I0129 12:01:06.295715 2707 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:01:06.296019 kubelet[2707]: I0129 12:01:06.295844 2707 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 29 12:01:06.296019 kubelet[2707]: I0129 12:01:06.295860 2707 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 29 12:01:06.296019 kubelet[2707]: I0129 12:01:06.295880 2707 policy_none.go:49] "None policy: Start"
Jan 29 12:01:06.296450 kubelet[2707]: I0129 12:01:06.296425 2707 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 29 12:01:06.296450 kubelet[2707]: I0129 12:01:06.296460 2707 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:01:06.296671 kubelet[2707]: I0129 12:01:06.296660 2707 state_mem.go:75] "Updated machine memory state"
Jan 29 12:01:06.298315 kubelet[2707]: I0129 12:01:06.298283 2707 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:01:06.298517 kubelet[2707]: I0129 12:01:06.298457 2707 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:01:06.299813 kubelet[2707]: I0129 12:01:06.299225 2707 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:01:06.353852 kubelet[2707]: I0129 12:01:06.353780 2707 topology_manager.go:215] "Topology Admit Handler" podUID="cda33c7518c1e65b2ab9c24236d21c44" podNamespace="kube-system" podName="kube-apiserver-localhost"
Jan 29 12:01:06.353964 kubelet[2707]: I0129 12:01:06.353908 2707 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Jan 29 12:01:06.353964 kubelet[2707]: I0129 12:01:06.353956 2707 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost"
Jan 29 12:01:06.360623 kubelet[2707]: E0129 12:01:06.360558 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 12:01:06.404752 kubelet[2707]: I0129 12:01:06.404716 2707 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Jan 29 12:01:06.410678 kubelet[2707]: I0129 12:01:06.410632 2707 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Jan 29 12:01:06.410802 kubelet[2707]: I0129 12:01:06.410749 2707 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Jan 29 12:01:06.540160 kubelet[2707]: I0129 12:01:06.540019 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 12:01:06.540160 kubelet[2707]: I0129 12:01:06.540075 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 12:01:06.540160 kubelet[2707]: I0129 12:01:06.540104 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 12:01:06.540160 kubelet[2707]: I0129 12:01:06.540132 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost"
Jan 29 12:01:06.540160 kubelet[2707]: I0129 12:01:06.540152 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 12:01:06.540420 kubelet[2707]: I0129 12:01:06.540195 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cda33c7518c1e65b2ab9c24236d21c44-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"cda33c7518c1e65b2ab9c24236d21c44\") " pod="kube-system/kube-apiserver-localhost"
Jan 29 12:01:06.540420 kubelet[2707]: I0129 12:01:06.540239 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 12:01:06.540420 kubelet[2707]: I0129 12:01:06.540259 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 12:01:06.540420 kubelet[2707]: I0129 12:01:06.540281 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost"
Jan 29 12:01:06.659543 kubelet[2707]: E0129 12:01:06.659484 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:06.661215 kubelet[2707]: E0129 12:01:06.661154 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:06.661386 kubelet[2707]: E0129 12:01:06.661358 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:07.227714 kubelet[2707]: I0129 12:01:07.227672 2707 apiserver.go:52] "Watching apiserver"
Jan 29 12:01:07.238838 kubelet[2707]: I0129 12:01:07.238802 2707 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 12:01:07.264819 kubelet[2707]: E0129 12:01:07.264789 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:07.264953 kubelet[2707]: E0129 12:01:07.264888 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:07.272074 kubelet[2707]: E0129 12:01:07.272026 2707 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jan 29 12:01:07.272501 kubelet[2707]: E0129 12:01:07.272474 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:07.283728 kubelet[2707]: I0129 12:01:07.283664 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.28364747 podStartE2EDuration="1.28364747s" podCreationTimestamp="2025-01-29 12:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:07.283590877 +0000 UTC m=+1.120565096" watchObservedRunningTime="2025-01-29 12:01:07.28364747 +0000 UTC m=+1.120621689"
Jan 29 12:01:07.290265 kubelet[2707]: I0129 12:01:07.290191 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.290154794 podStartE2EDuration="2.290154794s" podCreationTimestamp="2025-01-29 12:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:07.290120721 +0000 UTC m=+1.127094950" watchObservedRunningTime="2025-01-29 12:01:07.290154794 +0000 UTC m=+1.127129013"
Jan 29 12:01:07.297568 kubelet[2707]: I0129 12:01:07.297403 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.297382997 podStartE2EDuration="1.297382997s" podCreationTimestamp="2025-01-29 12:01:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:07.297383629 +0000 UTC m=+1.134357858" watchObservedRunningTime="2025-01-29 12:01:07.297382997 +0000 UTC m=+1.134357216"
Jan 29 12:01:07.497200 sudo[1726]: pam_unix(sudo:session): session closed for user root
Jan 29 12:01:07.731769 sshd[1719]: pam_unix(sshd:session): session closed for user core
Jan 29 12:01:07.736947 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:60160.service: Deactivated successfully.
Jan 29 12:01:07.741390 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 12:01:07.742234 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit.
Jan 29 12:01:07.743256 systemd-logind[1551]: Removed session 5.
Jan 29 12:01:08.266220 kubelet[2707]: E0129 12:01:08.266188 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:08.979677 kubelet[2707]: E0129 12:01:08.979640 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:09.392912 kubelet[2707]: E0129 12:01:09.392842 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:13.616891 kubelet[2707]: E0129 12:01:13.616848 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:13.810252 update_engine[1556]: I20250129 12:01:13.810137 1556 update_attempter.cc:509] Updating boot flags...
Jan 29 12:01:13.838202 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2785)
Jan 29 12:01:13.874937 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2786)
Jan 29 12:01:13.904615 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2786)
Jan 29 12:01:14.276387 kubelet[2707]: E0129 12:01:14.276260 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:18.984903 kubelet[2707]: E0129 12:01:18.984859 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:19.397395 kubelet[2707]: E0129 12:01:19.397359 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:01:21.180418 kubelet[2707]: I0129 12:01:21.180386 2707 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 29 12:01:21.180840 containerd[1565]: time="2025-01-29T12:01:21.180798501Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 12:01:21.181110 kubelet[2707]: I0129 12:01:21.180991 2707 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 29 12:01:21.627197 kubelet[2707]: I0129 12:01:21.624542 2707 topology_manager.go:215] "Topology Admit Handler" podUID="179eda3f-be19-4e50-aec4-02baeea35a99" podNamespace="kube-system" podName="kube-proxy-np99t"
Jan 29 12:01:21.628011 kubelet[2707]: I0129 12:01:21.627966 2707 topology_manager.go:215] "Topology Admit Handler" podUID="f1284613-b244-443b-a493-bbc2a937901f" podNamespace="kube-flannel" podName="kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719372 kubelet[2707]: I0129 12:01:21.719333 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/179eda3f-be19-4e50-aec4-02baeea35a99-kube-proxy\") pod \"kube-proxy-np99t\" (UID: \"179eda3f-be19-4e50-aec4-02baeea35a99\") " pod="kube-system/kube-proxy-np99t"
Jan 29 12:01:21.719372 kubelet[2707]: I0129 12:01:21.719369 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/f1284613-b244-443b-a493-bbc2a937901f-cni-plugin\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719372 kubelet[2707]: I0129 12:01:21.719389 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/f1284613-b244-443b-a493-bbc2a937901f-flannel-cfg\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719558 kubelet[2707]: I0129 12:01:21.719409 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/179eda3f-be19-4e50-aec4-02baeea35a99-xtables-lock\") pod \"kube-proxy-np99t\" (UID: \"179eda3f-be19-4e50-aec4-02baeea35a99\") " pod="kube-system/kube-proxy-np99t"
Jan 29 12:01:21.719558 kubelet[2707]: I0129 12:01:21.719424 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/179eda3f-be19-4e50-aec4-02baeea35a99-lib-modules\") pod \"kube-proxy-np99t\" (UID: \"179eda3f-be19-4e50-aec4-02baeea35a99\") " pod="kube-system/kube-proxy-np99t"
Jan 29 12:01:21.719558 kubelet[2707]: I0129 12:01:21.719443 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6qq\" (UniqueName: \"kubernetes.io/projected/179eda3f-be19-4e50-aec4-02baeea35a99-kube-api-access-vq6qq\") pod \"kube-proxy-np99t\" (UID: \"179eda3f-be19-4e50-aec4-02baeea35a99\") " pod="kube-system/kube-proxy-np99t"
Jan 29 12:01:21.719558 kubelet[2707]: I0129 12:01:21.719490 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/f1284613-b244-443b-a493-bbc2a937901f-cni\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719558 kubelet[2707]: I0129 12:01:21.719530 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1284613-b244-443b-a493-bbc2a937901f-xtables-lock\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719688 kubelet[2707]: I0129 12:01:21.719547 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f1284613-b244-443b-a493-bbc2a937901f-run\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.719688 kubelet[2707]: I0129 12:01:21.719624 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9h7\" (UniqueName: \"kubernetes.io/projected/f1284613-b244-443b-a493-bbc2a937901f-kube-api-access-7g9h7\") pod \"kube-flannel-ds-nx4rj\" (UID: \"f1284613-b244-443b-a493-bbc2a937901f\") " pod="kube-flannel/kube-flannel-ds-nx4rj"
Jan 29 12:01:21.824619 kubelet[2707]: E0129 12:01:21.824568 2707 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 12:01:21.824619 kubelet[2707]: E0129 12:01:21.824621 2707 projected.go:200] Error preparing data for projected volume kube-api-access-vq6qq for pod kube-system/kube-proxy-np99t: configmap "kube-root-ca.crt" not found
Jan 29 12:01:21.824792 kubelet[2707]: E0129 12:01:21.824568 2707 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 29 12:01:21.824792 kubelet[2707]: E0129 12:01:21.824664 2707 projected.go:200] Error preparing data for projected volume kube-api-access-7g9h7 for pod kube-flannel/kube-flannel-ds-nx4rj: configmap "kube-root-ca.crt" not found
Jan 29 12:01:21.824792 kubelet[2707]: E0129 12:01:21.824693 2707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/179eda3f-be19-4e50-aec4-02baeea35a99-kube-api-access-vq6qq podName:179eda3f-be19-4e50-aec4-02baeea35a99 nodeName:}" failed. No retries permitted until 2025-01-29 12:01:22.324667392 +0000 UTC m=+16.161641611 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-vq6qq" (UniqueName: "kubernetes.io/projected/179eda3f-be19-4e50-aec4-02baeea35a99-kube-api-access-vq6qq") pod "kube-proxy-np99t" (UID: "179eda3f-be19-4e50-aec4-02baeea35a99") : configmap "kube-root-ca.crt" not found Jan 29 12:01:21.824792 kubelet[2707]: E0129 12:01:21.824718 2707 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f1284613-b244-443b-a493-bbc2a937901f-kube-api-access-7g9h7 podName:f1284613-b244-443b-a493-bbc2a937901f nodeName:}" failed. No retries permitted until 2025-01-29 12:01:22.324702002 +0000 UTC m=+16.161676221 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7g9h7" (UniqueName: "kubernetes.io/projected/f1284613-b244-443b-a493-bbc2a937901f-kube-api-access-7g9h7") pod "kube-flannel-ds-nx4rj" (UID: "f1284613-b244-443b-a493-bbc2a937901f") : configmap "kube-root-ca.crt" not found Jan 29 12:01:22.531671 kubelet[2707]: E0129 12:01:22.531632 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:22.532145 containerd[1565]: time="2025-01-29T12:01:22.532037747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np99t,Uid:179eda3f-be19-4e50-aec4-02baeea35a99,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:22.533695 kubelet[2707]: E0129 12:01:22.533656 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:22.533943 containerd[1565]: time="2025-01-29T12:01:22.533918592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-nx4rj,Uid:f1284613-b244-443b-a493-bbc2a937901f,Namespace:kube-flannel,Attempt:0,}" Jan 29 12:01:22.577315 containerd[1565]: time="2025-01-29T12:01:22.577240606Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:22.577452 containerd[1565]: time="2025-01-29T12:01:22.577286178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:22.577452 containerd[1565]: time="2025-01-29T12:01:22.577296228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.577452 containerd[1565]: time="2025-01-29T12:01:22.577395890Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.584173 containerd[1565]: time="2025-01-29T12:01:22.583860666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:22.584173 containerd[1565]: time="2025-01-29T12:01:22.583937090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:22.584173 containerd[1565]: time="2025-01-29T12:01:22.583968503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.584173 containerd[1565]: time="2025-01-29T12:01:22.584064828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:22.615333 containerd[1565]: time="2025-01-29T12:01:22.615295462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np99t,Uid:179eda3f-be19-4e50-aec4-02baeea35a99,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bf6e6d2d114fe27e99bc0c11497bd82428565edc0188689c569d334a6be66fb\"" Jan 29 12:01:22.616020 kubelet[2707]: E0129 12:01:22.615815 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:22.618248 containerd[1565]: time="2025-01-29T12:01:22.618155363Z" level=info msg="CreateContainer within sandbox \"1bf6e6d2d114fe27e99bc0c11497bd82428565edc0188689c569d334a6be66fb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:01:22.631658 containerd[1565]: time="2025-01-29T12:01:22.631612369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-nx4rj,Uid:f1284613-b244-443b-a493-bbc2a937901f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\"" Jan 29 12:01:22.632354 kubelet[2707]: E0129 12:01:22.632320 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:22.633364 containerd[1565]: time="2025-01-29T12:01:22.633336859Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 29 12:01:22.641800 containerd[1565]: time="2025-01-29T12:01:22.641677731Z" level=info msg="CreateContainer within sandbox \"1bf6e6d2d114fe27e99bc0c11497bd82428565edc0188689c569d334a6be66fb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d7225461baec7a11105c15a405b3592ba9f3628b619d6406d248ee80c4aa3f1\"" Jan 29 12:01:22.642540 containerd[1565]: 
time="2025-01-29T12:01:22.642501063Z" level=info msg="StartContainer for \"7d7225461baec7a11105c15a405b3592ba9f3628b619d6406d248ee80c4aa3f1\"" Jan 29 12:01:22.704429 containerd[1565]: time="2025-01-29T12:01:22.704374925Z" level=info msg="StartContainer for \"7d7225461baec7a11105c15a405b3592ba9f3628b619d6406d248ee80c4aa3f1\" returns successfully" Jan 29 12:01:23.293682 kubelet[2707]: E0129 12:01:23.293651 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:23.303957 kubelet[2707]: I0129 12:01:23.303762 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-np99t" podStartSLOduration=2.303744881 podStartE2EDuration="2.303744881s" podCreationTimestamp="2025-01-29 12:01:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:23.303684219 +0000 UTC m=+17.140658438" watchObservedRunningTime="2025-01-29 12:01:23.303744881 +0000 UTC m=+17.140719100" Jan 29 12:01:24.332124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1730111561.mount: Deactivated successfully. 
Jan 29 12:01:24.370225 containerd[1565]: time="2025-01-29T12:01:24.370157775Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.371072 containerd[1565]: time="2025-01-29T12:01:24.371033161Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 29 12:01:24.372449 containerd[1565]: time="2025-01-29T12:01:24.372406396Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.374887 containerd[1565]: time="2025-01-29T12:01:24.374855179Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:24.375682 containerd[1565]: time="2025-01-29T12:01:24.375610155Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.742237454s" Jan 29 12:01:24.375682 containerd[1565]: time="2025-01-29T12:01:24.375675215Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 29 12:01:24.377479 containerd[1565]: time="2025-01-29T12:01:24.377455417Z" level=info msg="CreateContainer within sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 29 12:01:24.389895 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1643802867.mount: Deactivated successfully. Jan 29 12:01:24.390653 containerd[1565]: time="2025-01-29T12:01:24.390607534Z" level=info msg="CreateContainer within sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114\"" Jan 29 12:01:24.391194 containerd[1565]: time="2025-01-29T12:01:24.391054030Z" level=info msg="StartContainer for \"fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114\"" Jan 29 12:01:24.442900 containerd[1565]: time="2025-01-29T12:01:24.442822977Z" level=info msg="StartContainer for \"fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114\" returns successfully" Jan 29 12:01:24.460212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114-rootfs.mount: Deactivated successfully. 
Jan 29 12:01:24.495632 containerd[1565]: time="2025-01-29T12:01:24.494019745Z" level=info msg="shim disconnected" id=fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114 namespace=k8s.io Jan 29 12:01:24.495632 containerd[1565]: time="2025-01-29T12:01:24.495619946Z" level=warning msg="cleaning up after shim disconnected" id=fe1ea71b0e78f1db95444d4e9e82160f53b65ff1f12098baba9e009695985114 namespace=k8s.io Jan 29 12:01:24.495632 containerd[1565]: time="2025-01-29T12:01:24.495630036Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:25.298000 kubelet[2707]: E0129 12:01:25.297966 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:25.298655 containerd[1565]: time="2025-01-29T12:01:25.298612627Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 29 12:01:26.974323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682102113.mount: Deactivated successfully. 
Jan 29 12:01:27.472637 containerd[1565]: time="2025-01-29T12:01:27.472561885Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:27.473320 containerd[1565]: time="2025-01-29T12:01:27.473272400Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 29 12:01:27.474420 containerd[1565]: time="2025-01-29T12:01:27.474386327Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:27.476820 containerd[1565]: time="2025-01-29T12:01:27.476798702Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:01:27.478007 containerd[1565]: time="2025-01-29T12:01:27.477971848Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.179315324s" Jan 29 12:01:27.478066 containerd[1565]: time="2025-01-29T12:01:27.478007509Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 29 12:01:27.479953 containerd[1565]: time="2025-01-29T12:01:27.479929907Z" level=info msg="CreateContainer within sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:01:27.492564 containerd[1565]: time="2025-01-29T12:01:27.492531262Z" level=info msg="CreateContainer within 
sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9\"" Jan 29 12:01:27.492815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264673936.mount: Deactivated successfully. Jan 29 12:01:27.493687 containerd[1565]: time="2025-01-29T12:01:27.492997040Z" level=info msg="StartContainer for \"6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9\"" Jan 29 12:01:27.541314 containerd[1565]: time="2025-01-29T12:01:27.541276185Z" level=info msg="StartContainer for \"6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9\" returns successfully" Jan 29 12:01:27.628633 kubelet[2707]: I0129 12:01:27.628584 2707 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:01:27.799334 containerd[1565]: time="2025-01-29T12:01:27.799193154Z" level=info msg="shim disconnected" id=6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9 namespace=k8s.io Jan 29 12:01:27.799334 containerd[1565]: time="2025-01-29T12:01:27.799262171Z" level=warning msg="cleaning up after shim disconnected" id=6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9 namespace=k8s.io Jan 29 12:01:27.799334 containerd[1565]: time="2025-01-29T12:01:27.799275648Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:01:27.810130 kubelet[2707]: I0129 12:01:27.810010 2707 topology_manager.go:215] "Topology Admit Handler" podUID="30517739-6e65-4d78-b6f3-895a92816d56" podNamespace="kube-system" podName="coredns-7db6d8ff4d-r7qfb" Jan 29 12:01:27.812180 kubelet[2707]: I0129 12:01:27.811833 2707 topology_manager.go:215] "Topology Admit Handler" podUID="d21b1dc6-9243-4dbd-93f7-df19f07c67ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-458dl" Jan 29 12:01:27.860854 kubelet[2707]: I0129 12:01:27.860752 2707 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcvm8\" (UniqueName: \"kubernetes.io/projected/30517739-6e65-4d78-b6f3-895a92816d56-kube-api-access-hcvm8\") pod \"coredns-7db6d8ff4d-r7qfb\" (UID: \"30517739-6e65-4d78-b6f3-895a92816d56\") " pod="kube-system/coredns-7db6d8ff4d-r7qfb" Jan 29 12:01:27.860854 kubelet[2707]: I0129 12:01:27.860847 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l8zt\" (UniqueName: \"kubernetes.io/projected/d21b1dc6-9243-4dbd-93f7-df19f07c67ec-kube-api-access-2l8zt\") pod \"coredns-7db6d8ff4d-458dl\" (UID: \"d21b1dc6-9243-4dbd-93f7-df19f07c67ec\") " pod="kube-system/coredns-7db6d8ff4d-458dl" Jan 29 12:01:27.861079 kubelet[2707]: I0129 12:01:27.860874 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d21b1dc6-9243-4dbd-93f7-df19f07c67ec-config-volume\") pod \"coredns-7db6d8ff4d-458dl\" (UID: \"d21b1dc6-9243-4dbd-93f7-df19f07c67ec\") " pod="kube-system/coredns-7db6d8ff4d-458dl" Jan 29 12:01:27.861079 kubelet[2707]: I0129 12:01:27.861002 2707 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30517739-6e65-4d78-b6f3-895a92816d56-config-volume\") pod \"coredns-7db6d8ff4d-r7qfb\" (UID: \"30517739-6e65-4d78-b6f3-895a92816d56\") " pod="kube-system/coredns-7db6d8ff4d-r7qfb" Jan 29 12:01:27.893480 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c276bddebc66958e987e35c44f36f2fa1031e79aedba0e91d48ea422e1397f9-rootfs.mount: Deactivated successfully. 
Jan 29 12:01:28.114012 kubelet[2707]: E0129 12:01:28.113970 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:28.114535 containerd[1565]: time="2025-01-29T12:01:28.114502437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r7qfb,Uid:30517739-6e65-4d78-b6f3-895a92816d56,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:28.116797 kubelet[2707]: E0129 12:01:28.116763 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:28.117143 containerd[1565]: time="2025-01-29T12:01:28.117111000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-458dl,Uid:d21b1dc6-9243-4dbd-93f7-df19f07c67ec,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:28.153903 containerd[1565]: time="2025-01-29T12:01:28.153836396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-458dl,Uid:d21b1dc6-9243-4dbd-93f7-df19f07c67ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:28.154104 kubelet[2707]: E0129 12:01:28.154060 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:28.154175 kubelet[2707]: E0129 12:01:28.154134 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-458dl" Jan 29 12:01:28.154175 kubelet[2707]: E0129 12:01:28.154153 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-458dl" Jan 29 12:01:28.154259 kubelet[2707]: E0129 12:01:28.154211 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-458dl_kube-system(d21b1dc6-9243-4dbd-93f7-df19f07c67ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-458dl_kube-system(d21b1dc6-9243-4dbd-93f7-df19f07c67ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-458dl" podUID="d21b1dc6-9243-4dbd-93f7-df19f07c67ec" Jan 29 12:01:28.154996 containerd[1565]: time="2025-01-29T12:01:28.154968636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r7qfb,Uid:30517739-6e65-4d78-b6f3-895a92816d56,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:28.155127 kubelet[2707]: 
E0129 12:01:28.155106 2707 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 29 12:01:28.155200 kubelet[2707]: E0129 12:01:28.155136 2707 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r7qfb" Jan 29 12:01:28.155200 kubelet[2707]: E0129 12:01:28.155182 2707 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-r7qfb" Jan 29 12:01:28.155267 kubelet[2707]: E0129 12:01:28.155209 2707 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-r7qfb_kube-system(30517739-6e65-4d78-b6f3-895a92816d56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-r7qfb_kube-system(30517739-6e65-4d78-b6f3-895a92816d56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-r7qfb" podUID="30517739-6e65-4d78-b6f3-895a92816d56" Jan 29 
12:01:28.304628 kubelet[2707]: E0129 12:01:28.304579 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:28.307265 containerd[1565]: time="2025-01-29T12:01:28.307140225Z" level=info msg="CreateContainer within sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 29 12:01:28.321591 containerd[1565]: time="2025-01-29T12:01:28.321528165Z" level=info msg="CreateContainer within sandbox \"932d30def81f366c77afcd723328dd6b93947ec84a55ea1931e76b2099335c7a\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"c35d21322eb7046f7f4251a7eb178775f5ff463e4089a57cf5709e168bcef95f\"" Jan 29 12:01:28.322109 containerd[1565]: time="2025-01-29T12:01:28.322082437Z" level=info msg="StartContainer for \"c35d21322eb7046f7f4251a7eb178775f5ff463e4089a57cf5709e168bcef95f\"" Jan 29 12:01:28.380204 containerd[1565]: time="2025-01-29T12:01:28.379852240Z" level=info msg="StartContainer for \"c35d21322eb7046f7f4251a7eb178775f5ff463e4089a57cf5709e168bcef95f\" returns successfully" Jan 29 12:01:28.895454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5f1ce4767283de04c93758b2da33ec680de8623b5ca60bf6b0474d846997a73-shm.mount: Deactivated successfully. Jan 29 12:01:28.895687 systemd[1]: run-netns-cni\x2d57f69e3d\x2d333a\x2dbc77\x2d4b37\x2d75e2d748ef9e.mount: Deactivated successfully. Jan 29 12:01:28.895878 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a98f939dcd0a38ec8c0739870f7909c3a7e8f589b0f1842753db97af4f47d67-shm.mount: Deactivated successfully. 
Jan 29 12:01:29.309011 kubelet[2707]: E0129 12:01:29.308868 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:29.422685 systemd-networkd[1242]: flannel.1: Link UP Jan 29 12:01:29.422693 systemd-networkd[1242]: flannel.1: Gained carrier Jan 29 12:01:30.309945 kubelet[2707]: E0129 12:01:30.309901 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:31.123353 systemd-networkd[1242]: flannel.1: Gained IPv6LL Jan 29 12:01:31.696519 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:33154.service - OpenSSH per-connection server daemon (10.0.0.1:33154). Jan 29 12:01:31.728242 sshd[3359]: Accepted publickey for core from 10.0.0.1 port 33154 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:31.730235 sshd[3359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:31.734592 systemd-logind[1551]: New session 6 of user core. Jan 29 12:01:31.745418 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:01:31.866220 sshd[3359]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:31.870765 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:33154.service: Deactivated successfully. Jan 29 12:01:31.873067 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:01:31.873132 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:01:31.874298 systemd-logind[1551]: Removed session 6. Jan 29 12:01:36.881409 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:33166.service - OpenSSH per-connection server daemon (10.0.0.1:33166). 
Jan 29 12:01:36.909915 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 33166 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:36.911621 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:36.915484 systemd-logind[1551]: New session 7 of user core. Jan 29 12:01:36.924466 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:01:37.032002 sshd[3396]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:37.036732 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:33166.service: Deactivated successfully. Jan 29 12:01:37.039293 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:01:37.039408 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:01:37.040628 systemd-logind[1551]: Removed session 7. Jan 29 12:01:41.254134 kubelet[2707]: E0129 12:01:41.254100 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:41.254557 kubelet[2707]: E0129 12:01:41.254286 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:41.254619 containerd[1565]: time="2025-01-29T12:01:41.254489589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-458dl,Uid:d21b1dc6-9243-4dbd-93f7-df19f07c67ec,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:41.254936 containerd[1565]: time="2025-01-29T12:01:41.254884949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r7qfb,Uid:30517739-6e65-4d78-b6f3-895a92816d56,Namespace:kube-system,Attempt:0,}" Jan 29 12:01:41.413058 systemd-networkd[1242]: cni0: Link UP Jan 29 12:01:41.413071 systemd-networkd[1242]: cni0: Gained carrier Jan 29 12:01:41.419731 systemd-networkd[1242]: cni0: Lost carrier 
Jan 29 12:01:41.419980 systemd-networkd[1242]: vethf74e7393: Link UP Jan 29 12:01:41.423451 kernel: cni0: port 1(vethf74e7393) entered blocking state Jan 29 12:01:41.423503 kernel: cni0: port 1(vethf74e7393) entered disabled state Jan 29 12:01:41.423518 kernel: vethf74e7393: entered allmulticast mode Jan 29 12:01:41.423546 kernel: vethf74e7393: entered promiscuous mode Jan 29 12:01:41.426288 kernel: cni0: port 2(veth185ed8d2) entered blocking state Jan 29 12:01:41.426331 kernel: cni0: port 2(veth185ed8d2) entered disabled state Jan 29 12:01:41.425117 systemd-networkd[1242]: veth185ed8d2: Link UP Jan 29 12:01:41.427198 kernel: veth185ed8d2: entered allmulticast mode Jan 29 12:01:41.428878 kernel: veth185ed8d2: entered promiscuous mode Jan 29 12:01:41.428943 kernel: cni0: port 2(veth185ed8d2) entered blocking state Jan 29 12:01:41.428958 kernel: cni0: port 2(veth185ed8d2) entered forwarding state Jan 29 12:01:41.430226 kernel: cni0: port 2(veth185ed8d2) entered disabled state Jan 29 12:01:41.434702 kernel: cni0: port 1(vethf74e7393) entered blocking state Jan 29 12:01:41.434777 kernel: cni0: port 1(vethf74e7393) entered forwarding state Jan 29 12:01:41.435989 systemd-networkd[1242]: vethf74e7393: Gained carrier Jan 29 12:01:41.440213 systemd-networkd[1242]: cni0: Gained carrier Jan 29 12:01:41.443267 containerd[1565]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 29 12:01:41.443267 containerd[1565]: delegateAdd: netconf sent to delegate plugin: Jan 29 12:01:41.450696 kernel: cni0: port 2(veth185ed8d2) entered blocking 
state Jan 29 12:01:41.451307 kernel: cni0: port 2(veth185ed8d2) entered forwarding state Jan 29 12:01:41.451001 systemd-networkd[1242]: veth185ed8d2: Gained carrier Jan 29 12:01:41.452592 containerd[1565]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 29 12:01:41.452592 containerd[1565]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Jan 29 12:01:41.452592 containerd[1565]: delegateAdd: netconf sent to delegate plugin: Jan 29 12:01:41.472445 containerd[1565]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-29T12:01:41.472101302Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:41.472445 containerd[1565]: time="2025-01-29T12:01:41.472227497Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:41.472445 containerd[1565]: time="2025-01-29T12:01:41.472243930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:41.472445 containerd[1565]: time="2025-01-29T12:01:41.472350508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:41.472997 containerd[1565]: time="2025-01-29T12:01:41.472723155Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:01:41.472997 containerd[1565]: time="2025-01-29T12:01:41.472799202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:01:41.472997 containerd[1565]: time="2025-01-29T12:01:41.472814432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:41.472997 containerd[1565]: time="2025-01-29T12:01:41.472928354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:01:41.502837 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:01:41.503697 systemd-resolved[1459]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 12:01:41.533870 containerd[1565]: time="2025-01-29T12:01:41.533808077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-r7qfb,Uid:30517739-6e65-4d78-b6f3-895a92816d56,Namespace:kube-system,Attempt:0,} returns sandbox id \"ceadad3daff256481c622efe0aecc57a8a9ff6cce59bdd6a43276bd4689347d2\"" Jan 29 12:01:41.535379 kubelet[2707]: E0129 12:01:41.535287 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:41.537931 containerd[1565]: time="2025-01-29T12:01:41.537898694Z" level=info msg="CreateContainer within sandbox \"ceadad3daff256481c622efe0aecc57a8a9ff6cce59bdd6a43276bd4689347d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:01:41.538393 containerd[1565]: time="2025-01-29T12:01:41.538300267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-458dl,Uid:d21b1dc6-9243-4dbd-93f7-df19f07c67ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"55bc11fc727df22507ceef811a2bccca9b5910161d5994e241583487a215b037\"" Jan 29 12:01:41.539054 kubelet[2707]: E0129 12:01:41.539010 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:41.540781 containerd[1565]: time="2025-01-29T12:01:41.540739262Z" level=info msg="CreateContainer within sandbox \"55bc11fc727df22507ceef811a2bccca9b5910161d5994e241583487a215b037\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 
12:01:41.580478 containerd[1565]: time="2025-01-29T12:01:41.580417624Z" level=info msg="CreateContainer within sandbox \"ceadad3daff256481c622efe0aecc57a8a9ff6cce59bdd6a43276bd4689347d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"431645517a43d41ab3110c8c386b341a4a74ea39d5eec788ecf32805ed290b3c\"" Jan 29 12:01:41.581153 containerd[1565]: time="2025-01-29T12:01:41.581095886Z" level=info msg="StartContainer for \"431645517a43d41ab3110c8c386b341a4a74ea39d5eec788ecf32805ed290b3c\"" Jan 29 12:01:41.588480 containerd[1565]: time="2025-01-29T12:01:41.588430950Z" level=info msg="CreateContainer within sandbox \"55bc11fc727df22507ceef811a2bccca9b5910161d5994e241583487a215b037\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6783ed5f617be68f613be019d5e8bf16f4161ab8f7d639a8bcc159ea134379a3\"" Jan 29 12:01:41.589201 containerd[1565]: time="2025-01-29T12:01:41.589132578Z" level=info msg="StartContainer for \"6783ed5f617be68f613be019d5e8bf16f4161ab8f7d639a8bcc159ea134379a3\"" Jan 29 12:01:41.647577 containerd[1565]: time="2025-01-29T12:01:41.647444454Z" level=info msg="StartContainer for \"6783ed5f617be68f613be019d5e8bf16f4161ab8f7d639a8bcc159ea134379a3\" returns successfully" Jan 29 12:01:41.654410 containerd[1565]: time="2025-01-29T12:01:41.654358776Z" level=info msg="StartContainer for \"431645517a43d41ab3110c8c386b341a4a74ea39d5eec788ecf32805ed290b3c\" returns successfully" Jan 29 12:01:42.042388 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:49642.service - OpenSSH per-connection server daemon (10.0.0.1:49642). Jan 29 12:01:42.073237 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 49642 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:42.074986 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:42.078864 systemd-logind[1551]: New session 8 of user core. 
Jan 29 12:01:42.086419 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:01:42.189226 sshd[3661]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:42.197390 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:49652.service - OpenSSH per-connection server daemon (10.0.0.1:49652). Jan 29 12:01:42.198307 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:49642.service: Deactivated successfully. Jan 29 12:01:42.201619 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:01:42.202668 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:01:42.203602 systemd-logind[1551]: Removed session 8. Jan 29 12:01:42.225299 sshd[3674]: Accepted publickey for core from 10.0.0.1 port 49652 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:42.226762 sshd[3674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:42.230611 systemd-logind[1551]: New session 9 of user core. Jan 29 12:01:42.239424 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 29 12:01:42.330081 kubelet[2707]: E0129 12:01:42.329360 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:42.332659 kubelet[2707]: E0129 12:01:42.332455 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:42.346074 kubelet[2707]: I0129 12:01:42.344344 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-r7qfb" podStartSLOduration=20.344324093 podStartE2EDuration="20.344324093s" podCreationTimestamp="2025-01-29 12:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:01:42.343567359 +0000 UTC m=+36.180541578" watchObservedRunningTime="2025-01-29 12:01:42.344324093 +0000 UTC m=+36.181298312" Jan 29 12:01:42.346074 kubelet[2707]: I0129 12:01:42.344675 2707 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-nx4rj" podStartSLOduration=16.498748005 podStartE2EDuration="21.344666961s" podCreationTimestamp="2025-01-29 12:01:21 +0000 UTC" firstStartedPulling="2025-01-29 12:01:22.6329156 +0000 UTC m=+16.469889819" lastFinishedPulling="2025-01-29 12:01:27.478834556 +0000 UTC m=+21.315808775" observedRunningTime="2025-01-29 12:01:29.323082521 +0000 UTC m=+23.160056740" watchObservedRunningTime="2025-01-29 12:01:42.344666961 +0000 UTC m=+36.181641180" Jan 29 12:01:42.386657 sshd[3674]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:42.396460 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:49664.service - OpenSSH per-connection server daemon (10.0.0.1:49664). Jan 29 12:01:42.404532 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:49652.service: Deactivated successfully. 
Jan 29 12:01:42.408877 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:01:42.413943 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:01:42.416792 systemd-logind[1551]: Removed session 9. Jan 29 12:01:42.452399 sshd[3695]: Accepted publickey for core from 10.0.0.1 port 49664 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:42.454096 sshd[3695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:42.458434 systemd-logind[1551]: New session 10 of user core. Jan 29 12:01:42.465414 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:01:42.569906 sshd[3695]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:42.574095 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:49664.service: Deactivated successfully. Jan 29 12:01:42.576676 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:01:42.577382 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:01:42.578250 systemd-logind[1551]: Removed session 10. 
Jan 29 12:01:42.835316 systemd-networkd[1242]: vethf74e7393: Gained IPv6LL Jan 29 12:01:43.334493 kubelet[2707]: E0129 12:01:43.334466 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:43.335015 kubelet[2707]: E0129 12:01:43.334667 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:43.411320 systemd-networkd[1242]: veth185ed8d2: Gained IPv6LL Jan 29 12:01:43.411812 systemd-networkd[1242]: cni0: Gained IPv6LL Jan 29 12:01:44.335881 kubelet[2707]: E0129 12:01:44.335835 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:44.335881 kubelet[2707]: E0129 12:01:44.335889 2707 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 29 12:01:47.584366 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:49680.service - OpenSSH per-connection server daemon (10.0.0.1:49680). Jan 29 12:01:47.611697 sshd[3735]: Accepted publickey for core from 10.0.0.1 port 49680 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:47.613294 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:47.616991 systemd-logind[1551]: New session 11 of user core. Jan 29 12:01:47.626409 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 12:01:47.731976 sshd[3735]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:47.736320 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:49680.service: Deactivated successfully. Jan 29 12:01:47.738739 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit.
Jan 29 12:01:47.739999 systemd-logind[1551]: Removed session 11. Jan 29 12:01:52.742378 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Jan 29 12:01:52.769722 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:52.771352 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:52.775481 systemd-logind[1551]: New session 12 of user core. Jan 29 12:01:52.784496 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:01:52.887108 sshd[3771]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:52.891280 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:41450.service: Deactivated successfully. Jan 29 12:01:52.893884 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:01:52.894614 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:01:52.895538 systemd-logind[1551]: Removed session 12. Jan 29 12:01:57.903389 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:41460.service - OpenSSH per-connection server daemon (10.0.0.1:41460). Jan 29 12:01:57.935378 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 41460 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:01:57.937461 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:01:57.941954 systemd-logind[1551]: New session 13 of user core. Jan 29 12:01:57.955438 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:01:58.057654 sshd[3809]: pam_unix(sshd:session): session closed for user core Jan 29 12:01:58.061131 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:41460.service: Deactivated successfully.
Jan 29 12:01:58.063193 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:01:58.063276 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:01:58.064269 systemd-logind[1551]: Removed session 13. Jan 29 12:02:03.072375 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:35248.service - OpenSSH per-connection server daemon (10.0.0.1:35248). Jan 29 12:02:03.099333 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 35248 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:03.100717 sshd[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:03.104377 systemd-logind[1551]: New session 14 of user core. Jan 29 12:02:03.114405 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:02:03.218407 sshd[3845]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:03.227375 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:35258.service - OpenSSH per-connection server daemon (10.0.0.1:35258). Jan 29 12:02:03.227960 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:35248.service: Deactivated successfully. Jan 29 12:02:03.230588 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 12:02:03.232230 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:02:03.233234 systemd-logind[1551]: Removed session 14. Jan 29 12:02:03.254718 sshd[3857]: Accepted publickey for core from 10.0.0.1 port 35258 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:03.256190 sshd[3857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:03.260393 systemd-logind[1551]: New session 15 of user core. Jan 29 12:02:03.268522 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 12:02:03.428609 sshd[3857]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:03.436499 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:35270.service - OpenSSH per-connection server daemon (10.0.0.1:35270). Jan 29 12:02:03.437139 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:35258.service: Deactivated successfully. Jan 29 12:02:03.439593 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:02:03.441224 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:02:03.442359 systemd-logind[1551]: Removed session 15. Jan 29 12:02:03.465939 sshd[3870]: Accepted publickey for core from 10.0.0.1 port 35270 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:03.467607 sshd[3870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:03.472020 systemd-logind[1551]: New session 16 of user core. Jan 29 12:02:03.479420 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:02:04.788507 sshd[3870]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:04.804443 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:35286.service - OpenSSH per-connection server daemon (10.0.0.1:35286). Jan 29 12:02:04.805357 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:35270.service: Deactivated successfully. Jan 29 12:02:04.807593 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:02:04.809660 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:02:04.810873 systemd-logind[1551]: Removed session 16. Jan 29 12:02:04.832054 sshd[3910]: Accepted publickey for core from 10.0.0.1 port 35286 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:04.833852 sshd[3910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:04.838044 systemd-logind[1551]: New session 17 of user core. 
Jan 29 12:02:04.848446 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:02:05.184272 sshd[3910]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:05.190380 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:35300.service - OpenSSH per-connection server daemon (10.0.0.1:35300). Jan 29 12:02:05.190880 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:35286.service: Deactivated successfully. Jan 29 12:02:05.195516 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:02:05.195572 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:02:05.197049 systemd-logind[1551]: Removed session 17. Jan 29 12:02:05.218851 sshd[3926]: Accepted publickey for core from 10.0.0.1 port 35300 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:05.220634 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:05.225329 systemd-logind[1551]: New session 18 of user core. Jan 29 12:02:05.235478 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:02:05.350959 sshd[3926]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:05.355714 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:35300.service: Deactivated successfully. Jan 29 12:02:05.358335 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:02:05.358373 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:02:05.359772 systemd-logind[1551]: Removed session 18. Jan 29 12:02:10.370490 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:35312.service - OpenSSH per-connection server daemon (10.0.0.1:35312). 
Jan 29 12:02:10.398256 sshd[3967]: Accepted publickey for core from 10.0.0.1 port 35312 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:10.399921 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:10.404154 systemd-logind[1551]: New session 19 of user core. Jan 29 12:02:10.411447 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:02:10.512106 sshd[3967]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:10.516617 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:35312.service: Deactivated successfully. Jan 29 12:02:10.518834 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:02:10.518877 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:02:10.519896 systemd-logind[1551]: Removed session 19. Jan 29 12:02:15.520463 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:51188.service - OpenSSH per-connection server daemon (10.0.0.1:51188). Jan 29 12:02:15.548269 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 51188 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:15.549829 sshd[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:15.553802 systemd-logind[1551]: New session 20 of user core. Jan 29 12:02:15.564448 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:02:15.677539 sshd[4006]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:15.681721 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:51188.service: Deactivated successfully. Jan 29 12:02:15.684077 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:02:15.684263 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:02:15.685282 systemd-logind[1551]: Removed session 20. 
Jan 29 12:02:20.689408 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:51190.service - OpenSSH per-connection server daemon (10.0.0.1:51190). Jan 29 12:02:20.717464 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 51190 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:20.718958 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:20.722802 systemd-logind[1551]: New session 21 of user core. Jan 29 12:02:20.729423 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 12:02:20.826084 sshd[4043]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:20.829776 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:51190.service: Deactivated successfully. Jan 29 12:02:20.832281 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Jan 29 12:02:20.832366 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 12:02:20.833536 systemd-logind[1551]: Removed session 21. Jan 29 12:02:25.837427 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:38472.service - OpenSSH per-connection server daemon (10.0.0.1:38472). Jan 29 12:02:25.864946 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 38472 ssh2: RSA SHA256:cvfFoES5BDjlDoexsEK91Vm+2p49HiPi8UmWm2d9zy0 Jan 29 12:02:25.866602 sshd[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:25.870725 systemd-logind[1551]: New session 22 of user core. Jan 29 12:02:25.881465 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 12:02:25.981378 sshd[4081]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:25.985850 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:38472.service: Deactivated successfully. Jan 29 12:02:25.988764 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 12:02:25.989612 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. 
Jan 29 12:02:25.990849 systemd-logind[1551]: Removed session 22.