Jan 13 21:27:18.867393 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 13 21:27:18.867414 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:27:18.867425 kernel: BIOS-provided physical RAM map:
Jan 13 21:27:18.867431 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 13 21:27:18.867437 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 13 21:27:18.867443 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 13 21:27:18.867451 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 13 21:27:18.867457 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 13 21:27:18.867463 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 13 21:27:18.867471 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 13 21:27:18.867477 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 13 21:27:18.867484 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 13 21:27:18.867490 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 13 21:27:18.867498 kernel: NX (Execute Disable) protection: active
Jan 13 21:27:18.867508 kernel: APIC: Static calls initialized
Jan 13 21:27:18.867517 kernel: SMBIOS 2.8 present.
Jan 13 21:27:18.867524 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 13 21:27:18.867531 kernel: Hypervisor detected: KVM
Jan 13 21:27:18.867537 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 13 21:27:18.867544 kernel: kvm-clock: using sched offset of 2245209739 cycles
Jan 13 21:27:18.867551 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 13 21:27:18.867558 kernel: tsc: Detected 2794.748 MHz processor
Jan 13 21:27:18.867565 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 13 21:27:18.867572 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 13 21:27:18.867579 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 13 21:27:18.867588 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 13 21:27:18.867595 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 13 21:27:18.867602 kernel: Using GB pages for direct mapping
Jan 13 21:27:18.867608 kernel: ACPI: Early table checksum verification disabled
Jan 13 21:27:18.867615 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 13 21:27:18.867631 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867638 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867645 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867654 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 13 21:27:18.867661 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867667 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867674 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867681 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 13 21:27:18.867688 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 13 21:27:18.867695 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 13 21:27:18.867705 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 13 21:27:18.867714 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 13 21:27:18.867721 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 13 21:27:18.867797 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 13 21:27:18.867805 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 13 21:27:18.867812 kernel: No NUMA configuration found
Jan 13 21:27:18.867819 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 13 21:27:18.867826 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 13 21:27:18.867836 kernel: Zone ranges:
Jan 13 21:27:18.867843 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 13 21:27:18.867850 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 13 21:27:18.867857 kernel: Normal empty
Jan 13 21:27:18.867864 kernel: Movable zone start for each node
Jan 13 21:27:18.867871 kernel: Early memory node ranges
Jan 13 21:27:18.867878 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 13 21:27:18.867886 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 13 21:27:18.867896 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 13 21:27:18.867907 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 13 21:27:18.867914 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 13 21:27:18.867922 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 13 21:27:18.867929 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 13 21:27:18.867938 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 13 21:27:18.867945 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 13 21:27:18.867954 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 13 21:27:18.867962 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 13 21:27:18.867969 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 13 21:27:18.867978 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 13 21:27:18.867985 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 13 21:27:18.867992 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 13 21:27:18.868000 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 13 21:27:18.868007 kernel: TSC deadline timer available
Jan 13 21:27:18.868014 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 13 21:27:18.868021 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 13 21:27:18.868028 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 13 21:27:18.868035 kernel: kvm-guest: setup PV sched yield
Jan 13 21:27:18.868042 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 13 21:27:18.868051 kernel: Booting paravirtualized kernel on KVM
Jan 13 21:27:18.868058 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 13 21:27:18.868066 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 13 21:27:18.868073 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 13 21:27:18.868080 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 13 21:27:18.868087 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 13 21:27:18.868093 kernel: kvm-guest: PV spinlocks enabled
Jan 13 21:27:18.868101 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 13 21:27:18.868109 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 13 21:27:18.868119 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 21:27:18.868126 kernel: random: crng init done
Jan 13 21:27:18.868133 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 21:27:18.868140 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 21:27:18.868147 kernel: Fallback order for Node 0: 0
Jan 13 21:27:18.868154 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 13 21:27:18.868161 kernel: Policy zone: DMA32
Jan 13 21:27:18.868168 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 21:27:18.868178 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 13 21:27:18.868185 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 21:27:18.868192 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 13 21:27:18.868199 kernel: ftrace: allocated 149 pages with 4 groups
Jan 13 21:27:18.868206 kernel: Dynamic Preempt: voluntary
Jan 13 21:27:18.868213 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 21:27:18.868221 kernel: rcu: RCU event tracing is enabled.
Jan 13 21:27:18.868229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 21:27:18.868236 kernel: Trampoline variant of Tasks RCU enabled.
Jan 13 21:27:18.868245 kernel: Rude variant of Tasks RCU enabled.
Jan 13 21:27:18.868252 kernel: Tracing variant of Tasks RCU enabled.
Jan 13 21:27:18.868259 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 21:27:18.868267 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 21:27:18.868274 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 13 21:27:18.868281 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 21:27:18.868288 kernel: Console: colour VGA+ 80x25
Jan 13 21:27:18.868295 kernel: printk: console [ttyS0] enabled
Jan 13 21:27:18.868302 kernel: ACPI: Core revision 20230628
Jan 13 21:27:18.868311 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 13 21:27:18.868318 kernel: APIC: Switch to symmetric I/O mode setup
Jan 13 21:27:18.868325 kernel: x2apic enabled
Jan 13 21:27:18.868332 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 13 21:27:18.868340 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 13 21:27:18.868347 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 13 21:27:18.868354 kernel: kvm-guest: setup PV IPIs
Jan 13 21:27:18.868370 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 13 21:27:18.868378 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 13 21:27:18.868385 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 13 21:27:18.868392 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 13 21:27:18.868400 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 13 21:27:18.868409 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 13 21:27:18.868417 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 13 21:27:18.868424 kernel: Spectre V2 : Mitigation: Retpolines
Jan 13 21:27:18.868432 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 13 21:27:18.868439 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 13 21:27:18.868449 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 13 21:27:18.868456 kernel: RETBleed: Mitigation: untrained return thunk
Jan 13 21:27:18.868464 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 13 21:27:18.868471 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 13 21:27:18.868479 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 13 21:27:18.868487 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 13 21:27:18.868494 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 13 21:27:18.868502 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 13 21:27:18.868511 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 13 21:27:18.868519 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 13 21:27:18.868526 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 13 21:27:18.868534 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 13 21:27:18.868544 kernel: Freeing SMP alternatives memory: 32K
Jan 13 21:27:18.868553 kernel: pid_max: default: 32768 minimum: 301
Jan 13 21:27:18.868560 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 21:27:18.868567 kernel: landlock: Up and running.
Jan 13 21:27:18.868575 kernel: SELinux: Initializing.
Jan 13 21:27:18.868584 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:27:18.868592 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 21:27:18.868599 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 13 21:27:18.868607 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:27:18.868614 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:27:18.868629 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 21:27:18.868636 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 13 21:27:18.868644 kernel: ... version: 0
Jan 13 21:27:18.868651 kernel: ... bit width: 48
Jan 13 21:27:18.868660 kernel: ... generic registers: 6
Jan 13 21:27:18.868668 kernel: ... value mask: 0000ffffffffffff
Jan 13 21:27:18.868675 kernel: ... max period: 00007fffffffffff
Jan 13 21:27:18.868683 kernel: ... fixed-purpose events: 0
Jan 13 21:27:18.868690 kernel: ... event mask: 000000000000003f
Jan 13 21:27:18.868697 kernel: signal: max sigframe size: 1776
Jan 13 21:27:18.868705 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 21:27:18.868712 kernel: rcu: Max phase no-delay instances is 400.
Jan 13 21:27:18.868719 kernel: smp: Bringing up secondary CPUs ...
Jan 13 21:27:18.868740 kernel: smpboot: x86: Booting SMP configuration:
Jan 13 21:27:18.868747 kernel: .... node #0, CPUs: #1 #2 #3
Jan 13 21:27:18.868754 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 21:27:18.868762 kernel: smpboot: Max logical packages: 1
Jan 13 21:27:18.868769 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 13 21:27:18.868776 kernel: devtmpfs: initialized
Jan 13 21:27:18.868784 kernel: x86/mm: Memory block size: 128MB
Jan 13 21:27:18.868791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 21:27:18.868799 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 21:27:18.868808 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 21:27:18.868816 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 21:27:18.868823 kernel: audit: initializing netlink subsys (disabled)
Jan 13 21:27:18.868830 kernel: audit: type=2000 audit(1736803639.241:1): state=initialized audit_enabled=0 res=1
Jan 13 21:27:18.868838 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 21:27:18.868845 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 13 21:27:18.868852 kernel: cpuidle: using governor menu
Jan 13 21:27:18.868860 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 21:27:18.868867 kernel: dca service started, version 1.12.1
Jan 13 21:27:18.868877 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 13 21:27:18.868884 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 13 21:27:18.868892 kernel: PCI: Using configuration type 1 for base access
Jan 13 21:27:18.868899 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 13 21:27:18.868907 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 21:27:18.868914 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 21:27:18.868922 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 21:27:18.868930 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 21:27:18.868939 kernel: ACPI: Added _OSI(Module Device)
Jan 13 21:27:18.868949 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 21:27:18.868958 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 21:27:18.868966 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 21:27:18.868973 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 21:27:18.868980 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 13 21:27:18.868987 kernel: ACPI: Interpreter enabled
Jan 13 21:27:18.868995 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 13 21:27:18.869004 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 13 21:27:18.869014 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 13 21:27:18.869024 kernel: PCI: Using E820 reservations for host bridge windows
Jan 13 21:27:18.869032 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 13 21:27:18.869039 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 21:27:18.869219 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 21:27:18.869347 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 13 21:27:18.869467 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 13 21:27:18.869477 kernel: PCI host bridge to bus 0000:00
Jan 13 21:27:18.869613 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 13 21:27:18.869760 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 13 21:27:18.869876 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 13 21:27:18.869986 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 13 21:27:18.870102 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 13 21:27:18.870213 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 13 21:27:18.870321 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 21:27:18.870468 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 13 21:27:18.870600 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 13 21:27:18.870749 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 13 21:27:18.870871 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 13 21:27:18.870994 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 13 21:27:18.871114 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 13 21:27:18.871249 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 21:27:18.871381 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 13 21:27:18.871504 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 13 21:27:18.871638 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 13 21:27:18.871783 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 13 21:27:18.871903 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 13 21:27:18.872027 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 13 21:27:18.872145 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 13 21:27:18.872285 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 13 21:27:18.872406 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 13 21:27:18.872524 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 13 21:27:18.872652 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 13 21:27:18.872884 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 13 21:27:18.873036 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 13 21:27:18.873161 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 13 21:27:18.873299 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 13 21:27:18.873420 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 13 21:27:18.873538 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 13 21:27:18.873678 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 13 21:27:18.873826 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 13 21:27:18.873837 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 13 21:27:18.873849 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 13 21:27:18.873856 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 13 21:27:18.873864 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 13 21:27:18.873872 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 13 21:27:18.873879 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 13 21:27:18.873887 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 13 21:27:18.873894 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 13 21:27:18.873902 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 13 21:27:18.873909 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 13 21:27:18.873919 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 13 21:27:18.873927 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 13 21:27:18.873934 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 13 21:27:18.873942 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 13 21:27:18.873949 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 13 21:27:18.873957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 13 21:27:18.873964 kernel: iommu: Default domain type: Translated
Jan 13 21:27:18.873971 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 13 21:27:18.873979 kernel: PCI: Using ACPI for IRQ routing
Jan 13 21:27:18.873988 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 13 21:27:18.873996 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 13 21:27:18.874004 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 13 21:27:18.874128 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 13 21:27:18.874246 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 13 21:27:18.874371 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 13 21:27:18.874382 kernel: vgaarb: loaded
Jan 13 21:27:18.874390 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 13 21:27:18.874401 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 13 21:27:18.874409 kernel: clocksource: Switched to clocksource kvm-clock
Jan 13 21:27:18.874416 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 21:27:18.874424 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 21:27:18.874433 kernel: pnp: PnP ACPI init
Jan 13 21:27:18.874588 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 13 21:27:18.874608 kernel: pnp: PnP ACPI: found 6 devices
Jan 13 21:27:18.874630 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 13 21:27:18.874648 kernel: NET: Registered PF_INET protocol family
Jan 13 21:27:18.874662 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 21:27:18.874676 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 21:27:18.874690 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 21:27:18.874701 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 21:27:18.874715 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 21:27:18.874741 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 21:27:18.874755 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:27:18.874766 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 21:27:18.874777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 21:27:18.874784 kernel: NET: Registered PF_XDP protocol family
Jan 13 21:27:18.874924 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 13 21:27:18.875123 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 13 21:27:18.875255 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 13 21:27:18.875367 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 13 21:27:18.875484 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 13 21:27:18.875593 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 13 21:27:18.875607 kernel: PCI: CLS 0 bytes, default 64
Jan 13 21:27:18.875615 kernel: Initialise system trusted keyrings
Jan 13 21:27:18.875632 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 21:27:18.875640 kernel: Key type asymmetric registered
Jan 13 21:27:18.875648 kernel: Asymmetric key parser 'x509' registered
Jan 13 21:27:18.875655 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 13 21:27:18.875663 kernel: io scheduler mq-deadline registered
Jan 13 21:27:18.875670 kernel: io scheduler kyber registered
Jan 13 21:27:18.875678 kernel: io scheduler bfq registered
Jan 13 21:27:18.875685 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 13 21:27:18.875696 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 13 21:27:18.875703 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 13 21:27:18.875711 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 13 21:27:18.875718 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 21:27:18.875726 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 13 21:27:18.875746 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 13 21:27:18.875754 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 13 21:27:18.875761 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 13 21:27:18.875892 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 13 21:27:18.875907 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 13 21:27:18.876020 kernel: rtc_cmos 00:04: registered as rtc0
Jan 13 21:27:18.876132 kernel: rtc_cmos 00:04: setting system clock to 2025-01-13T21:27:18 UTC (1736803638)
Jan 13 21:27:18.876243 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 13 21:27:18.876253 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 13 21:27:18.876261 kernel: NET: Registered PF_INET6 protocol family
Jan 13 21:27:18.876268 kernel: Segment Routing with IPv6
Jan 13 21:27:18.876279 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 21:27:18.876287 kernel: NET: Registered PF_PACKET protocol family
Jan 13 21:27:18.876294 kernel: Key type dns_resolver registered
Jan 13 21:27:18.876302 kernel: IPI shorthand broadcast: enabled
Jan 13 21:27:18.876309 kernel: sched_clock: Marking stable (578003526, 104931386)->(733960558, -51025646)
Jan 13 21:27:18.876317 kernel: registered taskstats version 1
Jan 13 21:27:18.876324 kernel: Loading compiled-in X.509 certificates
Jan 13 21:27:18.876332 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 13 21:27:18.876339 kernel: Key type .fscrypt registered
Jan 13 21:27:18.876347 kernel: Key type fscrypt-provisioning registered
Jan 13 21:27:18.876357 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 21:27:18.876364 kernel: ima: Allocated hash algorithm: sha1
Jan 13 21:27:18.876371 kernel: ima: No architecture policies found
Jan 13 21:27:18.876379 kernel: clk: Disabling unused clocks
Jan 13 21:27:18.876387 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 13 21:27:18.876394 kernel: Write protecting the kernel read-only data: 36864k
Jan 13 21:27:18.876402 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 13 21:27:18.876409 kernel: Run /init as init process
Jan 13 21:27:18.876419 kernel: with arguments:
Jan 13 21:27:18.876426 kernel: /init
Jan 13 21:27:18.876433 kernel: with environment:
Jan 13 21:27:18.876441 kernel: HOME=/
Jan 13 21:27:18.876451 kernel: TERM=linux
Jan 13 21:27:18.876460 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 21:27:18.876470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:27:18.876480 systemd[1]: Detected virtualization kvm.
Jan 13 21:27:18.876491 systemd[1]: Detected architecture x86-64.
Jan 13 21:27:18.876499 systemd[1]: Running in initrd.
Jan 13 21:27:18.876507 systemd[1]: No hostname configured, using default hostname.
Jan 13 21:27:18.876514 systemd[1]: Hostname set to .
Jan 13 21:27:18.876523 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:27:18.876531 systemd[1]: Queued start job for default target initrd.target.
Jan 13 21:27:18.876539 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:27:18.876547 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:27:18.876558 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 21:27:18.876577 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:27:18.876588 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 21:27:18.876596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 21:27:18.876606 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 21:27:18.876617 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 21:27:18.876633 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:27:18.876642 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:27:18.876650 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:27:18.876658 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:27:18.876667 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:27:18.876675 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:27:18.876683 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:27:18.876694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:27:18.876702 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:27:18.876711 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:27:18.876719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:27:18.876727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:27:18.876844 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:27:18.876871 systemd[1]: Reached target sockets.target - Socket Units. Jan 13 21:27:18.876888 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 13 21:27:18.876897 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 13 21:27:18.876920 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 13 21:27:18.876932 systemd[1]: Starting systemd-fsck-usr.service... Jan 13 21:27:18.876942 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 13 21:27:18.876953 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 13 21:27:18.876964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:27:18.876975 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 13 21:27:18.876985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 13 21:27:18.876996 systemd[1]: Finished systemd-fsck-usr.service. Jan 13 21:27:18.877042 systemd-journald[193]: Collecting audit messages is disabled. Jan 13 21:27:18.877077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 13 21:27:18.877092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 13 21:27:18.877103 systemd-journald[193]: Journal started Jan 13 21:27:18.877130 systemd-journald[193]: Runtime Journal (/run/log/journal/5ff975d66f964912b61c7b4f2210e1a6) is 6.0M, max 48.4M, 42.3M free. Jan 13 21:27:18.875242 systemd-modules-load[194]: Inserted module 'overlay' Jan 13 21:27:18.912360 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 13 21:27:18.912381 kernel: Bridge firewalling registered Jan 13 21:27:18.912391 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 13 21:27:18.906852 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 13 21:27:18.914314 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 13 21:27:18.929940 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 13 21:27:18.932808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 13 21:27:18.935793 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 13 21:27:18.938456 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:27:18.941481 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 13 21:27:18.944975 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:27:18.947629 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 13 21:27:18.951885 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 13 21:27:18.956252 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 13 21:27:18.969447 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:27:18.979883 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 13 21:27:18.991248 dracut-cmdline[230]: dracut-dracut-053 Jan 13 21:27:18.994093 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 13 21:27:19.000610 systemd-resolved[220]: Positive Trust Anchors: Jan 13 21:27:19.000640 systemd-resolved[220]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 13 21:27:19.000681 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 13 21:27:19.003593 systemd-resolved[220]: Defaulting to hostname 'linux'. Jan 13 21:27:19.004714 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 13 21:27:19.010391 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 13 21:27:19.077791 kernel: SCSI subsystem initialized Jan 13 21:27:19.086764 kernel: Loading iSCSI transport class v2.0-870. Jan 13 21:27:19.097773 kernel: iscsi: registered transport (tcp) Jan 13 21:27:19.119772 kernel: iscsi: registered transport (qla4xxx) Jan 13 21:27:19.119823 kernel: QLogic iSCSI HBA Driver Jan 13 21:27:19.174449 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 13 21:27:19.180910 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 13 21:27:19.209438 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 13 21:27:19.209529 kernel: device-mapper: uevent: version 1.0.3 Jan 13 21:27:19.209542 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 13 21:27:19.251770 kernel: raid6: avx2x4 gen() 26493 MB/s Jan 13 21:27:19.268762 kernel: raid6: avx2x2 gen() 19476 MB/s Jan 13 21:27:19.285942 kernel: raid6: avx2x1 gen() 18320 MB/s Jan 13 21:27:19.286028 kernel: raid6: using algorithm avx2x4 gen() 26493 MB/s Jan 13 21:27:19.304121 kernel: raid6: .... xor() 5868 MB/s, rmw enabled Jan 13 21:27:19.304160 kernel: raid6: using avx2x2 recovery algorithm Jan 13 21:27:19.325775 kernel: xor: automatically using best checksumming function avx Jan 13 21:27:19.484785 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 13 21:27:19.498099 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 13 21:27:19.507879 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 13 21:27:19.520001 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 13 21:27:19.524656 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 13 21:27:19.534946 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 13 21:27:19.551879 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation Jan 13 21:27:19.589810 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 13 21:27:19.602170 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 13 21:27:19.665964 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 13 21:27:19.671974 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 13 21:27:19.693775 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 13 21:27:19.696423 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jan 13 21:27:19.704641 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 13 21:27:19.748429 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 13 21:27:19.748631 kernel: cryptd: max_cpu_qlen set to 1000 Jan 13 21:27:19.748657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 13 21:27:19.748692 kernel: GPT:9289727 != 19775487 Jan 13 21:27:19.748712 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 13 21:27:19.748747 kernel: GPT:9289727 != 19775487 Jan 13 21:27:19.748768 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 13 21:27:19.748787 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:27:19.699549 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 13 21:27:19.703566 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 13 21:27:19.712220 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 13 21:27:19.744342 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 13 21:27:19.750095 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 13 21:27:19.750274 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:27:19.753219 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:27:19.755804 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 13 21:27:19.755940 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:27:19.758459 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:27:19.764947 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 13 21:27:19.771814 kernel: AVX2 version of gcm_enc/dec engaged. Jan 13 21:27:19.771833 kernel: AES CTR mode by8 optimization enabled Jan 13 21:27:19.776753 kernel: libata version 3.00 loaded. 
Jan 13 21:27:19.790776 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (476) Jan 13 21:27:19.794762 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (461) Jan 13 21:27:19.797984 kernel: ahci 0000:00:1f.2: version 3.0 Jan 13 21:27:19.808232 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 13 21:27:19.808419 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 13 21:27:19.808567 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 13 21:27:19.808710 kernel: scsi host0: ahci Jan 13 21:27:19.808878 kernel: scsi host1: ahci Jan 13 21:27:19.809021 kernel: scsi host2: ahci Jan 13 21:27:19.809160 kernel: scsi host3: ahci Jan 13 21:27:19.809299 kernel: scsi host4: ahci Jan 13 21:27:19.809436 kernel: scsi host5: ahci Jan 13 21:27:19.809621 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 13 21:27:19.809633 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 13 21:27:19.809643 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 13 21:27:19.809653 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 13 21:27:19.809662 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 13 21:27:19.809676 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 13 21:27:19.805280 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 13 21:27:19.833886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 13 21:27:19.841319 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 13 21:27:19.855703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Jan 13 21:27:19.860626 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 13 21:27:19.861979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 13 21:27:19.876843 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 13 21:27:19.878718 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 13 21:27:19.897111 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 13 21:27:20.118331 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 13 21:27:20.118416 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 13 21:27:20.118430 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 13 21:27:20.119968 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 13 21:27:20.120044 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 13 21:27:20.120055 disk-uuid[554]: Primary Header is updated. Jan 13 21:27:20.120055 disk-uuid[554]: Secondary Entries is updated. Jan 13 21:27:20.120055 disk-uuid[554]: Secondary Header is updated. 
Jan 13 21:27:20.129085 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 13 21:27:20.129114 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:27:20.129125 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 13 21:27:20.129135 kernel: ata3.00: applying bridge limits Jan 13 21:27:20.129145 kernel: ata3.00: configured for UDMA/100 Jan 13 21:27:20.129155 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 13 21:27:20.129197 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:27:20.165793 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 13 21:27:20.178854 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 13 21:27:20.178871 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 13 21:27:21.140773 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 13 21:27:21.141184 disk-uuid[564]: The operation has completed successfully. Jan 13 21:27:21.172292 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 13 21:27:21.172434 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 13 21:27:21.201892 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 13 21:27:21.205063 sh[591]: Success Jan 13 21:27:21.217764 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 13 21:27:21.251919 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 13 21:27:21.281277 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 13 21:27:21.286767 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jan 13 21:27:21.296456 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 13 21:27:21.296489 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:27:21.296504 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 13 21:27:21.298611 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 13 21:27:21.298629 kernel: BTRFS info (device dm-0): using free space tree Jan 13 21:27:21.303640 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 13 21:27:21.305955 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 13 21:27:21.318853 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 13 21:27:21.336453 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 13 21:27:21.342187 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:27:21.342213 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:27:21.342229 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:27:21.343770 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:27:21.353484 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 13 21:27:21.355937 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:27:21.446901 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 13 21:27:21.468929 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 13 21:27:21.501325 systemd-networkd[769]: lo: Link UP Jan 13 21:27:21.501339 systemd-networkd[769]: lo: Gained carrier Jan 13 21:27:21.518691 systemd-networkd[769]: Enumeration completed Jan 13 21:27:21.518828 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 13 21:27:21.519277 systemd[1]: Reached target network.target - Network. Jan 13 21:27:21.524466 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:21.524477 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 13 21:27:21.529520 systemd-networkd[769]: eth0: Link UP Jan 13 21:27:21.529531 systemd-networkd[769]: eth0: Gained carrier Jan 13 21:27:21.529539 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 13 21:27:21.535079 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 13 21:27:21.547789 systemd-networkd[769]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 13 21:27:21.547918 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 13 21:27:21.601542 ignition[774]: Ignition 2.19.0 Jan 13 21:27:21.601564 ignition[774]: Stage: fetch-offline Jan 13 21:27:21.601609 ignition[774]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:27:21.601619 ignition[774]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:27:21.601721 ignition[774]: parsed url from cmdline: "" Jan 13 21:27:21.601724 ignition[774]: no config URL provided Jan 13 21:27:21.601744 ignition[774]: reading system config file "/usr/lib/ignition/user.ign" Jan 13 21:27:21.601753 ignition[774]: no config at "/usr/lib/ignition/user.ign" Jan 13 21:27:21.601781 ignition[774]: op(1): [started] loading QEMU firmware config module Jan 13 21:27:21.601787 ignition[774]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 13 21:27:21.609391 ignition[774]: op(1): [finished] loading QEMU firmware config module Jan 13 21:27:21.625218 ignition[774]: parsing config with SHA512: cf1eb32fc58f427cfb8d2182f7925a58c71a755eef5f431a3f538caeb5ac146f4e810b4668edb58cc06fa5b876135b732149fba246fd280479488d5ca3b65733 Jan 13 21:27:21.629314 unknown[774]: fetched base config from "system" Jan 13 21:27:21.630001 unknown[774]: fetched user config from "qemu" Jan 13 21:27:21.630509 ignition[774]: fetch-offline: fetch-offline passed Jan 13 21:27:21.630597 ignition[774]: Ignition finished successfully Jan 13 21:27:21.632982 systemd-resolved[220]: Detected conflict on linux IN A 10.0.0.128 Jan 13 21:27:21.632993 systemd-resolved[220]: Hostname conflict, changing published hostname from 'linux' to 'linux10'. Jan 13 21:27:21.636277 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 13 21:27:21.638797 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 13 21:27:21.651195 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 13 21:27:21.666352 ignition[784]: Ignition 2.19.0 Jan 13 21:27:21.666368 ignition[784]: Stage: kargs Jan 13 21:27:21.666570 ignition[784]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:27:21.666587 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:27:21.667641 ignition[784]: kargs: kargs passed Jan 13 21:27:21.667696 ignition[784]: Ignition finished successfully Jan 13 21:27:21.674983 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 13 21:27:21.687872 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 13 21:27:21.705101 ignition[791]: Ignition 2.19.0 Jan 13 21:27:21.705113 ignition[791]: Stage: disks Jan 13 21:27:21.705323 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jan 13 21:27:21.705335 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:27:21.706208 ignition[791]: disks: disks passed Jan 13 21:27:21.706255 ignition[791]: Ignition finished successfully Jan 13 21:27:21.713108 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 13 21:27:21.715642 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 13 21:27:21.715940 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 13 21:27:21.716341 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 13 21:27:21.716755 systemd[1]: Reached target sysinit.target - System Initialization. Jan 13 21:27:21.724109 systemd[1]: Reached target basic.target - Basic System. Jan 13 21:27:21.736922 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 13 21:27:21.749029 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 13 21:27:21.923350 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 13 21:27:21.939842 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jan 13 21:27:22.024767 kernel: EXT4-fs (vda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 13 21:27:22.025386 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 13 21:27:22.027662 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 13 21:27:22.039823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 13 21:27:22.042609 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 13 21:27:22.045163 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 13 21:27:22.045203 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 13 21:27:22.054013 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (811) Jan 13 21:27:22.054046 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:27:22.054058 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:27:22.054068 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:27:22.047036 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 13 21:27:22.070167 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 13 21:27:22.075819 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:27:22.076877 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:27:22.090944 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 13 21:27:22.123624 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory Jan 13 21:27:22.130054 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Jan 13 21:27:22.134245 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Jan 13 21:27:22.139178 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory Jan 13 21:27:22.229975 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 13 21:27:22.240960 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 13 21:27:22.247768 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 13 21:27:22.250771 kernel: BTRFS info (device vda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:27:22.272152 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 13 21:27:22.295501 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 13 21:27:22.383429 ignition[929]: INFO : Ignition 2.19.0 Jan 13 21:27:22.383429 ignition[929]: INFO : Stage: mount Jan 13 21:27:22.385314 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:27:22.385314 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:27:22.385314 ignition[929]: INFO : mount: mount passed Jan 13 21:27:22.385314 ignition[929]: INFO : Ignition finished successfully Jan 13 21:27:22.390861 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 13 21:27:22.400993 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 13 21:27:22.407760 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jan 13 21:27:22.418752 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (937) Jan 13 21:27:22.421278 kernel: BTRFS info (device vda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 13 21:27:22.421299 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 13 21:27:22.421309 kernel: BTRFS info (device vda6): using free space tree Jan 13 21:27:22.423750 kernel: BTRFS info (device vda6): auto enabling async discard Jan 13 21:27:22.425391 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 13 21:27:22.455225 ignition[954]: INFO : Ignition 2.19.0 Jan 13 21:27:22.455225 ignition[954]: INFO : Stage: files Jan 13 21:27:22.457019 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 13 21:27:22.457019 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 13 21:27:22.459662 ignition[954]: DEBUG : files: compiled without relabeling support, skipping Jan 13 21:27:22.461306 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 13 21:27:22.461306 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 13 21:27:22.465040 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 13 21:27:22.466487 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 13 21:27:22.468071 unknown[954]: wrote ssh authorized keys file for user: core Jan 13 21:27:22.469183 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 13 21:27:22.471287 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:27:22.473072 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 13 21:27:22.474785 ignition[954]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:27:22.474785 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 13 21:27:22.512438 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 13 21:27:22.606893 systemd-networkd[769]: eth0: Gained IPv6LL Jan 13 21:27:22.612275 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 
Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:27:22.614346 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Jan 13 21:27:22.976953 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 13 21:27:23.318987 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Jan 13 21:27:23.318987 ignition[954]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Jan 13 21:27:23.323208 ignition[954]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:27:23.355034 ignition[954]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:27:23.359791 ignition[954]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 21:27:23.361839 ignition[954]: INFO : files: files passed
Jan 13 21:27:23.361839 ignition[954]: INFO : Ignition finished successfully
Jan 13 21:27:23.362719 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 21:27:23.372952 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 21:27:23.375387 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 21:27:23.377220 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 21:27:23.377339 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 21:27:23.385690 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 21:27:23.387413 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:27:23.387413 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:27:23.391157 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 21:27:23.390395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:27:23.392937 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 21:27:23.406008 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 21:27:23.431648 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 21:27:23.431809 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 21:27:23.432413 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 21:27:23.435925 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 21:27:23.436354 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 21:27:23.448944 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 21:27:23.464498 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:27:23.468722 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 21:27:23.495214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:27:23.495601 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:27:23.498423 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 21:27:23.499020 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 21:27:23.499133 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 21:27:23.503521 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 21:27:23.504127 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 21:27:23.504526 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 21:27:23.505338 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 21:27:23.505761 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 21:27:23.516192 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 21:27:23.517031 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 21:27:23.517431 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 21:27:23.518032 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 21:27:23.518416 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 21:27:23.518997 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 21:27:23.519104 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 21:27:23.530205 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:27:23.531103 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:27:23.531457 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 21:27:23.536140 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:27:23.536766 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 21:27:23.536873 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 21:27:23.541834 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 21:27:23.541945 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 21:27:23.544394 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 21:27:23.545131 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 21:27:23.550810 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:27:23.553673 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 21:27:23.554229 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 21:27:23.554533 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 21:27:23.554624 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 21:27:23.557478 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 21:27:23.557602 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 21:27:23.559400 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 21:27:23.559562 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 21:27:23.562878 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 21:27:23.563025 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 21:27:23.580945 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 21:27:23.583140 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 21:27:23.584984 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 21:27:23.586166 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:27:23.588627 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 21:27:23.589867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 21:27:23.592692 ignition[1008]: INFO : Ignition 2.19.0
Jan 13 21:27:23.592692 ignition[1008]: INFO : Stage: umount
Jan 13 21:27:23.594795 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 13 21:27:23.594795 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 21:27:23.594795 ignition[1008]: INFO : umount: umount passed
Jan 13 21:27:23.594795 ignition[1008]: INFO : Ignition finished successfully
Jan 13 21:27:23.599533 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 21:27:23.600726 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 21:27:23.604597 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 21:27:23.605656 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 21:27:23.609421 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 21:27:23.611193 systemd[1]: Stopped target network.target - Network.
Jan 13 21:27:23.613145 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 21:27:23.613219 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 21:27:23.616150 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 21:27:23.616205 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 21:27:23.619145 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 21:27:23.620068 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 21:27:23.622010 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 21:27:23.622065 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 21:27:23.625463 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 21:27:23.627721 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 21:27:23.635766 systemd-networkd[769]: eth0: DHCPv6 lease lost
Jan 13 21:27:23.637761 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 21:27:23.638853 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 21:27:23.641282 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 21:27:23.642448 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 21:27:23.646691 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 21:27:23.646783 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:27:23.659890 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 21:27:23.660875 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 21:27:23.660949 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 21:27:23.663238 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 21:27:23.663290 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:27:23.665607 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 21:27:23.665663 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:27:23.668076 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 21:27:23.668129 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:27:23.670335 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:27:23.681114 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 21:27:23.681278 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 21:27:23.684698 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 21:27:23.684898 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:27:23.687161 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 21:27:23.687212 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:27:23.689209 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 21:27:23.689247 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:27:23.691238 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 21:27:23.691289 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 21:27:23.693413 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 21:27:23.693461 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 21:27:23.695390 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 21:27:23.695437 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 21:27:23.708940 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 21:27:23.710132 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 21:27:23.710199 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:27:23.712567 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 13 21:27:23.712622 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:27:23.714842 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 21:27:23.714897 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:27:23.717330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 21:27:23.717382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:27:23.719938 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 21:27:23.720064 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 21:27:23.782155 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 21:27:23.783238 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 21:27:23.785723 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 21:27:23.787844 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 21:27:23.788833 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 21:27:23.804040 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 21:27:23.810546 systemd[1]: Switching root.
Jan 13 21:27:23.843222 systemd-journald[193]: Journal stopped
Jan 13 21:27:25.275070 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 13 21:27:25.275144 kernel: SELinux: policy capability network_peer_controls=1
Jan 13 21:27:25.275177 kernel: SELinux: policy capability open_perms=1
Jan 13 21:27:25.275193 kernel: SELinux: policy capability extended_socket_class=1
Jan 13 21:27:25.275208 kernel: SELinux: policy capability always_check_network=0
Jan 13 21:27:25.275223 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 13 21:27:25.275237 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 13 21:27:25.275258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 13 21:27:25.275272 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 13 21:27:25.275287 kernel: audit: type=1403 audit(1736803644.557:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 21:27:25.275304 systemd[1]: Successfully loaded SELinux policy in 44.139ms.
Jan 13 21:27:25.275340 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.451ms.
Jan 13 21:27:25.275358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 21:27:25.275375 systemd[1]: Detected virtualization kvm.
Jan 13 21:27:25.275392 systemd[1]: Detected architecture x86-64.
Jan 13 21:27:25.275414 systemd[1]: Detected first boot.
Jan 13 21:27:25.275430 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 21:27:25.275462 zram_generator::config[1072]: No configuration found.
Jan 13 21:27:25.275486 systemd[1]: Populated /etc with preset unit settings.
Jan 13 21:27:25.275505 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 21:27:25.275521 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 21:27:25.275537 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 21:27:25.275553 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 21:27:25.275569 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 21:27:25.275586 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 21:27:25.275602 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 21:27:25.275618 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 21:27:25.275638 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 21:27:25.275655 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 21:27:25.275671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 21:27:25.275688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 21:27:25.275704 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 21:27:25.275722 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 21:27:25.279560 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 21:27:25.279580 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 21:27:25.279595 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 13 21:27:25.279614 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 21:27:25.279629 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 21:27:25.279643 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 21:27:25.279657 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 21:27:25.279672 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 21:27:25.279686 systemd[1]: Reached target swap.target - Swaps.
Jan 13 21:27:25.279700 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 21:27:25.279714 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 21:27:25.279742 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 21:27:25.279757 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 21:27:25.279771 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 21:27:25.279785 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 21:27:25.279799 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 21:27:25.279814 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 21:27:25.279827 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 21:27:25.279842 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 21:27:25.279856 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 21:27:25.279871 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:25.279889 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 21:27:25.279903 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 21:27:25.279918 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 21:27:25.279932 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 21:27:25.279946 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:27:25.279961 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 21:27:25.279975 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 21:27:25.279990 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:27:25.280007 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:27:25.280024 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:27:25.280041 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 21:27:25.280055 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:27:25.280069 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 21:27:25.280102 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 13 21:27:25.280118 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 13 21:27:25.280142 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 21:27:25.280162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 21:27:25.280178 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 21:27:25.280195 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 21:27:25.280210 kernel: loop: module loaded
Jan 13 21:27:25.280226 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 21:27:25.280242 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:25.280283 systemd-journald[1159]: Collecting audit messages is disabled.
Jan 13 21:27:25.280317 kernel: fuse: init (API version 7.39)
Jan 13 21:27:25.280336 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 21:27:25.280352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 21:27:25.280368 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 21:27:25.280383 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 21:27:25.280399 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 21:27:25.280415 systemd-journald[1159]: Journal started
Jan 13 21:27:25.280443 systemd-journald[1159]: Runtime Journal (/run/log/journal/5ff975d66f964912b61c7b4f2210e1a6) is 6.0M, max 48.4M, 42.3M free.
Jan 13 21:27:25.281793 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 21:27:25.284502 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 21:27:25.285930 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 21:27:25.287492 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 21:27:25.287706 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 21:27:25.289247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:27:25.289448 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:27:25.291188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:27:25.291492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:27:25.293538 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 21:27:25.293777 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 21:27:25.295521 kernel: ACPI: bus type drm_connector registered
Jan 13 21:27:25.295906 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:27:25.296213 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:27:25.297694 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:27:25.298582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:27:25.300112 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 21:27:25.301905 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 21:27:25.303753 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 21:27:25.305814 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 21:27:25.319872 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 21:27:25.335852 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 21:27:25.338367 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 21:27:25.339598 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 21:27:25.342229 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 21:27:25.346894 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 21:27:25.348496 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:27:25.350065 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 21:27:25.351317 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:27:25.352893 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 21:27:25.361952 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 21:27:25.366625 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 21:27:25.368100 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 21:27:25.374040 systemd-journald[1159]: Time spent on flushing to /var/log/journal/5ff975d66f964912b61c7b4f2210e1a6 is 12.501ms for 945 entries.
Jan 13 21:27:25.374040 systemd-journald[1159]: System Journal (/var/log/journal/5ff975d66f964912b61c7b4f2210e1a6) is 8.0M, max 195.6M, 187.6M free.
Jan 13 21:27:25.752870 systemd-journald[1159]: Received client request to flush runtime journal.
Jan 13 21:27:25.388309 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 21:27:25.391200 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 21:27:25.404949 udevadm[1214]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 13 21:27:25.413480 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 21:27:25.416406 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 13 21:27:25.416420 systemd-tmpfiles[1207]: ACLs are not supported, ignoring.
Jan 13 21:27:25.422719 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 21:27:25.433009 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 21:27:25.456297 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 21:27:25.465015 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 21:27:25.480963 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jan 13 21:27:25.480977 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Jan 13 21:27:25.485828 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 21:27:25.561817 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 21:27:25.563300 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 21:27:25.754853 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 13 21:27:26.135975 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 21:27:26.147847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 21:27:26.180444 systemd-udevd[1235]: Using default interface naming scheme 'v255'.
Jan 13 21:27:26.200583 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 21:27:26.210905 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 21:27:26.225880 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 21:27:26.235545 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 13 21:27:26.248767 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1245)
Jan 13 21:27:26.291789 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 21:27:26.306709 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 21:27:26.312798 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jan 13 21:27:26.322752 kernel: ACPI: button: Power Button [PWRF]
Jan 13 21:27:26.332816 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jan 13 21:27:26.339356 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 13 21:27:26.339700 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 13 21:27:26.344152 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 13 21:27:26.382467 systemd-networkd[1239]: lo: Link UP
Jan 13 21:27:26.382479 systemd-networkd[1239]: lo: Gained carrier
Jan 13 21:27:26.387062 systemd-networkd[1239]: Enumeration completed
Jan 13 21:27:26.387180 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 21:27:26.387876 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:27:26.387884 systemd-networkd[1239]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 21:27:26.388937 systemd-networkd[1239]: eth0: Link UP
Jan 13 21:27:26.388941 systemd-networkd[1239]: eth0: Gained carrier
Jan 13 21:27:26.388952 systemd-networkd[1239]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 21:27:26.447045 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 21:27:26.457980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 21:27:26.468876 kernel: mousedev: PS/2 mouse device common for all mice
Jan 13 21:27:26.468891 systemd-networkd[1239]: eth0: DHCPv4 address 10.0.0.128/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 21:27:26.481752 kernel: kvm_amd: TSC scaling supported
Jan 13 21:27:26.481820 kernel: kvm_amd: Nested Virtualization enabled
Jan 13 21:27:26.481837 kernel: kvm_amd: Nested Paging enabled
Jan 13 21:27:26.481852 kernel: kvm_amd: LBR virtualization supported
Jan 13 21:27:26.481867 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 13 21:27:26.481882 kernel: kvm_amd: Virtual GIF supported
Jan 13 21:27:26.503894 kernel: EDAC MC: Ver: 3.0.0
Jan 13 21:27:26.543217 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 21:27:26.555281 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 21:27:26.565912 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 21:27:26.574752 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:27:26.605582 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 21:27:26.607256 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 21:27:26.618928 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 21:27:26.623539 lvm[1284]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 21:27:26.662494 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 21:27:26.664124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 21:27:26.665459 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 21:27:26.665493 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 21:27:26.666597 systemd[1]: Reached target machines.target - Containers.
Jan 13 21:27:26.669021 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 21:27:26.691995 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 21:27:26.694686 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 21:27:26.695844 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:27:26.696748 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 21:27:26.700135 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 21:27:26.704594 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 21:27:26.707261 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 21:27:26.719789 kernel: loop0: detected capacity change from 0 to 211296
Jan 13 21:27:26.725574 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 21:27:26.731785 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 21:27:26.732624 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 21:27:26.742768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 21:27:26.769757 kernel: loop1: detected capacity change from 0 to 140768
Jan 13 21:27:26.796760 kernel: loop2: detected capacity change from 0 to 142488
Jan 13 21:27:26.841784 kernel: loop3: detected capacity change from 0 to 211296
Jan 13 21:27:26.849895 kernel: loop4: detected capacity change from 0 to 140768
Jan 13 21:27:26.862766 kernel: loop5: detected capacity change from 0 to 142488
Jan 13 21:27:26.870993 (sd-merge)[1304]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 21:27:26.871602 (sd-merge)[1304]: Merged extensions into '/usr'.
Jan 13 21:27:26.875873 systemd[1]: Reloading requested from client PID 1292 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 21:27:26.875895 systemd[1]: Reloading...
Jan 13 21:27:26.933789 zram_generator::config[1335]: No configuration found.
Jan 13 21:27:26.959710 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 21:27:27.060249 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:27.127274 systemd[1]: Reloading finished in 250 ms.
Jan 13 21:27:27.147776 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 21:27:27.149345 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 13 21:27:27.169011 systemd[1]: Starting ensure-sysext.service...
Jan 13 21:27:27.171377 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 21:27:27.179813 systemd[1]: Reloading requested from client PID 1376 ('systemctl') (unit ensure-sysext.service)...
Jan 13 21:27:27.179830 systemd[1]: Reloading...
Jan 13 21:27:27.193004 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 21:27:27.193351 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 21:27:27.194425 systemd-tmpfiles[1377]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 21:27:27.194721 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Jan 13 21:27:27.194855 systemd-tmpfiles[1377]: ACLs are not supported, ignoring.
Jan 13 21:27:27.198208 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:27:27.198218 systemd-tmpfiles[1377]: Skipping /boot
Jan 13 21:27:27.208668 systemd-tmpfiles[1377]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 21:27:27.208682 systemd-tmpfiles[1377]: Skipping /boot
Jan 13 21:27:27.237099 zram_generator::config[1411]: No configuration found.
Jan 13 21:27:27.350930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:27.420789 systemd[1]: Reloading finished in 240 ms.
Jan 13 21:27:27.441162 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 21:27:27.460210 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 13 21:27:27.463180 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 21:27:27.466000 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 21:27:27.470137 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 21:27:27.475335 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 21:27:27.482906 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:27.483067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:27:27.484257 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:27:27.489223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:27:27.493004 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:27:27.494557 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:27:27.494937 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:27.496351 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:27:27.496668 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:27:27.499027 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:27:27.499318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:27:27.505490 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:27:27.505901 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:27:27.510379 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 21:27:27.517560 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 21:27:27.523687 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:27.524157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 21:27:27.525093 augenrules[1483]: No rules
Jan 13 21:27:27.529125 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 21:27:27.532540 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 21:27:27.536382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 21:27:27.540010 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 21:27:27.543345 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 21:27:27.546087 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 21:27:27.547795 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 13 21:27:27.549161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 13 21:27:27.550853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 21:27:27.553474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 21:27:27.553818 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 21:27:27.555556 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 21:27:27.556080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 21:27:27.557929 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 21:27:27.558141 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 21:27:27.559961 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 21:27:27.560186 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 21:27:27.564111 systemd[1]: Finished ensure-sysext.service.
Jan 13 21:27:27.565561 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 21:27:27.568976 systemd-resolved[1455]: Positive Trust Anchors:
Jan 13 21:27:27.569274 systemd-resolved[1455]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 21:27:27.569314 systemd-resolved[1455]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 21:27:27.574440 systemd-resolved[1455]: Defaulting to hostname 'linux'.
Jan 13 21:27:27.575084 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 21:27:27.575149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 21:27:27.585929 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 21:27:27.587149 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 21:27:27.587275 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 21:27:27.588693 systemd[1]: Reached target network.target - Network.
Jan 13 21:27:27.589726 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 21:27:27.598842 systemd-networkd[1239]: eth0: Gained IPv6LL
Jan 13 21:27:27.601635 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 21:27:27.603350 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 21:27:27.661867 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 21:27:29.021183 systemd-resolved[1455]: Clock change detected. Flushing caches.
Jan 13 21:27:29.021190 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 21:27:29.021214 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 21:27:29.022369 systemd-timesyncd[1512]: Initial clock synchronization to Mon 2025-01-13 21:27:29.021131 UTC.
Jan 13 21:27:29.022498 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 21:27:29.023822 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 21:27:29.025184 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 21:27:29.026534 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 21:27:29.026586 systemd[1]: Reached target paths.target - Path Units.
Jan 13 21:27:29.027571 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 21:27:29.028939 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 21:27:29.030255 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 21:27:29.031537 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 21:27:29.033233 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 21:27:29.036546 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 21:27:29.038840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 21:27:29.048690 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 21:27:29.049872 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 21:27:29.050852 systemd[1]: Reached target basic.target - Basic System.
Jan 13 21:27:29.052139 systemd[1]: System is tainted: cgroupsv1
Jan 13 21:27:29.052190 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:27:29.052223 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 21:27:29.054074 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 21:27:29.056383 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 21:27:29.058649 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 21:27:29.062526 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 21:27:29.066554 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 21:27:29.069547 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 21:27:29.074334 jq[1521]: false
Jan 13 21:27:29.073017 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:29.078539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 21:27:29.082472 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 21:27:29.088458 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 21:27:29.090811 dbus-daemon[1520]: [system] SELinux support is enabled
Jan 13 21:27:29.094050 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 21:27:29.094759 extend-filesystems[1524]: Found loop3
Jan 13 21:27:29.094759 extend-filesystems[1524]: Found loop4
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found loop5
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found sr0
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda1
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda2
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda3
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found usr
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda4
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda6
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda7
Jan 13 21:27:29.097719 extend-filesystems[1524]: Found vda9
Jan 13 21:27:29.097719 extend-filesystems[1524]: Checking size of /dev/vda9
Jan 13 21:27:29.104484 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 21:27:29.110462 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 21:27:29.113293 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 21:27:29.121645 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 21:27:29.124241 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 21:27:29.127707 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 21:27:29.140719 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 21:27:29.141127 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 21:27:29.142102 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 21:27:29.143556 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 21:27:29.145742 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 21:27:29.149815 extend-filesystems[1524]: Resized partition /dev/vda9
Jan 13 21:27:29.148958 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 21:27:29.158562 update_engine[1546]: I20250113 21:27:29.153041 1546 main.cc:92] Flatcar Update Engine starting
Jan 13 21:27:29.158562 update_engine[1546]: I20250113 21:27:29.157761 1546 update_check_scheduler.cc:74] Next update check in 3m54s
Jan 13 21:27:29.158911 jq[1553]: true
Jan 13 21:27:29.159012 extend-filesystems[1563]: resize2fs 1.47.1 (20-May-2024)
Jan 13 21:27:29.149249 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 21:27:29.171176 (ntainerd)[1566]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 21:27:29.185602 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1242)
Jan 13 21:27:29.188160 jq[1565]: true
Jan 13 21:27:29.195346 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 21:27:29.197930 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 21:27:29.200794 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 21:27:29.221803 tar[1561]: linux-amd64/helm
Jan 13 21:27:29.229988 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 21:27:29.231435 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 21:27:29.231559 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 21:27:29.231593 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 21:27:29.232926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 21:27:29.232948 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 21:27:29.239450 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 21:27:29.261467 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 21:27:29.434674 locksmithd[1601]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 13 21:27:29.439803 systemd-logind[1542]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 13 21:27:29.440261 systemd-logind[1542]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 13 21:27:29.441855 systemd-logind[1542]: New seat seat0.
Jan 13 21:27:29.445680 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 21:27:29.614009 tar[1561]: linux-amd64/LICENSE
Jan 13 21:27:29.614142 tar[1561]: linux-amd64/README.md
Jan 13 21:27:29.632655 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 21:27:29.627091 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 21:27:30.538670 containerd[1566]: time="2025-01-13T21:27:30.538475990Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 13 21:27:30.540147 sshd_keygen[1554]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 21:27:30.565962 containerd[1566]: time="2025-01-13T21:27:30.565896524Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 21:27:30.568244 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.568739245Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.568779771Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.568802814Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569032334Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569053955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569142190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569158000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569495974Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569519017Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569537913Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:30.573790 containerd[1566]: time="2025-01-13T21:27:30.569552820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.569673066Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.569961216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.570168665Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.570187170Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.570331591Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 13 21:27:30.574089 containerd[1566]: time="2025-01-13T21:27:30.570417933Z" level=info msg="metadata content store policy set" policy=shared Jan 13 21:27:30.594587 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 13 21:27:30.600728 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 13 21:27:30.600845 extend-filesystems[1563]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 13 21:27:30.600845 extend-filesystems[1563]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 13 21:27:30.600845 extend-filesystems[1563]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 13 21:27:30.601132 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 13 21:27:30.607668 extend-filesystems[1524]: Resized filesystem in /dev/vda9 Jan 13 21:27:30.609888 systemd[1]: issuegen.service: Deactivated successfully. Jan 13 21:27:30.610345 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 13 21:27:30.623558 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 13 21:27:30.664569 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 13 21:27:30.672627 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 13 21:27:30.675024 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 13 21:27:30.676476 systemd[1]: Reached target getty.target - Login Prompts. Jan 13 21:27:30.938414 bash[1600]: Updated "/home/core/.ssh/authorized_keys" Jan 13 21:27:30.940441 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 13 21:27:30.952177 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 13 21:27:30.989955 containerd[1566]: time="2025-01-13T21:27:30.989873211Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." 
type=io.containerd.gc.v1 Jan 13 21:27:30.990007 containerd[1566]: time="2025-01-13T21:27:30.989976315Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 13 21:27:30.990007 containerd[1566]: time="2025-01-13T21:27:30.989994168Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 13 21:27:30.990055 containerd[1566]: time="2025-01-13T21:27:30.990022792Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 13 21:27:30.990075 containerd[1566]: time="2025-01-13T21:27:30.990056004Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 13 21:27:30.990283 containerd[1566]: time="2025-01-13T21:27:30.990251982Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 13 21:27:30.990626 containerd[1566]: time="2025-01-13T21:27:30.990593071Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 13 21:27:30.990754 containerd[1566]: time="2025-01-13T21:27:30.990705232Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 13 21:27:30.990754 containerd[1566]: time="2025-01-13T21:27:30.990731902Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 13 21:27:30.990754 containerd[1566]: time="2025-01-13T21:27:30.990746880Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 13 21:27:30.990754 containerd[1566]: time="2025-01-13T21:27:30.990762970Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Jan 13 21:27:30.990754 containerd[1566]: time="2025-01-13T21:27:30.990775884Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990787987Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990803836Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990817873Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990830697Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990842188Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990854131Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990874479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990894186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990906239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990917890Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990942887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990955451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990971351Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991016 containerd[1566]: time="2025-01-13T21:27:30.990983483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.990995596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991009552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991021344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991033427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991045620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991059887Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991078241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991088731Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991104080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991148944Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991164603Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991174502Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 13 21:27:30.991389 containerd[1566]: time="2025-01-13T21:27:30.991185853Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 13 21:27:30.991718 containerd[1566]: time="2025-01-13T21:27:30.991195792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 13 21:27:30.991718 containerd[1566]: time="2025-01-13T21:27:30.991209487Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 13 21:27:30.991718 containerd[1566]: time="2025-01-13T21:27:30.991219155Z" level=info msg="NRI interface is disabled by configuration." 
Jan 13 21:27:30.991718 containerd[1566]: time="2025-01-13T21:27:30.991228383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 21:27:30.991829 containerd[1566]: time="2025-01-13T21:27:30.991494021Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 13 21:27:30.991829 containerd[1566]: time="2025-01-13T21:27:30.991543714Z" level=info msg="Connect containerd service"
Jan 13 21:27:30.991829 containerd[1566]: time="2025-01-13T21:27:30.991578189Z" level=info msg="using legacy CRI server"
Jan 13 21:27:30.991829 containerd[1566]: time="2025-01-13T21:27:30.991584981Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 21:27:30.991829 containerd[1566]: time="2025-01-13T21:27:30.991664741Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 21:27:30.992232 containerd[1566]: time="2025-01-13T21:27:30.992206206Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 13 21:27:30.992451 containerd[1566]: time="2025-01-13T21:27:30.992379802Z" level=info msg="Start subscribing containerd event"
Jan 13 21:27:30.992654 containerd[1566]: time="2025-01-13T21:27:30.992543860Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 21:27:30.992699 containerd[1566]: time="2025-01-13T21:27:30.992686136Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 21:27:30.992720 containerd[1566]: time="2025-01-13T21:27:30.992627346Z" level=info msg="Start recovering state"
Jan 13 21:27:30.992787 containerd[1566]: time="2025-01-13T21:27:30.992764413Z" level=info msg="Start event monitor"
Jan 13 21:27:30.993101 containerd[1566]: time="2025-01-13T21:27:30.992903294Z" level=info msg="Start snapshots syncer"
Jan 13 21:27:30.993101 containerd[1566]: time="2025-01-13T21:27:30.992919043Z" level=info msg="Start cni network conf syncer for default"
Jan 13 21:27:30.993101 containerd[1566]: time="2025-01-13T21:27:30.992937949Z" level=info msg="Start streaming server"
Jan 13 21:27:30.993101 containerd[1566]: time="2025-01-13T21:27:30.993004443Z" level=info msg="containerd successfully booted in 0.900782s"
Jan 13 21:27:30.993132 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 21:27:31.386785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:31.388536 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 21:27:31.390614 systemd[1]: Startup finished in 6.571s (kernel) + 5.518s (userspace) = 12.089s.
Jan 13 21:27:31.413792 (kubelet)[1654]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:31.893657 kubelet[1654]: E0113 21:27:31.893563 1654 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:31.898598 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:31.898911 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:37.876544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 21:27:37.884656 systemd[1]: Started sshd@0-10.0.0.128:22-10.0.0.1:53834.service - OpenSSH per-connection server daemon (10.0.0.1:53834).
Jan 13 21:27:37.921952 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 53834 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:37.924063 sshd[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:37.932917 systemd-logind[1542]: New session 1 of user core.
Jan 13 21:27:37.934102 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 21:27:37.949567 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 21:27:37.961904 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 21:27:37.964443 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 21:27:37.972050 (systemd)[1674]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 21:27:38.065301 systemd[1674]: Queued start job for default target default.target.
Jan 13 21:27:38.065696 systemd[1674]: Created slice app.slice - User Application Slice.
Jan 13 21:27:38.065713 systemd[1674]: Reached target paths.target - Paths.
Jan 13 21:27:38.065725 systemd[1674]: Reached target timers.target - Timers.
Jan 13 21:27:38.078422 systemd[1674]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 21:27:38.084850 systemd[1674]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 21:27:38.084923 systemd[1674]: Reached target sockets.target - Sockets.
Jan 13 21:27:38.084939 systemd[1674]: Reached target basic.target - Basic System.
Jan 13 21:27:38.084980 systemd[1674]: Reached target default.target - Main User Target.
Jan 13 21:27:38.085018 systemd[1674]: Startup finished in 106ms.
Jan 13 21:27:38.085518 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 21:27:38.086840 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 21:27:38.141587 systemd[1]: Started sshd@1-10.0.0.128:22-10.0.0.1:53840.service - OpenSSH per-connection server daemon (10.0.0.1:53840).
Jan 13 21:27:38.174401 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 53840 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:38.175804 sshd[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:38.179909 systemd-logind[1542]: New session 2 of user core.
Jan 13 21:27:38.189541 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 21:27:38.243040 sshd[1686]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:38.255583 systemd[1]: Started sshd@2-10.0.0.128:22-10.0.0.1:53854.service - OpenSSH per-connection server daemon (10.0.0.1:53854).
Jan 13 21:27:38.256170 systemd[1]: sshd@1-10.0.0.128:22-10.0.0.1:53840.service: Deactivated successfully.
Jan 13 21:27:38.259024 systemd-logind[1542]: Session 2 logged out. Waiting for processes to exit.
Jan 13 21:27:38.260204 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 21:27:38.261351 systemd-logind[1542]: Removed session 2.
Jan 13 21:27:38.287921 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 53854 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:38.289335 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:38.293291 systemd-logind[1542]: New session 3 of user core.
Jan 13 21:27:38.304547 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 21:27:38.353279 sshd[1691]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:38.364551 systemd[1]: Started sshd@3-10.0.0.128:22-10.0.0.1:53858.service - OpenSSH per-connection server daemon (10.0.0.1:53858).
Jan 13 21:27:38.365123 systemd[1]: sshd@2-10.0.0.128:22-10.0.0.1:53854.service: Deactivated successfully.
Jan 13 21:27:38.368061 systemd-logind[1542]: Session 3 logged out. Waiting for processes to exit.
Jan 13 21:27:38.368997 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 21:27:38.369983 systemd-logind[1542]: Removed session 3.
Jan 13 21:27:38.398071 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 53858 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:38.399592 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:38.403141 systemd-logind[1542]: New session 4 of user core.
Jan 13 21:27:38.412564 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 21:27:38.465032 sshd[1699]: pam_unix(sshd:session): session closed for user core
Jan 13 21:27:38.475545 systemd[1]: Started sshd@4-10.0.0.128:22-10.0.0.1:53862.service - OpenSSH per-connection server daemon (10.0.0.1:53862).
Jan 13 21:27:38.476076 systemd[1]: sshd@3-10.0.0.128:22-10.0.0.1:53858.service: Deactivated successfully.
Jan 13 21:27:38.477960 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 21:27:38.478651 systemd-logind[1542]: Session 4 logged out. Waiting for processes to exit.
Jan 13 21:27:38.480225 systemd-logind[1542]: Removed session 4.
Jan 13 21:27:38.509794 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 53862 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:27:38.511366 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:27:38.515290 systemd-logind[1542]: New session 5 of user core.
Jan 13 21:27:38.524564 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 21:27:38.581716 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 21:27:38.582044 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 21:27:38.858511 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 21:27:38.858742 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 21:27:39.141992 dockerd[1732]: time="2025-01-13T21:27:39.141914979Z" level=info msg="Starting up"
Jan 13 21:27:39.972600 dockerd[1732]: time="2025-01-13T21:27:39.972549957Z" level=info msg="Loading containers: start."
Jan 13 21:27:40.077352 kernel: Initializing XFRM netlink socket
Jan 13 21:27:40.153560 systemd-networkd[1239]: docker0: Link UP
Jan 13 21:27:40.181339 dockerd[1732]: time="2025-01-13T21:27:40.181276754Z" level=info msg="Loading containers: done."
Jan 13 21:27:40.196942 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1249586-merged.mount: Deactivated successfully.
Jan 13 21:27:40.197890 dockerd[1732]: time="2025-01-13T21:27:40.197844815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 21:27:40.197995 dockerd[1732]: time="2025-01-13T21:27:40.197978536Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 13 21:27:40.198150 dockerd[1732]: time="2025-01-13T21:27:40.198133987Z" level=info msg="Daemon has completed initialization"
Jan 13 21:27:40.238138 dockerd[1732]: time="2025-01-13T21:27:40.237722670Z" level=info msg="API listen on /run/docker.sock"
Jan 13 21:27:40.238000 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 21:27:40.977999 containerd[1566]: time="2025-01-13T21:27:40.977946263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\""
Jan 13 21:27:41.648698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2688218329.mount: Deactivated successfully.
Jan 13 21:27:42.128704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 21:27:42.140460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:42.287943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:42.292587 (kubelet)[1951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:42.593405 kubelet[1951]: E0113 21:27:42.593210 1951 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:42.600576 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:42.600848 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:43.025583 containerd[1566]: time="2025-01-13T21:27:43.025525305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:43.026353 containerd[1566]: time="2025-01-13T21:27:43.026288657Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139254"
Jan 13 21:27:43.027533 containerd[1566]: time="2025-01-13T21:27:43.027500820Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:43.029991 containerd[1566]: time="2025-01-13T21:27:43.029959290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:43.030988 containerd[1566]: time="2025-01-13T21:27:43.030954657Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.052965423s"
Jan 13 21:27:43.031036 containerd[1566]: time="2025-01-13T21:27:43.030987489Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\""
Jan 13 21:27:43.051818 containerd[1566]: time="2025-01-13T21:27:43.051783108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\""
Jan 13 21:27:45.351387 containerd[1566]: time="2025-01-13T21:27:45.351334788Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:45.352213 containerd[1566]: time="2025-01-13T21:27:45.352174713Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217732"
Jan 13 21:27:45.354986 containerd[1566]: time="2025-01-13T21:27:45.354947914Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:45.358399 containerd[1566]: time="2025-01-13T21:27:45.358364010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:45.359397 containerd[1566]: time="2025-01-13T21:27:45.359349539Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.307530493s"
Jan 13 21:27:45.359397 containerd[1566]: time="2025-01-13T21:27:45.359385035Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\""
Jan 13 21:27:45.381965 containerd[1566]: time="2025-01-13T21:27:45.381908044Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\""
Jan 13 21:27:46.772299 containerd[1566]: time="2025-01-13T21:27:46.772232258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:46.773381 containerd[1566]: time="2025-01-13T21:27:46.773282668Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332822"
Jan 13 21:27:46.775899 containerd[1566]: time="2025-01-13T21:27:46.775864510Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:46.778908 containerd[1566]: time="2025-01-13T21:27:46.778865428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:46.779904 containerd[1566]: time="2025-01-13T21:27:46.779866445Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.397922804s"
Jan 13 21:27:46.779904 containerd[1566]: time="2025-01-13T21:27:46.779899397Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\""
Jan 13 21:27:46.802051 containerd[1566]: time="2025-01-13T21:27:46.802025301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\""
Jan 13 21:27:47.840204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2447553014.mount: Deactivated successfully.
Jan 13 21:27:48.954157 containerd[1566]: time="2025-01-13T21:27:48.954080112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:48.955451 containerd[1566]: time="2025-01-13T21:27:48.955063636Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958"
Jan 13 21:27:48.957146 containerd[1566]: time="2025-01-13T21:27:48.957122778Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:48.959554 containerd[1566]: time="2025-01-13T21:27:48.959504284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:48.960042 containerd[1566]: time="2025-01-13T21:27:48.959996306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 2.157939908s"
Jan 13 21:27:48.960087 containerd[1566]: time="2025-01-13T21:27:48.960040990Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\""
Jan 13 21:27:48.982764 containerd[1566]: time="2025-01-13T21:27:48.982714040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 21:27:49.526311 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4043485350.mount: Deactivated successfully.
Jan 13 21:27:50.223263 containerd[1566]: time="2025-01-13T21:27:50.223198902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.224019 containerd[1566]: time="2025-01-13T21:27:50.223957264Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Jan 13 21:27:50.225200 containerd[1566]: time="2025-01-13T21:27:50.225153247Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.228037 containerd[1566]: time="2025-01-13T21:27:50.228001529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.229470 containerd[1566]: time="2025-01-13T21:27:50.229418707Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.246651487s"
Jan 13 21:27:50.229511 containerd[1566]: time="2025-01-13T21:27:50.229472367Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jan 13 21:27:50.251574 containerd[1566]: time="2025-01-13T21:27:50.251543579Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 13 21:27:50.773027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4196036138.mount: Deactivated successfully.
Jan 13 21:27:50.778177 containerd[1566]: time="2025-01-13T21:27:50.778129535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.778930 containerd[1566]: time="2025-01-13T21:27:50.778871126Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Jan 13 21:27:50.780024 containerd[1566]: time="2025-01-13T21:27:50.779981649Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.782083 containerd[1566]: time="2025-01-13T21:27:50.782054015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:50.782721 containerd[1566]: time="2025-01-13T21:27:50.782691851Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 531.115341ms"
Jan 13 21:27:50.782765 containerd[1566]: time="2025-01-13T21:27:50.782719613Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jan 13 21:27:50.808050 containerd[1566]: time="2025-01-13T21:27:50.808017065Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\""
Jan 13 21:27:51.651902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount640730547.mount: Deactivated successfully.
Jan 13 21:27:52.628694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 21:27:52.641462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:52.778145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:52.782385 (kubelet)[2091]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 21:27:52.825588 kubelet[2091]: E0113 21:27:52.825511 2091 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 21:27:52.830114 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 21:27:52.830452 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 21:27:54.505423 containerd[1566]: time="2025-01-13T21:27:54.505358370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:54.506148 containerd[1566]: time="2025-01-13T21:27:54.506114398Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625"
Jan 13 21:27:54.507409 containerd[1566]: time="2025-01-13T21:27:54.507376214Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:54.510606 containerd[1566]: time="2025-01-13T21:27:54.510576395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 13 21:27:54.511574 containerd[1566]: time="2025-01-13T21:27:54.511521217Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.703472593s"
Jan 13 21:27:54.511574 containerd[1566]: time="2025-01-13T21:27:54.511566622Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\""
Jan 13 21:27:57.107631 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:57.125585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:57.145144 systemd[1]: Reloading requested from client PID 2220 ('systemctl') (unit session-5.scope)...
Jan 13 21:27:57.145162 systemd[1]: Reloading...
Jan 13 21:27:57.240351 zram_generator::config[2268]: No configuration found.
Jan 13 21:27:57.511693 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 21:27:57.594882 systemd[1]: Reloading finished in 449 ms.
Jan 13 21:27:57.647511 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 13 21:27:57.647613 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 13 21:27:57.647973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:57.655794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 21:27:57.795972 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 21:27:57.800591 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 21:27:57.844937 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:27:57.844937 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 21:27:57.844937 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 21:27:57.845397 kubelet[2319]: I0113 21:27:57.844976 2319 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 21:27:58.121802 kubelet[2319]: I0113 21:27:58.121667 2319 server.go:487] "Kubelet version" kubeletVersion="v1.29.2"
Jan 13 21:27:58.121802 kubelet[2319]: I0113 21:27:58.121709 2319 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 21:27:58.121985 kubelet[2319]: I0113 21:27:58.121963 2319 server.go:919] "Client rotation is on, will bootstrap in background"
Jan 13 21:27:58.136149 kubelet[2319]: E0113 21:27:58.136103 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.128:6443: connect: connection refused
Jan 13 21:27:58.136909 kubelet[2319]: I0113 21:27:58.136876 2319 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 21:27:58.149966 kubelet[2319]: I0113 21:27:58.149931 2319 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 13 21:27:58.150429 kubelet[2319]: I0113 21:27:58.150393 2319 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 21:27:58.150582 kubelet[2319]: I0113 21:27:58.150546 2319 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 13 21:27:58.150996 kubelet[2319]: I0113 21:27:58.150964 2319 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 21:27:58.150996 kubelet[2319]: I0113 21:27:58.150981 2319 container_manager_linux.go:301] "Creating device plugin manager"
Jan 13 21:27:58.151119 kubelet[2319]: I0113 21:27:58.151093 2319 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 21:27:58.151247 kubelet[2319]: I0113 21:27:58.151217 2319 kubelet.go:396] "Attempting to sync node with API server"
Jan 13 21:27:58.151247 kubelet[2319]: I0113 21:27:58.151240 2319 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 21:27:58.151331 kubelet[2319]: I0113 21:27:58.151269 2319 kubelet.go:312] "Adding apiserver pod source"
Jan 13 21:27:58.151331 kubelet[2319]: I0113 21:27:58.151295 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 21:27:58.153444 kubelet[2319]: W0113 21:27:58.153124 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jan 13 21:27:58.153444 kubelet[2319]: E0113 21:27:58.153193 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jan 13 21:27:58.153444 kubelet[2319]: W0113 21:27:58.153388 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused
Jan 13 21:27:58.153444 kubelet[2319]: I0113 21:27:58.153406 2319 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 13 21:27:58.153444 kubelet[2319]: E0113 21:27:58.153429 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get
"https://10.0.0.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:58.156538 kubelet[2319]: I0113 21:27:58.156512 2319 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:27:58.157494 kubelet[2319]: W0113 21:27:58.157462 2319 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 13 21:27:58.158223 kubelet[2319]: I0113 21:27:58.158037 2319 server.go:1256] "Started kubelet" Jan 13 21:27:58.158291 kubelet[2319]: I0113 21:27:58.158264 2319 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:27:58.158342 kubelet[2319]: I0113 21:27:58.158307 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:27:58.158986 kubelet[2319]: I0113 21:27:58.158613 2319 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:27:58.159632 kubelet[2319]: I0113 21:27:58.159415 2319 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:27:58.161581 kubelet[2319]: I0113 21:27:58.159978 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:27:58.161581 kubelet[2319]: I0113 21:27:58.160112 2319 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:27:58.162394 kubelet[2319]: I0113 21:27:58.162012 2319 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:27:58.162394 kubelet[2319]: I0113 21:27:58.162077 2319 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:27:58.162394 kubelet[2319]: E0113 21:27:58.162158 2319 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:27:58.163787 kubelet[2319]: I0113 21:27:58.163157 2319 factory.go:221] Registration of the systemd container factory 
successfully Jan 13 21:27:58.163787 kubelet[2319]: E0113 21:27:58.163172 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="200ms" Jan 13 21:27:58.163787 kubelet[2319]: I0113 21:27:58.163263 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:27:58.163787 kubelet[2319]: W0113 21:27:58.163432 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:58.163787 kubelet[2319]: E0113 21:27:58.163479 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:58.164185 kubelet[2319]: E0113 21:27:58.164168 2319 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5dbe49334d20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:27:58.158015776 +0000 UTC m=+0.353376608,LastTimestamp:2025-01-13 21:27:58.158015776 +0000 UTC 
m=+0.353376608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:27:58.164381 kubelet[2319]: E0113 21:27:58.164220 2319 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:27:58.164901 kubelet[2319]: I0113 21:27:58.164865 2319 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:27:58.176348 kubelet[2319]: I0113 21:27:58.176304 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:27:58.178367 kubelet[2319]: I0113 21:27:58.177730 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 13 21:27:58.178367 kubelet[2319]: I0113 21:27:58.177763 2319 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:27:58.178367 kubelet[2319]: I0113 21:27:58.177787 2319 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:27:58.178367 kubelet[2319]: E0113 21:27:58.178170 2319 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:27:58.184490 kubelet[2319]: W0113 21:27:58.184098 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:58.184490 kubelet[2319]: E0113 21:27:58.184153 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:58.187482 kubelet[2319]: I0113 
21:27:58.187451 2319 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:27:58.187482 kubelet[2319]: I0113 21:27:58.187473 2319 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:27:58.187482 kubelet[2319]: I0113 21:27:58.187492 2319 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:27:58.263979 kubelet[2319]: I0113 21:27:58.263950 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:27:58.264443 kubelet[2319]: E0113 21:27:58.264409 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 21:27:58.278507 kubelet[2319]: E0113 21:27:58.278466 2319 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:27:58.364155 kubelet[2319]: E0113 21:27:58.364129 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="400ms" Jan 13 21:27:58.465648 kubelet[2319]: I0113 21:27:58.465598 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:27:58.465957 kubelet[2319]: E0113 21:27:58.465937 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 21:27:58.479068 kubelet[2319]: E0113 21:27:58.479028 2319 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 13 21:27:58.765440 kubelet[2319]: E0113 21:27:58.765275 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="800ms" Jan 13 21:27:58.771651 kubelet[2319]: I0113 21:27:58.771579 2319 policy_none.go:49] "None policy: Start" Jan 13 21:27:58.772515 kubelet[2319]: I0113 21:27:58.772500 2319 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:27:58.772597 kubelet[2319]: I0113 21:27:58.772527 2319 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:27:58.780091 kubelet[2319]: I0113 21:27:58.780054 2319 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:27:58.780460 kubelet[2319]: I0113 21:27:58.780438 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:27:58.782355 kubelet[2319]: E0113 21:27:58.782336 2319 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 13 21:27:58.812051 kubelet[2319]: E0113 21:27:58.812009 2319 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.128:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.128:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5dbe49334d20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 21:27:58.158015776 +0000 UTC m=+0.353376608,LastTimestamp:2025-01-13 21:27:58.158015776 +0000 UTC m=+0.353376608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 13 21:27:58.867663 kubelet[2319]: I0113 21:27:58.867622 2319 
kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:27:58.868168 kubelet[2319]: E0113 21:27:58.868132 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 21:27:58.879246 kubelet[2319]: I0113 21:27:58.879208 2319 topology_manager.go:215] "Topology Admit Handler" podUID="797f5aa08c09815334574427d10ec1aa" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:27:58.881958 kubelet[2319]: I0113 21:27:58.881926 2319 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:27:58.882824 kubelet[2319]: I0113 21:27:58.882783 2319 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:27:58.968138 kubelet[2319]: I0113 21:27:58.968086 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:27:58.968138 kubelet[2319]: I0113 21:27:58.968138 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:27:58.968309 kubelet[2319]: I0113 21:27:58.968164 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:27:58.968309 kubelet[2319]: I0113 21:27:58.968192 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:27:58.968309 kubelet[2319]: I0113 21:27:58.968215 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:27:58.968309 kubelet[2319]: I0113 21:27:58.968233 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:27:58.968309 kubelet[2319]: I0113 21:27:58.968249 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:27:58.968440 kubelet[2319]: I0113 21:27:58.968359 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:27:58.968440 kubelet[2319]: I0113 21:27:58.968423 2319 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:27:59.093417 kubelet[2319]: W0113 21:27:59.093197 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.093417 kubelet[2319]: E0113 21:27:59.093286 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.134714 kubelet[2319]: W0113 21:27:59.134651 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.134714 kubelet[2319]: E0113 21:27:59.134713 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.128:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.170974 kubelet[2319]: W0113 21:27:59.170910 2319 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.170974 kubelet[2319]: E0113 21:27:59.170980 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.188674 kubelet[2319]: E0113 21:27:59.188626 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:59.189299 containerd[1566]: time="2025-01-13T21:27:59.189243276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:797f5aa08c09815334574427d10ec1aa,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:59.190432 kubelet[2319]: E0113 21:27:59.190409 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:59.190874 containerd[1566]: time="2025-01-13T21:27:59.190824070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Jan 13 21:27:59.191997 kubelet[2319]: E0113 21:27:59.191975 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:27:59.192291 containerd[1566]: time="2025-01-13T21:27:59.192267257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Jan 13 
21:27:59.345913 kubelet[2319]: W0113 21:27:59.345746 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.345913 kubelet[2319]: E0113 21:27:59.345811 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.128:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:27:59.566663 kubelet[2319]: E0113 21:27:59.566619 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.128:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.128:6443: connect: connection refused" interval="1.6s" Jan 13 21:27:59.670020 kubelet[2319]: I0113 21:27:59.669970 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:27:59.670484 kubelet[2319]: E0113 21:27:59.670445 2319 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.128:6443/api/v1/nodes\": dial tcp 10.0.0.128:6443: connect: connection refused" node="localhost" Jan 13 21:28:00.180097 kubelet[2319]: E0113 21:28:00.179994 2319 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.128:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:28:00.590218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366464734.mount: Deactivated successfully. 
Jan 13 21:28:00.596009 containerd[1566]: time="2025-01-13T21:28:00.595954552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:28:00.597814 containerd[1566]: time="2025-01-13T21:28:00.597733608Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:28:00.598987 containerd[1566]: time="2025-01-13T21:28:00.598945621Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:28:00.600208 containerd[1566]: time="2025-01-13T21:28:00.600172512Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:28:00.601259 containerd[1566]: time="2025-01-13T21:28:00.601214416Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:28:00.602121 containerd[1566]: time="2025-01-13T21:28:00.602074559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 13 21:28:00.603021 containerd[1566]: time="2025-01-13T21:28:00.602967724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 13 21:28:00.606380 containerd[1566]: time="2025-01-13T21:28:00.606340549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 13 21:28:00.607516 
containerd[1566]: time="2025-01-13T21:28:00.607475007Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.416558012s" Jan 13 21:28:00.611033 containerd[1566]: time="2025-01-13T21:28:00.610986712Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.418668149s" Jan 13 21:28:00.611988 containerd[1566]: time="2025-01-13T21:28:00.611939910Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.422585736s" Jan 13 21:28:00.754159 containerd[1566]: time="2025-01-13T21:28:00.753849621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:00.754159 containerd[1566]: time="2025-01-13T21:28:00.753910005Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:00.754159 containerd[1566]: time="2025-01-13T21:28:00.753921747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.754159 containerd[1566]: time="2025-01-13T21:28:00.754031412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.756271 containerd[1566]: time="2025-01-13T21:28:00.756178879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:00.756271 containerd[1566]: time="2025-01-13T21:28:00.756222652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:00.756271 containerd[1566]: time="2025-01-13T21:28:00.756232700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.756501 containerd[1566]: time="2025-01-13T21:28:00.756327999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.759304 containerd[1566]: time="2025-01-13T21:28:00.758956368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:00.759304 containerd[1566]: time="2025-01-13T21:28:00.759023534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:00.759304 containerd[1566]: time="2025-01-13T21:28:00.759036899Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.759304 containerd[1566]: time="2025-01-13T21:28:00.759232246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:00.817113 containerd[1566]: time="2025-01-13T21:28:00.817054463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"d137cb23707338d05d8a157f1fcb69c273e5ccc7bc9f5b89648c649e790d5e90\"" Jan 13 21:28:00.817373 containerd[1566]: time="2025-01-13T21:28:00.817348194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"eea20bde2e5049ec4d2ecc54f1c23158397a5dc13e326bcfcc11cbed64cd6f9d\"" Jan 13 21:28:00.818674 containerd[1566]: time="2025-01-13T21:28:00.818643513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:797f5aa08c09815334574427d10ec1aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"39570d1e275b45da606ea8774ac4dea61e5208df4391b6aadb7ea3c9eb00f351\"" Jan 13 21:28:00.819363 kubelet[2319]: E0113 21:28:00.819091 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:00.819363 kubelet[2319]: E0113 21:28:00.819329 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:00.820200 kubelet[2319]: E0113 21:28:00.819804 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:00.822149 containerd[1566]: time="2025-01-13T21:28:00.822111637Z" level=info msg="CreateContainer within sandbox \"eea20bde2e5049ec4d2ecc54f1c23158397a5dc13e326bcfcc11cbed64cd6f9d\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 13 21:28:00.822266 containerd[1566]: time="2025-01-13T21:28:00.822230209Z" level=info msg="CreateContainer within sandbox \"d137cb23707338d05d8a157f1fcb69c273e5ccc7bc9f5b89648c649e790d5e90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 13 21:28:00.823621 containerd[1566]: time="2025-01-13T21:28:00.823592644Z" level=info msg="CreateContainer within sandbox \"39570d1e275b45da606ea8774ac4dea61e5208df4391b6aadb7ea3c9eb00f351\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 13 21:28:00.844663 containerd[1566]: time="2025-01-13T21:28:00.844562991Z" level=info msg="CreateContainer within sandbox \"eea20bde2e5049ec4d2ecc54f1c23158397a5dc13e326bcfcc11cbed64cd6f9d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9b5d343e848d93e3518b7139f1271aa482cb6d19e72c7a1e69c2623c9861983d\"" Jan 13 21:28:00.845226 containerd[1566]: time="2025-01-13T21:28:00.845200547Z" level=info msg="StartContainer for \"9b5d343e848d93e3518b7139f1271aa482cb6d19e72c7a1e69c2623c9861983d\"" Jan 13 21:28:00.851714 kubelet[2319]: W0113 21:28:00.851676 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:28:00.851714 kubelet[2319]: E0113 21:28:00.851711 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.128:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:28:00.872599 containerd[1566]: time="2025-01-13T21:28:00.872540068Z" level=info msg="CreateContainer within sandbox \"39570d1e275b45da606ea8774ac4dea61e5208df4391b6aadb7ea3c9eb00f351\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d35db244278de2525b601467900d1c453fe5617ee3839cf796c9f26a49e670cf\"" Jan 13 21:28:00.872909 containerd[1566]: time="2025-01-13T21:28:00.872879956Z" level=info msg="StartContainer for \"d35db244278de2525b601467900d1c453fe5617ee3839cf796c9f26a49e670cf\"" Jan 13 21:28:00.876848 containerd[1566]: time="2025-01-13T21:28:00.876787634Z" level=info msg="CreateContainer within sandbox \"d137cb23707338d05d8a157f1fcb69c273e5ccc7bc9f5b89648c649e790d5e90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8a5456696c1cbe47f31166cad05448d384d7fd47dc6c0479b787fb27fe7a4fd2\"" Jan 13 21:28:00.877432 containerd[1566]: time="2025-01-13T21:28:00.877394542Z" level=info msg="StartContainer for \"8a5456696c1cbe47f31166cad05448d384d7fd47dc6c0479b787fb27fe7a4fd2\"" Jan 13 21:28:00.921018 containerd[1566]: time="2025-01-13T21:28:00.920957138Z" level=info msg="StartContainer for \"9b5d343e848d93e3518b7139f1271aa482cb6d19e72c7a1e69c2623c9861983d\" returns successfully" Jan 13 21:28:00.958714 containerd[1566]: time="2025-01-13T21:28:00.958658431Z" level=info msg="StartContainer for \"8a5456696c1cbe47f31166cad05448d384d7fd47dc6c0479b787fb27fe7a4fd2\" returns successfully" Jan 13 21:28:00.958714 containerd[1566]: time="2025-01-13T21:28:00.958690291Z" level=info msg="StartContainer for \"d35db244278de2525b601467900d1c453fe5617ee3839cf796c9f26a49e670cf\" returns successfully" Jan 13 21:28:01.017203 kubelet[2319]: W0113 21:28:01.017042 2319 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:28:01.017203 kubelet[2319]: E0113 21:28:01.017174 2319 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.0.0.128:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.128:6443: connect: connection refused Jan 13 21:28:01.193872 kubelet[2319]: E0113 21:28:01.193834 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:01.198698 kubelet[2319]: E0113 21:28:01.198671 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:01.200980 kubelet[2319]: E0113 21:28:01.200951 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:01.271998 kubelet[2319]: I0113 21:28:01.271870 2319 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 13 21:28:01.879530 kubelet[2319]: E0113 21:28:01.879491 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 13 21:28:01.971017 kubelet[2319]: I0113 21:28:01.969632 2319 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:28:02.156056 kubelet[2319]: I0113 21:28:02.155916 2319 apiserver.go:52] "Watching apiserver" Jan 13 21:28:02.162952 kubelet[2319]: I0113 21:28:02.162884 2319 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:28:02.204083 kubelet[2319]: E0113 21:28:02.204057 2319 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 13 21:28:02.204520 kubelet[2319]: E0113 21:28:02.204444 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:04.897005 systemd[1]: Reloading requested from client PID 2594 ('systemctl') (unit session-5.scope)... Jan 13 21:28:04.897020 systemd[1]: Reloading... Jan 13 21:28:04.971344 zram_generator::config[2636]: No configuration found. Jan 13 21:28:05.083841 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 13 21:28:05.166485 systemd[1]: Reloading finished in 269 ms. Jan 13 21:28:05.205056 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:28:05.221177 systemd[1]: kubelet.service: Deactivated successfully. Jan 13 21:28:05.221600 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:28:05.233528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 13 21:28:05.379279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 13 21:28:05.385266 (kubelet)[2688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 13 21:28:05.429298 kubelet[2688]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:28:05.429298 kubelet[2688]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 13 21:28:05.429298 kubelet[2688]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 13 21:28:05.429298 kubelet[2688]: I0113 21:28:05.429265 2688 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 13 21:28:05.434423 kubelet[2688]: I0113 21:28:05.434402 2688 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 13 21:28:05.434423 kubelet[2688]: I0113 21:28:05.434422 2688 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 13 21:28:05.434592 kubelet[2688]: I0113 21:28:05.434572 2688 server.go:919] "Client rotation is on, will bootstrap in background" Jan 13 21:28:05.435858 kubelet[2688]: I0113 21:28:05.435824 2688 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 13 21:28:05.437582 kubelet[2688]: I0113 21:28:05.437403 2688 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 13 21:28:05.446954 kubelet[2688]: I0113 21:28:05.446922 2688 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 13 21:28:05.447503 kubelet[2688]: I0113 21:28:05.447479 2688 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 13 21:28:05.447668 kubelet[2688]: I0113 21:28:05.447641 2688 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 13 21:28:05.447764 kubelet[2688]: I0113 21:28:05.447674 2688 topology_manager.go:138] "Creating topology manager with none policy" Jan 13 21:28:05.447764 kubelet[2688]: I0113 21:28:05.447684 2688 container_manager_linux.go:301] "Creating device plugin manager" Jan 13 21:28:05.447764 kubelet[2688]: 
I0113 21:28:05.447719 2688 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:28:05.447857 kubelet[2688]: I0113 21:28:05.447812 2688 kubelet.go:396] "Attempting to sync node with API server" Jan 13 21:28:05.447857 kubelet[2688]: I0113 21:28:05.447826 2688 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 13 21:28:05.447908 kubelet[2688]: I0113 21:28:05.447883 2688 kubelet.go:312] "Adding apiserver pod source" Jan 13 21:28:05.447908 kubelet[2688]: I0113 21:28:05.447900 2688 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 13 21:28:05.451818 kubelet[2688]: I0113 21:28:05.451636 2688 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 13 21:28:05.453225 kubelet[2688]: I0113 21:28:05.453163 2688 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 13 21:28:05.453712 kubelet[2688]: I0113 21:28:05.453681 2688 server.go:1256] "Started kubelet" Jan 13 21:28:05.454800 kubelet[2688]: I0113 21:28:05.454745 2688 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 13 21:28:05.455430 kubelet[2688]: I0113 21:28:05.455090 2688 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 13 21:28:05.455430 kubelet[2688]: I0113 21:28:05.455272 2688 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 13 21:28:05.455799 kubelet[2688]: I0113 21:28:05.455527 2688 server.go:461] "Adding debug handlers to kubelet server" Jan 13 21:28:05.456897 kubelet[2688]: I0113 21:28:05.456729 2688 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 13 21:28:05.462513 kubelet[2688]: E0113 21:28:05.462489 2688 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 13 21:28:05.462552 kubelet[2688]: I0113 21:28:05.462530 
2688 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 13 21:28:05.462667 kubelet[2688]: I0113 21:28:05.462612 2688 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 13 21:28:05.462759 kubelet[2688]: I0113 21:28:05.462742 2688 reconciler_new.go:29] "Reconciler: start to sync state" Jan 13 21:28:05.464435 kubelet[2688]: I0113 21:28:05.464370 2688 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 13 21:28:05.464435 kubelet[2688]: E0113 21:28:05.464398 2688 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 13 21:28:05.467936 kubelet[2688]: I0113 21:28:05.467908 2688 factory.go:221] Registration of the containerd container factory successfully Jan 13 21:28:05.467936 kubelet[2688]: I0113 21:28:05.467935 2688 factory.go:221] Registration of the systemd container factory successfully Jan 13 21:28:05.472931 kubelet[2688]: I0113 21:28:05.472890 2688 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 13 21:28:05.478349 kubelet[2688]: I0113 21:28:05.476997 2688 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 13 21:28:05.478349 kubelet[2688]: I0113 21:28:05.477040 2688 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 13 21:28:05.478349 kubelet[2688]: I0113 21:28:05.477067 2688 kubelet.go:2329] "Starting kubelet main sync loop" Jan 13 21:28:05.478349 kubelet[2688]: E0113 21:28:05.477149 2688 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 13 21:28:05.530360 kubelet[2688]: I0113 21:28:05.530332 2688 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 13 21:28:05.530535 kubelet[2688]: I0113 21:28:05.530526 2688 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 13 21:28:05.530590 kubelet[2688]: I0113 21:28:05.530583 2688 state_mem.go:36] "Initialized new in-memory state store" Jan 13 21:28:05.530779 kubelet[2688]: I0113 21:28:05.530769 2688 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 13 21:28:05.530868 kubelet[2688]: I0113 21:28:05.530859 2688 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 13 21:28:05.530909 kubelet[2688]: I0113 21:28:05.530902 2688 policy_none.go:49] "None policy: Start" Jan 13 21:28:05.531659 kubelet[2688]: I0113 21:28:05.531645 2688 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 13 21:28:05.531725 kubelet[2688]: I0113 21:28:05.531716 2688 state_mem.go:35] "Initializing new in-memory state store" Jan 13 21:28:05.531913 kubelet[2688]: I0113 21:28:05.531903 2688 state_mem.go:75] "Updated machine memory state" Jan 13 21:28:05.533578 kubelet[2688]: I0113 21:28:05.533454 2688 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 13 21:28:05.533868 kubelet[2688]: I0113 21:28:05.533857 2688 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 13 21:28:05.566926 kubelet[2688]: I0113 21:28:05.566890 2688 kubelet_node_status.go:73] "Attempting to register 
node" node="localhost" Jan 13 21:28:05.573726 kubelet[2688]: I0113 21:28:05.573698 2688 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 13 21:28:05.573810 kubelet[2688]: I0113 21:28:05.573796 2688 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 13 21:28:05.578817 kubelet[2688]: I0113 21:28:05.577858 2688 topology_manager.go:215] "Topology Admit Handler" podUID="797f5aa08c09815334574427d10ec1aa" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 13 21:28:05.578817 kubelet[2688]: I0113 21:28:05.577929 2688 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 13 21:28:05.578817 kubelet[2688]: I0113 21:28:05.577960 2688 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 13 21:28:05.765417 kubelet[2688]: I0113 21:28:05.765285 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:28:05.765417 kubelet[2688]: I0113 21:28:05.765358 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:28:05.765417 kubelet[2688]: I0113 21:28:05.765389 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:28:05.765417 kubelet[2688]: I0113 21:28:05.765419 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:28:05.765595 kubelet[2688]: I0113 21:28:05.765446 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:28:05.765595 kubelet[2688]: I0113 21:28:05.765475 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:28:05.765595 kubelet[2688]: I0113 21:28:05.765520 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/797f5aa08c09815334574427d10ec1aa-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"797f5aa08c09815334574427d10ec1aa\") " pod="kube-system/kube-apiserver-localhost" Jan 13 21:28:05.765595 kubelet[2688]: I0113 21:28:05.765551 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Jan 13 21:28:05.765595 kubelet[2688]: I0113 21:28:05.765584 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Jan 13 21:28:05.883605 kubelet[2688]: E0113 21:28:05.883558 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:05.884399 kubelet[2688]: E0113 21:28:05.884181 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:05.886370 kubelet[2688]: E0113 21:28:05.886308 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:06.327649 sudo[1714]: pam_unix(sudo:session): session closed for user root Jan 13 21:28:06.329980 sshd[1707]: pam_unix(sshd:session): session closed for user core Jan 13 21:28:06.333847 systemd[1]: sshd@4-10.0.0.128:22-10.0.0.1:53862.service: Deactivated successfully. Jan 13 21:28:06.335953 systemd-logind[1542]: Session 5 logged out. Waiting for processes to exit. Jan 13 21:28:06.335993 systemd[1]: session-5.scope: Deactivated successfully. Jan 13 21:28:06.337217 systemd-logind[1542]: Removed session 5. 
Jan 13 21:28:06.448621 kubelet[2688]: I0113 21:28:06.448581 2688 apiserver.go:52] "Watching apiserver" Jan 13 21:28:06.463851 kubelet[2688]: I0113 21:28:06.463771 2688 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 13 21:28:06.495479 kubelet[2688]: E0113 21:28:06.495381 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:06.495479 kubelet[2688]: E0113 21:28:06.495434 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:06.495656 kubelet[2688]: E0113 21:28:06.495573 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:06.520754 kubelet[2688]: I0113 21:28:06.520703 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.520660812 podStartE2EDuration="1.520660812s" podCreationTimestamp="2025-01-13 21:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:06.511425033 +0000 UTC m=+1.121751645" watchObservedRunningTime="2025-01-13 21:28:06.520660812 +0000 UTC m=+1.130987424" Jan 13 21:28:06.530760 kubelet[2688]: I0113 21:28:06.528904 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.528247685 podStartE2EDuration="1.528247685s" podCreationTimestamp="2025-01-13 21:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:06.52078512 +0000 UTC 
m=+1.131111732" watchObservedRunningTime="2025-01-13 21:28:06.528247685 +0000 UTC m=+1.138574297" Jan 13 21:28:06.530760 kubelet[2688]: I0113 21:28:06.529062 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.52903688 podStartE2EDuration="1.52903688s" podCreationTimestamp="2025-01-13 21:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:06.528163934 +0000 UTC m=+1.138490546" watchObservedRunningTime="2025-01-13 21:28:06.52903688 +0000 UTC m=+1.139363492" Jan 13 21:28:07.497549 kubelet[2688]: E0113 21:28:07.497519 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:08.491669 kubelet[2688]: E0113 21:28:08.491619 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:09.612156 kubelet[2688]: E0113 21:28:09.612115 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:10.221136 kubelet[2688]: E0113 21:28:10.221096 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:10.502691 kubelet[2688]: E0113 21:28:10.502577 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:14.305664 update_engine[1546]: I20250113 21:28:14.305577 1546 update_attempter.cc:509] Updating boot flags... 
Jan 13 21:28:14.331349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2760) Jan 13 21:28:14.365433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2759) Jan 13 21:28:14.401352 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2759) Jan 13 21:28:18.250929 kubelet[2688]: I0113 21:28:18.250895 2688 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 13 21:28:18.251387 containerd[1566]: time="2025-01-13T21:28:18.251234896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 13 21:28:18.251625 kubelet[2688]: I0113 21:28:18.251448 2688 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 13 21:28:18.293916 kubelet[2688]: I0113 21:28:18.292996 2688 topology_manager.go:215] "Topology Admit Handler" podUID="2878c028-d406-4462-badd-02da5c73b641" podNamespace="kube-system" podName="kube-proxy-f86vv" Jan 13 21:28:18.299180 kubelet[2688]: I0113 21:28:18.299148 2688 topology_manager.go:215] "Topology Admit Handler" podUID="a6bedb89-87f7-45ca-bc62-fcd1810b08cd" podNamespace="kube-flannel" podName="kube-flannel-ds-wpcql" Jan 13 21:28:18.337178 kubelet[2688]: I0113 21:28:18.337115 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-run\") pod \"kube-flannel-ds-wpcql\" (UID: \"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 21:28:18.337360 kubelet[2688]: I0113 21:28:18.337190 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-flannel-cfg\") pod \"kube-flannel-ds-wpcql\" (UID: 
\"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 21:28:18.337360 kubelet[2688]: I0113 21:28:18.337220 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2878c028-d406-4462-badd-02da5c73b641-kube-proxy\") pod \"kube-proxy-f86vv\" (UID: \"2878c028-d406-4462-badd-02da5c73b641\") " pod="kube-system/kube-proxy-f86vv" Jan 13 21:28:18.337360 kubelet[2688]: I0113 21:28:18.337245 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2878c028-d406-4462-badd-02da5c73b641-lib-modules\") pod \"kube-proxy-f86vv\" (UID: \"2878c028-d406-4462-badd-02da5c73b641\") " pod="kube-system/kube-proxy-f86vv" Jan 13 21:28:18.337360 kubelet[2688]: I0113 21:28:18.337313 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wp9\" (UniqueName: \"kubernetes.io/projected/2878c028-d406-4462-badd-02da5c73b641-kube-api-access-c7wp9\") pod \"kube-proxy-f86vv\" (UID: \"2878c028-d406-4462-badd-02da5c73b641\") " pod="kube-system/kube-proxy-f86vv" Jan 13 21:28:18.337360 kubelet[2688]: I0113 21:28:18.337360 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-cni-plugin\") pod \"kube-flannel-ds-wpcql\" (UID: \"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 21:28:18.337551 kubelet[2688]: I0113 21:28:18.337387 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-cni\") pod \"kube-flannel-ds-wpcql\" (UID: \"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 
21:28:18.337551 kubelet[2688]: I0113 21:28:18.337415 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2878c028-d406-4462-badd-02da5c73b641-xtables-lock\") pod \"kube-proxy-f86vv\" (UID: \"2878c028-d406-4462-badd-02da5c73b641\") " pod="kube-system/kube-proxy-f86vv" Jan 13 21:28:18.337551 kubelet[2688]: I0113 21:28:18.337457 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-xtables-lock\") pod \"kube-flannel-ds-wpcql\" (UID: \"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 21:28:18.337551 kubelet[2688]: I0113 21:28:18.337497 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m6bx\" (UniqueName: \"kubernetes.io/projected/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-kube-api-access-7m6bx\") pod \"kube-flannel-ds-wpcql\" (UID: \"a6bedb89-87f7-45ca-bc62-fcd1810b08cd\") " pod="kube-flannel/kube-flannel-ds-wpcql" Jan 13 21:28:18.442335 kubelet[2688]: E0113 21:28:18.442281 2688 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:28:18.442335 kubelet[2688]: E0113 21:28:18.442328 2688 projected.go:200] Error preparing data for projected volume kube-api-access-c7wp9 for pod kube-system/kube-proxy-f86vv: configmap "kube-root-ca.crt" not found Jan 13 21:28:18.442486 kubelet[2688]: E0113 21:28:18.442380 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2878c028-d406-4462-badd-02da5c73b641-kube-api-access-c7wp9 podName:2878c028-d406-4462-badd-02da5c73b641 nodeName:}" failed. No retries permitted until 2025-01-13 21:28:18.942362024 +0000 UTC m=+13.552688626 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c7wp9" (UniqueName: "kubernetes.io/projected/2878c028-d406-4462-badd-02da5c73b641-kube-api-access-c7wp9") pod "kube-proxy-f86vv" (UID: "2878c028-d406-4462-badd-02da5c73b641") : configmap "kube-root-ca.crt" not found Jan 13 21:28:18.442808 kubelet[2688]: E0113 21:28:18.442788 2688 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:28:18.442808 kubelet[2688]: E0113 21:28:18.442808 2688 projected.go:200] Error preparing data for projected volume kube-api-access-7m6bx for pod kube-flannel/kube-flannel-ds-wpcql: configmap "kube-root-ca.crt" not found Jan 13 21:28:18.442892 kubelet[2688]: E0113 21:28:18.442846 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-kube-api-access-7m6bx podName:a6bedb89-87f7-45ca-bc62-fcd1810b08cd nodeName:}" failed. No retries permitted until 2025-01-13 21:28:18.942833377 +0000 UTC m=+13.553159989 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-7m6bx" (UniqueName: "kubernetes.io/projected/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-kube-api-access-7m6bx") pod "kube-flannel-ds-wpcql" (UID: "a6bedb89-87f7-45ca-bc62-fcd1810b08cd") : configmap "kube-root-ca.crt" not found Jan 13 21:28:18.494903 kubelet[2688]: E0113 21:28:18.494871 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:19.043293 kubelet[2688]: E0113 21:28:19.043244 2688 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:28:19.043293 kubelet[2688]: E0113 21:28:19.043288 2688 projected.go:200] Error preparing data for projected volume kube-api-access-c7wp9 for pod kube-system/kube-proxy-f86vv: configmap "kube-root-ca.crt" not found Jan 13 21:28:19.043451 kubelet[2688]: E0113 21:28:19.043348 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2878c028-d406-4462-badd-02da5c73b641-kube-api-access-c7wp9 podName:2878c028-d406-4462-badd-02da5c73b641 nodeName:}" failed. No retries permitted until 2025-01-13 21:28:20.043332614 +0000 UTC m=+14.653659226 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-c7wp9" (UniqueName: "kubernetes.io/projected/2878c028-d406-4462-badd-02da5c73b641-kube-api-access-c7wp9") pod "kube-proxy-f86vv" (UID: "2878c028-d406-4462-badd-02da5c73b641") : configmap "kube-root-ca.crt" not found Jan 13 21:28:19.043451 kubelet[2688]: E0113 21:28:19.043379 2688 projected.go:294] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 13 21:28:19.043451 kubelet[2688]: E0113 21:28:19.043406 2688 projected.go:200] Error preparing data for projected volume kube-api-access-7m6bx for pod kube-flannel/kube-flannel-ds-wpcql: configmap "kube-root-ca.crt" not found Jan 13 21:28:19.043451 kubelet[2688]: E0113 21:28:19.043451 2688 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-kube-api-access-7m6bx podName:a6bedb89-87f7-45ca-bc62-fcd1810b08cd nodeName:}" failed. No retries permitted until 2025-01-13 21:28:20.043437563 +0000 UTC m=+14.653764175 (durationBeforeRetry 1s). 
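The `durationBeforeRetry` values in the `nestedpendingoperations` entries above double from 500ms to 1s because the kubelet applies exponential backoff to repeatedly failing volume mounts. The sketch below is a hedged illustration of that doubling progression only, not the kubelet's actual implementation; the cap value is an assumption for illustration.

```python
# Illustrative sketch of the retry-delay doubling visible in the log
# (durationBeforeRetry 500ms, then 1s, ...). The cap is an assumed
# value, not taken from the kubelet source.
def backoff_delays(initial=0.5, factor=2.0, cap=32.0, attempts=5):
    """Yield successive retry delays in seconds, doubling up to a cap."""
    delay = initial
    for _ in range(attempts):
        yield delay
        delay = min(delay * factor, cap)
```

Each failed mount attempt pushes the next permitted retry further out, which is why the timestamps on the "No retries permitted until ..." entries spread apart as the configmap stays missing.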
Error: MountVolume.SetUp failed for volume "kube-api-access-7m6bx" (UniqueName: "kubernetes.io/projected/a6bedb89-87f7-45ca-bc62-fcd1810b08cd-kube-api-access-7m6bx") pod "kube-flannel-ds-wpcql" (UID: "a6bedb89-87f7-45ca-bc62-fcd1810b08cd") : configmap "kube-root-ca.crt" not found Jan 13 21:28:19.617390 kubelet[2688]: E0113 21:28:19.617360 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.099524 kubelet[2688]: E0113 21:28:20.099485 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.100186 containerd[1566]: time="2025-01-13T21:28:20.099995213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f86vv,Uid:2878c028-d406-4462-badd-02da5c73b641,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:20.105140 kubelet[2688]: E0113 21:28:20.105114 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.105563 containerd[1566]: time="2025-01-13T21:28:20.105515033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wpcql,Uid:a6bedb89-87f7-45ca-bc62-fcd1810b08cd,Namespace:kube-flannel,Attempt:0,}" Jan 13 21:28:20.136527 containerd[1566]: time="2025-01-13T21:28:20.136438440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:20.137306 containerd[1566]: time="2025-01-13T21:28:20.137224127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
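The recurring `dns.go:153` "Nameserver limits exceeded" warnings mean the node's resolv.conf lists more nameservers than the kubelet will propagate; the applied line keeps only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8), matching glibc's three-resolver limit, and the rest are dropped. A minimal sketch of that truncation, assuming a plain resolv.conf as input:

```python
# Sketch of the truncation the kubelet warns about: only the first
# three nameserver lines from resolv.conf are applied; any beyond
# that are omitted (hence the repeated warning in this log).
MAX_NAMESERVERS = 3  # limit implied by the log's applied nameserver line

def applied_nameservers(resolv_conf_text):
    """Return the nameservers that would actually be applied."""
    servers = [
        fields[1]
        for line in resolv_conf_text.splitlines()
        if (fields := line.split()) and fields[0] == "nameserver" and len(fields) > 1
    ]
    return servers[:MAX_NAMESERVERS]
```

The warning repeats on every pod sync because the condition persists until the extra nameserver is removed from the node's resolv.conf.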
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:20.137306 containerd[1566]: time="2025-01-13T21:28:20.137266147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:20.137453 containerd[1566]: time="2025-01-13T21:28:20.137401824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:20.138249 containerd[1566]: time="2025-01-13T21:28:20.138143999Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:20.138249 containerd[1566]: time="2025-01-13T21:28:20.138196569Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:20.138249 containerd[1566]: time="2025-01-13T21:28:20.138210966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:20.138402 containerd[1566]: time="2025-01-13T21:28:20.138304784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:20.186736 containerd[1566]: time="2025-01-13T21:28:20.186683397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f86vv,Uid:2878c028-d406-4462-badd-02da5c73b641,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc7cd3503b7584fafb5ab3b3e29e9e2683b3fb76caf28c0ef9992545af1c98e0\"" Jan 13 21:28:20.187647 kubelet[2688]: E0113 21:28:20.187624 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.190389 containerd[1566]: time="2025-01-13T21:28:20.190360048Z" level=info msg="CreateContainer within sandbox \"dc7cd3503b7584fafb5ab3b3e29e9e2683b3fb76caf28c0ef9992545af1c98e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 13 21:28:20.206075 containerd[1566]: time="2025-01-13T21:28:20.206035766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-wpcql,Uid:a6bedb89-87f7-45ca-bc62-fcd1810b08cd,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\"" Jan 13 21:28:20.207073 kubelet[2688]: E0113 21:28:20.206881 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.208255 containerd[1566]: time="2025-01-13T21:28:20.208201637Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 13 21:28:20.213336 containerd[1566]: time="2025-01-13T21:28:20.213284159Z" level=info msg="CreateContainer within sandbox \"dc7cd3503b7584fafb5ab3b3e29e9e2683b3fb76caf28c0ef9992545af1c98e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"964c1a1e25e1a02c948bc3ac4d9be8c4e4bd6468b32ac5e7f734120a202f4219\"" Jan 13 21:28:20.214041 containerd[1566]: 
time="2025-01-13T21:28:20.213999995Z" level=info msg="StartContainer for \"964c1a1e25e1a02c948bc3ac4d9be8c4e4bd6468b32ac5e7f734120a202f4219\"" Jan 13 21:28:20.272967 containerd[1566]: time="2025-01-13T21:28:20.272913545Z" level=info msg="StartContainer for \"964c1a1e25e1a02c948bc3ac4d9be8c4e4bd6468b32ac5e7f734120a202f4219\" returns successfully" Jan 13 21:28:20.518383 kubelet[2688]: E0113 21:28:20.518166 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.518383 kubelet[2688]: E0113 21:28:20.518286 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:20.525218 kubelet[2688]: I0113 21:28:20.525180 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-f86vv" podStartSLOduration=2.525147562 podStartE2EDuration="2.525147562s" podCreationTimestamp="2025-01-13 21:28:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:20.524835251 +0000 UTC m=+15.135161864" watchObservedRunningTime="2025-01-13 21:28:20.525147562 +0000 UTC m=+15.135474174" Jan 13 21:28:21.905542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3520305627.mount: Deactivated successfully. 
Jan 13 21:28:21.945736 containerd[1566]: time="2025-01-13T21:28:21.945681974Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:21.946414 containerd[1566]: time="2025-01-13T21:28:21.946363644Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 13 21:28:21.947635 containerd[1566]: time="2025-01-13T21:28:21.947583783Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:21.955515 containerd[1566]: time="2025-01-13T21:28:21.955478029Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:21.956407 containerd[1566]: time="2025-01-13T21:28:21.956356161Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.748108417s" Jan 13 21:28:21.956407 containerd[1566]: time="2025-01-13T21:28:21.956411025Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 13 21:28:21.958033 containerd[1566]: time="2025-01-13T21:28:21.957991225Z" level=info msg="CreateContainer within sandbox \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 13 21:28:21.970761 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2355989518.mount: Deactivated successfully. Jan 13 21:28:21.971397 containerd[1566]: time="2025-01-13T21:28:21.971357436Z" level=info msg="CreateContainer within sandbox \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"172100d26d023cd02a341b262177ace8295860f46ae11f6352131a81a8ce6528\"" Jan 13 21:28:21.971926 containerd[1566]: time="2025-01-13T21:28:21.971904561Z" level=info msg="StartContainer for \"172100d26d023cd02a341b262177ace8295860f46ae11f6352131a81a8ce6528\"" Jan 13 21:28:22.022387 containerd[1566]: time="2025-01-13T21:28:22.022352024Z" level=info msg="StartContainer for \"172100d26d023cd02a341b262177ace8295860f46ae11f6352131a81a8ce6528\" returns successfully" Jan 13 21:28:22.109412 containerd[1566]: time="2025-01-13T21:28:22.109340528Z" level=info msg="shim disconnected" id=172100d26d023cd02a341b262177ace8295860f46ae11f6352131a81a8ce6528 namespace=k8s.io Jan 13 21:28:22.109412 containerd[1566]: time="2025-01-13T21:28:22.109407194Z" level=warning msg="cleaning up after shim disconnected" id=172100d26d023cd02a341b262177ace8295860f46ae11f6352131a81a8ce6528 namespace=k8s.io Jan 13 21:28:22.109412 containerd[1566]: time="2025-01-13T21:28:22.109416551Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:28:22.522848 kubelet[2688]: E0113 21:28:22.522819 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:22.523402 containerd[1566]: time="2025-01-13T21:28:22.523349601Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 13 21:28:24.225060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3618696467.mount: Deactivated successfully. 
Jan 13 21:28:24.747161 containerd[1566]: time="2025-01-13T21:28:24.747092556Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:24.748093 containerd[1566]: time="2025-01-13T21:28:24.748039895Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 13 21:28:24.749473 containerd[1566]: time="2025-01-13T21:28:24.749426254Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:24.752360 containerd[1566]: time="2025-01-13T21:28:24.752295565Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 13 21:28:24.753138 containerd[1566]: time="2025-01-13T21:28:24.753097831Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.229716821s" Jan 13 21:28:24.753138 containerd[1566]: time="2025-01-13T21:28:24.753135242Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 13 21:28:24.755030 containerd[1566]: time="2025-01-13T21:28:24.754996828Z" level=info msg="CreateContainer within sandbox \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 13 21:28:24.766082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4252286900.mount: Deactivated 
successfully. Jan 13 21:28:24.767892 containerd[1566]: time="2025-01-13T21:28:24.767846003Z" level=info msg="CreateContainer within sandbox \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf\"" Jan 13 21:28:24.768420 containerd[1566]: time="2025-01-13T21:28:24.768386945Z" level=info msg="StartContainer for \"64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf\"" Jan 13 21:28:24.819301 containerd[1566]: time="2025-01-13T21:28:24.819202666Z" level=info msg="StartContainer for \"64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf\" returns successfully" Jan 13 21:28:24.898160 kubelet[2688]: I0113 21:28:24.897804 2688 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 13 21:28:25.106128 kubelet[2688]: I0113 21:28:25.105980 2688 topology_manager.go:215] "Topology Admit Handler" podUID="72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8" podNamespace="kube-system" podName="coredns-76f75df574-cpxbn" Jan 13 21:28:25.140386 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf-rootfs.mount: Deactivated successfully. 
Jan 13 21:28:25.146012 kubelet[2688]: I0113 21:28:25.145939 2688 topology_manager.go:215] "Topology Admit Handler" podUID="c279e453-0da5-49bb-ac04-05dccb319d48" podNamespace="kube-system" podName="coredns-76f75df574-r6gjh" Jan 13 21:28:25.166991 containerd[1566]: time="2025-01-13T21:28:25.166628960Z" level=info msg="shim disconnected" id=64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf namespace=k8s.io Jan 13 21:28:25.166991 containerd[1566]: time="2025-01-13T21:28:25.166691498Z" level=warning msg="cleaning up after shim disconnected" id=64268bb416ad36f504163ff6199db9a698bb76ffd17311ef138e7d7db1da12bf namespace=k8s.io Jan 13 21:28:25.166991 containerd[1566]: time="2025-01-13T21:28:25.166701697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 13 21:28:25.180465 kubelet[2688]: I0113 21:28:25.180404 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8-config-volume\") pod \"coredns-76f75df574-cpxbn\" (UID: \"72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8\") " pod="kube-system/coredns-76f75df574-cpxbn" Jan 13 21:28:25.180465 kubelet[2688]: I0113 21:28:25.180461 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c279e453-0da5-49bb-ac04-05dccb319d48-config-volume\") pod \"coredns-76f75df574-r6gjh\" (UID: \"c279e453-0da5-49bb-ac04-05dccb319d48\") " pod="kube-system/coredns-76f75df574-r6gjh" Jan 13 21:28:25.180652 kubelet[2688]: I0113 21:28:25.180493 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psp4h\" (UniqueName: \"kubernetes.io/projected/c279e453-0da5-49bb-ac04-05dccb319d48-kube-api-access-psp4h\") pod \"coredns-76f75df574-r6gjh\" (UID: \"c279e453-0da5-49bb-ac04-05dccb319d48\") " pod="kube-system/coredns-76f75df574-r6gjh" Jan 13 
21:28:25.180652 kubelet[2688]: I0113 21:28:25.180519 2688 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n772\" (UniqueName: \"kubernetes.io/projected/72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8-kube-api-access-2n772\") pod \"coredns-76f75df574-cpxbn\" (UID: \"72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8\") " pod="kube-system/coredns-76f75df574-cpxbn" Jan 13 21:28:25.410509 kubelet[2688]: E0113 21:28:25.410475 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:25.411071 containerd[1566]: time="2025-01-13T21:28:25.411023898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cpxbn,Uid:72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:25.439530 systemd[1]: run-netns-cni\x2d30c86428\x2d078f\x2dfd49\x2da7f2\x2daf5d5a878100.mount: Deactivated successfully. Jan 13 21:28:25.439752 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0-shm.mount: Deactivated successfully. 
Jan 13 21:28:25.439907 containerd[1566]: time="2025-01-13T21:28:25.439651146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cpxbn,Uid:72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:28:25.439950 kubelet[2688]: E0113 21:28:25.439915 2688 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:28:25.439986 kubelet[2688]: E0113 21:28:25.439971 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-cpxbn" Jan 13 21:28:25.440008 kubelet[2688]: E0113 21:28:25.439992 2688 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-cpxbn" Jan 13 21:28:25.440079 kubelet[2688]: E0113 21:28:25.440065 2688 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-cpxbn_kube-system(72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-cpxbn_kube-system(72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c968fe80483d3117c602ff0198ac418bdac3c1364b140866d0b92a2e7d6df2c0\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-cpxbn" podUID="72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8" Jan 13 21:28:25.449032 kubelet[2688]: E0113 21:28:25.449004 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:25.449635 containerd[1566]: time="2025-01-13T21:28:25.449605473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6gjh,Uid:c279e453-0da5-49bb-ac04-05dccb319d48,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:25.470201 containerd[1566]: time="2025-01-13T21:28:25.470132725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6gjh,Uid:c279e453-0da5-49bb-ac04-05dccb319d48,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8db087bab1b8c987ee3e0e1d3772ba3100a42b4d578b16983ff2d5d221819e8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:28:25.470434 kubelet[2688]: E0113 21:28:25.470401 2688 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db087bab1b8c987ee3e0e1d3772ba3100a42b4d578b16983ff2d5d221819e8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 13 21:28:25.470503 kubelet[2688]: E0113 
21:28:25.470464 2688 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db087bab1b8c987ee3e0e1d3772ba3100a42b4d578b16983ff2d5d221819e8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-r6gjh" Jan 13 21:28:25.470503 kubelet[2688]: E0113 21:28:25.470489 2688 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8db087bab1b8c987ee3e0e1d3772ba3100a42b4d578b16983ff2d5d221819e8d\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-r6gjh" Jan 13 21:28:25.470568 kubelet[2688]: E0113 21:28:25.470552 2688 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-r6gjh_kube-system(c279e453-0da5-49bb-ac04-05dccb319d48)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-r6gjh_kube-system(c279e453-0da5-49bb-ac04-05dccb319d48)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8db087bab1b8c987ee3e0e1d3772ba3100a42b4d578b16983ff2d5d221819e8d\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-r6gjh" podUID="c279e453-0da5-49bb-ac04-05dccb319d48" Jan 13 21:28:25.528689 kubelet[2688]: E0113 21:28:25.528650 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:25.530383 containerd[1566]: time="2025-01-13T21:28:25.530308737Z" level=info msg="CreateContainer within sandbox 
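The `loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory` errors above are expected while the flannel daemon is still starting: the CNI plugin reads its lease from that file, and sandbox creation for the coredns pods fails until flanneld writes it. The fragment below shows the file's usual shape; the values are illustrative guesses chosen to match the 192.168.0.0 ranges and mtu 1450 that appear in the bridge config later in this log.

```shell
# /run/flannel/subnet.env — written by flanneld once it obtains a lease.
# Illustrative values only, inferred from the CIDRs and MTU seen later
# in this log; the real file's contents depend on the cluster config.
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
```

Consistent with this, the sandbox retries start succeeding further down in the log only after the kube-flannel container is running.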
\"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 13 21:28:25.544660 containerd[1566]: time="2025-01-13T21:28:25.544610123Z" level=info msg="CreateContainer within sandbox \"af353a7e3eda9234c58508b71dd15a69423602313f7589e5ef064d4cc3f976d2\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"36e0ded83410a7e98640d57d075bde148ab6defdbda9e8826370fd9de8c325f0\"" Jan 13 21:28:25.545059 containerd[1566]: time="2025-01-13T21:28:25.545014246Z" level=info msg="StartContainer for \"36e0ded83410a7e98640d57d075bde148ab6defdbda9e8826370fd9de8c325f0\"" Jan 13 21:28:25.601117 containerd[1566]: time="2025-01-13T21:28:25.601077893Z" level=info msg="StartContainer for \"36e0ded83410a7e98640d57d075bde148ab6defdbda9e8826370fd9de8c325f0\" returns successfully" Jan 13 21:28:26.532475 kubelet[2688]: E0113 21:28:26.532445 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:26.540533 kubelet[2688]: I0113 21:28:26.540473 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-wpcql" podStartSLOduration=3.9944251360000003 podStartE2EDuration="8.540439495s" podCreationTimestamp="2025-01-13 21:28:18 +0000 UTC" firstStartedPulling="2025-01-13 21:28:20.207430477 +0000 UTC m=+14.817757089" lastFinishedPulling="2025-01-13 21:28:24.753444836 +0000 UTC m=+19.363771448" observedRunningTime="2025-01-13 21:28:26.540335679 +0000 UTC m=+21.150662291" watchObservedRunningTime="2025-01-13 21:28:26.540439495 +0000 UTC m=+21.150766107" Jan 13 21:28:26.644555 systemd-networkd[1239]: flannel.1: Link UP Jan 13 21:28:26.644565 systemd-networkd[1239]: flannel.1: Gained carrier Jan 13 21:28:27.533590 kubelet[2688]: E0113 21:28:27.533563 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:27.836495 systemd-networkd[1239]: flannel.1: Gained IPv6LL Jan 13 21:28:30.801696 systemd[1]: Started sshd@5-10.0.0.128:22-10.0.0.1:42222.service - OpenSSH per-connection server daemon (10.0.0.1:42222). Jan 13 21:28:30.839081 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 42222 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:28:30.840557 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:28:30.844751 systemd-logind[1542]: New session 6 of user core. Jan 13 21:28:30.857664 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 13 21:28:30.970071 sshd[3330]: pam_unix(sshd:session): session closed for user core Jan 13 21:28:30.974617 systemd[1]: sshd@5-10.0.0.128:22-10.0.0.1:42222.service: Deactivated successfully. Jan 13 21:28:30.977217 systemd[1]: session-6.scope: Deactivated successfully. Jan 13 21:28:30.977998 systemd-logind[1542]: Session 6 logged out. Waiting for processes to exit. Jan 13 21:28:30.978949 systemd-logind[1542]: Removed session 6. Jan 13 21:28:35.982744 systemd[1]: Started sshd@6-10.0.0.128:22-10.0.0.1:42228.service - OpenSSH per-connection server daemon (10.0.0.1:42228). Jan 13 21:28:36.015780 sshd[3367]: Accepted publickey for core from 10.0.0.1 port 42228 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc Jan 13 21:28:36.017422 sshd[3367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 13 21:28:36.021363 systemd-logind[1542]: New session 7 of user core. Jan 13 21:28:36.029635 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 13 21:28:36.137690 sshd[3367]: pam_unix(sshd:session): session closed for user core Jan 13 21:28:36.142280 systemd[1]: sshd@6-10.0.0.128:22-10.0.0.1:42228.service: Deactivated successfully. Jan 13 21:28:36.144719 systemd-logind[1542]: Session 7 logged out. 
Waiting for processes to exit. Jan 13 21:28:36.144827 systemd[1]: session-7.scope: Deactivated successfully. Jan 13 21:28:36.145944 systemd-logind[1542]: Removed session 7. Jan 13 21:28:37.478214 kubelet[2688]: E0113 21:28:37.478171 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:37.478732 containerd[1566]: time="2025-01-13T21:28:37.478555322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6gjh,Uid:c279e453-0da5-49bb-ac04-05dccb319d48,Namespace:kube-system,Attempt:0,}" Jan 13 21:28:37.500602 systemd-networkd[1239]: cni0: Link UP Jan 13 21:28:37.500612 systemd-networkd[1239]: cni0: Gained carrier Jan 13 21:28:37.504533 systemd-networkd[1239]: cni0: Lost carrier Jan 13 21:28:37.509266 systemd-networkd[1239]: vethae741b30: Link UP Jan 13 21:28:37.511053 kernel: cni0: port 1(vethae741b30) entered blocking state Jan 13 21:28:37.511135 kernel: cni0: port 1(vethae741b30) entered disabled state Jan 13 21:28:37.511159 kernel: vethae741b30: entered allmulticast mode Jan 13 21:28:37.512347 kernel: vethae741b30: entered promiscuous mode Jan 13 21:28:37.512396 kernel: cni0: port 1(vethae741b30) entered blocking state Jan 13 21:28:37.514087 kernel: cni0: port 1(vethae741b30) entered forwarding state Jan 13 21:28:37.514142 kernel: cni0: port 1(vethae741b30) entered disabled state Jan 13 21:28:37.520554 kernel: cni0: port 1(vethae741b30) entered blocking state Jan 13 21:28:37.520613 kernel: cni0: port 1(vethae741b30) entered forwarding state Jan 13 21:28:37.522336 systemd-networkd[1239]: vethae741b30: Gained carrier Jan 13 21:28:37.522649 systemd-networkd[1239]: cni0: Gained carrier Jan 13 21:28:37.524871 containerd[1566]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface 
{}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001a938), "name":"cbr0", "type":"bridge"} Jan 13 21:28:37.524871 containerd[1566]: delegateAdd: netconf sent to delegate plugin: Jan 13 21:28:37.541730 containerd[1566]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:28:37.541052631Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 13 21:28:37.541730 containerd[1566]: time="2025-01-13T21:28:37.541675001Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 13 21:28:37.541730 containerd[1566]: time="2025-01-13T21:28:37.541690741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:37.542052 containerd[1566]: time="2025-01-13T21:28:37.541793524Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 13 21:28:37.565745 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 13 21:28:37.593785 containerd[1566]: time="2025-01-13T21:28:37.593751204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-r6gjh,Uid:c279e453-0da5-49bb-ac04-05dccb319d48,Namespace:kube-system,Attempt:0,} returns sandbox id \"0deb8749ccf71cbaab065b8db3495fcc7fb1a462bd613f7fcc157d192931c3cc\"" Jan 13 21:28:37.594749 kubelet[2688]: E0113 21:28:37.594723 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 13 21:28:37.596687 containerd[1566]: time="2025-01-13T21:28:37.596591318Z" level=info msg="CreateContainer within sandbox \"0deb8749ccf71cbaab065b8db3495fcc7fb1a462bd613f7fcc157d192931c3cc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 13 21:28:37.613658 containerd[1566]: time="2025-01-13T21:28:37.613616627Z" level=info msg="CreateContainer within sandbox \"0deb8749ccf71cbaab065b8db3495fcc7fb1a462bd613f7fcc157d192931c3cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31f9dd19cf37646adc19d1d68fe6236fdef3f9a6c842414977726d03fe1e5f01\"" Jan 13 21:28:37.614080 containerd[1566]: time="2025-01-13T21:28:37.614058198Z" level=info msg="StartContainer for \"31f9dd19cf37646adc19d1d68fe6236fdef3f9a6c842414977726d03fe1e5f01\"" Jan 13 21:28:37.663581 containerd[1566]: time="2025-01-13T21:28:37.663543447Z" level=info msg="StartContainer for \"31f9dd19cf37646adc19d1d68fe6236fdef3f9a6c842414977726d03fe1e5f01\" returns successfully" Jan 13 21:28:38.491189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1674777089.mount: Deactivated successfully. 
Jan 13 21:28:38.552613 kubelet[2688]: E0113 21:28:38.552580 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:38.561064 kubelet[2688]: I0113 21:28:38.560740 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-r6gjh" podStartSLOduration=19.560708198 podStartE2EDuration="19.560708198s" podCreationTimestamp="2025-01-13 21:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:38.560582342 +0000 UTC m=+33.170908954" watchObservedRunningTime="2025-01-13 21:28:38.560708198 +0000 UTC m=+33.171034810"
Jan 13 21:28:38.780497 systemd-networkd[1239]: vethae741b30: Gained IPv6LL
Jan 13 21:28:39.164496 systemd-networkd[1239]: cni0: Gained IPv6LL
Jan 13 21:28:39.555945 kubelet[2688]: E0113 21:28:39.555833 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:40.478084 kubelet[2688]: E0113 21:28:40.478048 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:40.478468 containerd[1566]: time="2025-01-13T21:28:40.478431434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cpxbn,Uid:72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8,Namespace:kube-system,Attempt:0,}"
Jan 13 21:28:40.496034 systemd-networkd[1239]: vethfb50aa29: Link UP
Jan 13 21:28:40.497825 kernel: cni0: port 2(vethfb50aa29) entered blocking state
Jan 13 21:28:40.497873 kernel: cni0: port 2(vethfb50aa29) entered disabled state
Jan 13 21:28:40.498535 kernel: vethfb50aa29: entered allmulticast mode
Jan 13 21:28:40.499394 kernel: vethfb50aa29: entered promiscuous mode
Jan 13 21:28:40.504105 kernel: cni0: port 2(vethfb50aa29) entered blocking state
Jan 13 21:28:40.504156 kernel: cni0: port 2(vethfb50aa29) entered forwarding state
Jan 13 21:28:40.505137 systemd-networkd[1239]: vethfb50aa29: Gained carrier
Jan 13 21:28:40.507811 containerd[1566]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"}
Jan 13 21:28:40.507811 containerd[1566]: delegateAdd: netconf sent to delegate plugin:
Jan 13 21:28:40.523689 containerd[1566]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-13T21:28:40.523589364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 21:28:40.523689 containerd[1566]: time="2025-01-13T21:28:40.523646612Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 21:28:40.524239 containerd[1566]: time="2025-01-13T21:28:40.524204521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:28:40.524349 containerd[1566]: time="2025-01-13T21:28:40.524297896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 21:28:40.545258 systemd-resolved[1455]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 21:28:40.557222 kubelet[2688]: E0113 21:28:40.557169 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:40.570741 containerd[1566]: time="2025-01-13T21:28:40.570693713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-cpxbn,Uid:72dc5c7b-f500-49ab-b37a-ddf3dfcc61f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"34752557244088b89a9399c7fc8cc857bf8c754cb12dfddd08596d59cb95d416\""
Jan 13 21:28:40.571337 kubelet[2688]: E0113 21:28:40.571188 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:40.572916 containerd[1566]: time="2025-01-13T21:28:40.572886817Z" level=info msg="CreateContainer within sandbox \"34752557244088b89a9399c7fc8cc857bf8c754cb12dfddd08596d59cb95d416\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 21:28:40.590022 containerd[1566]: time="2025-01-13T21:28:40.589892723Z" level=info msg="CreateContainer within sandbox \"34752557244088b89a9399c7fc8cc857bf8c754cb12dfddd08596d59cb95d416\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"194e12e58b92b75a6ef0c029359167ace5998ca171f0e61253408c8ba980721a\""
Jan 13 21:28:40.590408 containerd[1566]: time="2025-01-13T21:28:40.590382303Z" level=info msg="StartContainer for \"194e12e58b92b75a6ef0c029359167ace5998ca171f0e61253408c8ba980721a\""
Jan 13 21:28:40.646525 containerd[1566]: time="2025-01-13T21:28:40.646471617Z" level=info msg="StartContainer for \"194e12e58b92b75a6ef0c029359167ace5998ca171f0e61253408c8ba980721a\" returns successfully"
Jan 13 21:28:41.151532 systemd[1]: Started sshd@7-10.0.0.128:22-10.0.0.1:53368.service - OpenSSH per-connection server daemon (10.0.0.1:53368).
Jan 13 21:28:41.185452 sshd[3634]: Accepted publickey for core from 10.0.0.1 port 53368 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:41.186997 sshd[3634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:41.190587 systemd-logind[1542]: New session 8 of user core.
Jan 13 21:28:41.200576 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 21:28:41.302419 sshd[3634]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:41.310648 systemd[1]: Started sshd@8-10.0.0.128:22-10.0.0.1:53378.service - OpenSSH per-connection server daemon (10.0.0.1:53378).
Jan 13 21:28:41.311402 systemd[1]: sshd@7-10.0.0.128:22-10.0.0.1:53368.service: Deactivated successfully.
Jan 13 21:28:41.315108 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 21:28:41.316239 systemd-logind[1542]: Session 8 logged out. Waiting for processes to exit.
Jan 13 21:28:41.317269 systemd-logind[1542]: Removed session 8.
Jan 13 21:28:41.344766 sshd[3647]: Accepted publickey for core from 10.0.0.1 port 53378 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:41.346157 sshd[3647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:41.350306 systemd-logind[1542]: New session 9 of user core.
Jan 13 21:28:41.360583 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 21:28:41.490787 sshd[3647]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:41.502632 systemd[1]: Started sshd@9-10.0.0.128:22-10.0.0.1:53388.service - OpenSSH per-connection server daemon (10.0.0.1:53388).
Jan 13 21:28:41.503197 systemd[1]: sshd@8-10.0.0.128:22-10.0.0.1:53378.service: Deactivated successfully.
Jan 13 21:28:41.506597 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 21:28:41.510916 systemd-logind[1542]: Session 9 logged out. Waiting for processes to exit.
Jan 13 21:28:41.512838 systemd-logind[1542]: Removed session 9.
Jan 13 21:28:41.539813 sshd[3661]: Accepted publickey for core from 10.0.0.1 port 53388 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:41.541265 sshd[3661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:41.545290 systemd-logind[1542]: New session 10 of user core.
Jan 13 21:28:41.553560 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 21:28:41.560010 kubelet[2688]: E0113 21:28:41.559981 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:41.580659 kubelet[2688]: I0113 21:28:41.580228 2688 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-cpxbn" podStartSLOduration=22.580127628 podStartE2EDuration="22.580127628s" podCreationTimestamp="2025-01-13 21:28:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 21:28:41.571618121 +0000 UTC m=+36.181944763" watchObservedRunningTime="2025-01-13 21:28:41.580127628 +0000 UTC m=+36.190454240"
Jan 13 21:28:41.658150 sshd[3661]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:41.661911 systemd[1]: sshd@9-10.0.0.128:22-10.0.0.1:53388.service: Deactivated successfully.
Jan 13 21:28:41.664253 systemd-logind[1542]: Session 10 logged out. Waiting for processes to exit.
Jan 13 21:28:41.664377 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 21:28:41.665772 systemd-logind[1542]: Removed session 10.
Jan 13 21:28:42.044464 systemd-networkd[1239]: vethfb50aa29: Gained IPv6LL
Jan 13 21:28:42.561564 kubelet[2688]: E0113 21:28:42.561527 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:43.562765 kubelet[2688]: E0113 21:28:43.562728 2688 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 21:28:46.668754 systemd[1]: Started sshd@10-10.0.0.128:22-10.0.0.1:53404.service - OpenSSH per-connection server daemon (10.0.0.1:53404).
Jan 13 21:28:46.704178 sshd[3705]: Accepted publickey for core from 10.0.0.1 port 53404 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:46.705639 sshd[3705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:46.709156 systemd-logind[1542]: New session 11 of user core.
Jan 13 21:28:46.720571 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 21:28:46.825306 sshd[3705]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:46.832516 systemd[1]: Started sshd@11-10.0.0.128:22-10.0.0.1:53420.service - OpenSSH per-connection server daemon (10.0.0.1:53420).
Jan 13 21:28:46.832991 systemd[1]: sshd@10-10.0.0.128:22-10.0.0.1:53404.service: Deactivated successfully.
Jan 13 21:28:46.836400 systemd-logind[1542]: Session 11 logged out. Waiting for processes to exit.
Jan 13 21:28:46.837065 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 21:28:46.838205 systemd-logind[1542]: Removed session 11.
Jan 13 21:28:46.868155 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 53420 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:46.869842 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:46.873806 systemd-logind[1542]: New session 12 of user core.
Jan 13 21:28:46.881549 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 21:28:47.117357 sshd[3738]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:47.125536 systemd[1]: Started sshd@12-10.0.0.128:22-10.0.0.1:53430.service - OpenSSH per-connection server daemon (10.0.0.1:53430).
Jan 13 21:28:47.126005 systemd[1]: sshd@11-10.0.0.128:22-10.0.0.1:53420.service: Deactivated successfully.
Jan 13 21:28:47.129187 systemd-logind[1542]: Session 12 logged out. Waiting for processes to exit.
Jan 13 21:28:47.129866 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 21:28:47.130802 systemd-logind[1542]: Removed session 12.
Jan 13 21:28:47.160084 sshd[3751]: Accepted publickey for core from 10.0.0.1 port 53430 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:47.161954 sshd[3751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:47.165830 systemd-logind[1542]: New session 13 of user core.
Jan 13 21:28:47.175702 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 21:28:48.334633 sshd[3751]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:48.343676 systemd[1]: Started sshd@13-10.0.0.128:22-10.0.0.1:52988.service - OpenSSH per-connection server daemon (10.0.0.1:52988).
Jan 13 21:28:48.344204 systemd[1]: sshd@12-10.0.0.128:22-10.0.0.1:53430.service: Deactivated successfully.
Jan 13 21:28:48.348974 systemd-logind[1542]: Session 13 logged out. Waiting for processes to exit.
Jan 13 21:28:48.350195 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 21:28:48.351667 systemd-logind[1542]: Removed session 13.
Jan 13 21:28:48.380501 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 52988 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:48.382122 sshd[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:48.386371 systemd-logind[1542]: New session 14 of user core.
Jan 13 21:28:48.392562 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 21:28:48.589036 sshd[3771]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:48.596778 systemd[1]: Started sshd@14-10.0.0.128:22-10.0.0.1:53004.service - OpenSSH per-connection server daemon (10.0.0.1:53004).
Jan 13 21:28:48.597299 systemd[1]: sshd@13-10.0.0.128:22-10.0.0.1:52988.service: Deactivated successfully.
Jan 13 21:28:48.599746 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 21:28:48.601662 systemd-logind[1542]: Session 14 logged out. Waiting for processes to exit.
Jan 13 21:28:48.602753 systemd-logind[1542]: Removed session 14.
Jan 13 21:28:48.635097 sshd[3786]: Accepted publickey for core from 10.0.0.1 port 53004 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:48.637515 sshd[3786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:48.641954 systemd-logind[1542]: New session 15 of user core.
Jan 13 21:28:48.651576 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 21:28:48.755551 sshd[3786]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:48.760533 systemd[1]: sshd@14-10.0.0.128:22-10.0.0.1:53004.service: Deactivated successfully.
Jan 13 21:28:48.763017 systemd-logind[1542]: Session 15 logged out. Waiting for processes to exit.
Jan 13 21:28:48.763126 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 21:28:48.764109 systemd-logind[1542]: Removed session 15.
Jan 13 21:28:53.769523 systemd[1]: Started sshd@15-10.0.0.128:22-10.0.0.1:53014.service - OpenSSH per-connection server daemon (10.0.0.1:53014).
Jan 13 21:28:53.802743 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 53014 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:53.804446 sshd[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:53.808245 systemd-logind[1542]: New session 16 of user core.
Jan 13 21:28:53.817555 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 21:28:53.921786 sshd[3826]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:53.926086 systemd[1]: sshd@15-10.0.0.128:22-10.0.0.1:53014.service: Deactivated successfully.
Jan 13 21:28:53.928998 systemd-logind[1542]: Session 16 logged out. Waiting for processes to exit.
Jan 13 21:28:53.929087 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 21:28:53.929983 systemd-logind[1542]: Removed session 16.
Jan 13 21:28:58.935526 systemd[1]: Started sshd@16-10.0.0.128:22-10.0.0.1:51110.service - OpenSSH per-connection server daemon (10.0.0.1:51110).
Jan 13 21:28:58.968400 sshd[3865]: Accepted publickey for core from 10.0.0.1 port 51110 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:28:58.969876 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:28:58.973239 systemd-logind[1542]: New session 17 of user core.
Jan 13 21:28:58.979561 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 21:28:59.079045 sshd[3865]: pam_unix(sshd:session): session closed for user core
Jan 13 21:28:59.083241 systemd[1]: sshd@16-10.0.0.128:22-10.0.0.1:51110.service: Deactivated successfully.
Jan 13 21:28:59.085751 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 21:28:59.086440 systemd-logind[1542]: Session 17 logged out. Waiting for processes to exit.
Jan 13 21:28:59.087443 systemd-logind[1542]: Removed session 17.
Jan 13 21:29:04.097551 systemd[1]: Started sshd@17-10.0.0.128:22-10.0.0.1:51118.service - OpenSSH per-connection server daemon (10.0.0.1:51118).
Jan 13 21:29:04.133020 sshd[3901]: Accepted publickey for core from 10.0.0.1 port 51118 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:29:04.134852 sshd[3901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:04.138993 systemd-logind[1542]: New session 18 of user core.
Jan 13 21:29:04.148632 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 21:29:04.249155 sshd[3901]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:04.253298 systemd[1]: sshd@17-10.0.0.128:22-10.0.0.1:51118.service: Deactivated successfully.
Jan 13 21:29:04.255512 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 21:29:04.255620 systemd-logind[1542]: Session 18 logged out. Waiting for processes to exit.
Jan 13 21:29:04.256543 systemd-logind[1542]: Removed session 18.
Jan 13 21:29:09.259532 systemd[1]: Started sshd@18-10.0.0.128:22-10.0.0.1:52710.service - OpenSSH per-connection server daemon (10.0.0.1:52710).
Jan 13 21:29:09.292476 sshd[3939]: Accepted publickey for core from 10.0.0.1 port 52710 ssh2: RSA SHA256:zXffdl8b9kLXcQDMnpblwTEmig+xqtREHhbtSBVXgEc
Jan 13 21:29:09.293862 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 21:29:09.297601 systemd-logind[1542]: New session 19 of user core.
Jan 13 21:29:09.311563 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 21:29:09.418608 sshd[3939]: pam_unix(sshd:session): session closed for user core
Jan 13 21:29:09.422227 systemd[1]: sshd@18-10.0.0.128:22-10.0.0.1:52710.service: Deactivated successfully.
Jan 13 21:29:09.424405 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 21:29:09.424476 systemd-logind[1542]: Session 19 logged out. Waiting for processes to exit.
Jan 13 21:29:09.425660 systemd-logind[1542]: Removed session 19.