Jan 30 13:47:07.896765 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 30 13:47:07.896794 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:07.896809 kernel: BIOS-provided physical RAM map: Jan 30 13:47:07.896817 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 30 13:47:07.896825 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 30 13:47:07.896833 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 30 13:47:07.896843 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jan 30 13:47:07.896852 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jan 30 13:47:07.896860 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jan 30 13:47:07.896871 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jan 30 13:47:07.896880 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 30 13:47:07.896888 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 30 13:47:07.896897 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jan 30 13:47:07.896905 kernel: NX (Execute Disable) protection: active Jan 30 13:47:07.896916 kernel: APIC: Static calls initialized Jan 30 13:47:07.896928 kernel: SMBIOS 2.8 present. 
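
The BIOS-e820 map above is the firmware's physical-memory inventory: the kernel treats only the "usable" ranges as RAM and leaves everything marked "reserved" alone. A minimal sketch of tallying it, using two entries copied verbatim from this log (nothing here beyond what the log itself shows):

import re

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

entries = [  # the two "usable" ranges from the map above
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable",
]

usable = 0
for line in entries:
    m = E820_RE.search(line)
    if m and m.group(3) == "usable":
        start, end = int(m.group(1), 16), int(m.group(2), 16)
        usable += end - start + 1  # ranges are inclusive

# ~2511.5 MiB, consistent with the "2571752K" total reported further down
print(f"{usable / 2**20:.1f} MiB usable")
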
Jan 30 13:47:07.896937 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jan 30 13:47:07.896946 kernel: Hypervisor detected: KVM Jan 30 13:47:07.896955 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 30 13:47:07.896964 kernel: kvm-clock: using sched offset of 2219997098 cycles Jan 30 13:47:07.896974 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 30 13:47:07.896998 kernel: tsc: Detected 2794.748 MHz processor Jan 30 13:47:07.897008 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 30 13:47:07.897018 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 30 13:47:07.897027 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jan 30 13:47:07.897041 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 30 13:47:07.897050 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 30 13:47:07.897060 kernel: Using GB pages for direct mapping Jan 30 13:47:07.897069 kernel: ACPI: Early table checksum verification disabled Jan 30 13:47:07.897079 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jan 30 13:47:07.897089 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897098 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897108 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897120 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jan 30 13:47:07.897130 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897140 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897149 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897159 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 30 13:47:07.897168 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Jan 30 13:47:07.897178 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Jan 30 13:47:07.897194 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jan 30 13:47:07.897206 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Jan 30 13:47:07.897216 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Jan 30 13:47:07.897226 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Jan 30 13:47:07.897236 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Jan 30 13:47:07.897246 kernel: No NUMA configuration found Jan 30 13:47:07.897256 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jan 30 13:47:07.897266 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jan 30 13:47:07.897278 kernel: Zone ranges: Jan 30 13:47:07.897288 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:47:07.897298 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jan 30 13:47:07.897308 kernel: Normal empty Jan 30 13:47:07.897318 kernel: Movable zone start for each node Jan 30 13:47:07.897327 kernel: Early memory node ranges Jan 30 13:47:07.897337 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 30 13:47:07.897347 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jan 30 13:47:07.897357 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Jan 30 13:47:07.897369 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:47:07.897379 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 30 13:47:07.897389 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jan 30 13:47:07.897399 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 30 13:47:07.897409 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 30 13:47:07.897418 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:47:07.897428 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 30 13:47:07.897438 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 30 13:47:07.897448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:47:07.897461 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 30 13:47:07.897470 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 30 13:47:07.897480 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:47:07.897490 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 30 13:47:07.897500 kernel: TSC deadline timer available Jan 30 13:47:07.897510 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jan 30 13:47:07.897519 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 30 13:47:07.897529 kernel: kvm-guest: KVM setup pv remote TLB flush Jan 30 13:47:07.897539 kernel: kvm-guest: setup PV sched yield Jan 30 13:47:07.897548 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jan 30 13:47:07.897561 kernel: Booting paravirtualized kernel on KVM Jan 30 13:47:07.897572 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:47:07.897582 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jan 30 13:47:07.897591 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Jan 30 13:47:07.897601 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Jan 30 13:47:07.897611 kernel: pcpu-alloc: [0] 0 1 2 3 Jan 30 13:47:07.897621 kernel: kvm-guest: PV spinlocks enabled Jan 30 13:47:07.897630 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:47:07.897642 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:07.897655 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:47:07.897665 kernel: random: crng init done Jan 30 13:47:07.897674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:47:07.897684 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 30 13:47:07.897694 kernel: Fallback order for Node 0: 0 Jan 30 13:47:07.897704 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jan 30 13:47:07.897714 kernel: Policy zone: DMA32 Jan 30 13:47:07.897723 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:47:07.897737 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved) Jan 30 13:47:07.897747 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 30 13:47:07.897765 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:47:07.897775 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:47:07.897785 kernel: Dynamic Preempt: voluntary Jan 30 13:47:07.897794 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:47:07.897805 kernel: rcu: RCU event tracing is enabled. Jan 30 13:47:07.897815 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 30 13:47:07.897825 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:47:07.897838 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:47:07.897848 kernel: Tracing variant of Tasks RCU enabled. Jan 30 13:47:07.897858 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:47:07.897868 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 30 13:47:07.897877 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jan 30 13:47:07.897887 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:47:07.897897 kernel: Console: colour VGA+ 80x25 Jan 30 13:47:07.897906 kernel: printk: console [ttyS0] enabled Jan 30 13:47:07.897916 kernel: ACPI: Core revision 20230628 Jan 30 13:47:07.897929 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 30 13:47:07.897939 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:47:07.897948 kernel: x2apic enabled Jan 30 13:47:07.897958 kernel: APIC: Switched APIC routing to: physical x2apic Jan 30 13:47:07.897968 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jan 30 13:47:07.897978 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jan 30 13:47:07.898002 kernel: kvm-guest: setup PV IPIs Jan 30 13:47:07.898025 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 30 13:47:07.898035 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jan 30 13:47:07.898045 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jan 30 13:47:07.898056 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jan 30 13:47:07.898066 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jan 30 13:47:07.898079 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jan 30 13:47:07.898089 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:47:07.898100 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:47:07.898110 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:47:07.898121 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:47:07.898134 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jan 30 13:47:07.898144 kernel: RETBleed: Mitigation: untrained return thunk Jan 30 13:47:07.898154 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:47:07.898165 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:47:07.898175 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jan 30 13:47:07.898186 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jan 30 13:47:07.898197 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jan 30 13:47:07.898207 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:47:07.898220 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:47:07.898230 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:47:07.898241 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:47:07.898251 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:47:07.898261 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:47:07.898271 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:47:07.898282 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:47:07.898292 kernel: landlock: Up and running. Jan 30 13:47:07.898302 kernel: SELinux: Initializing. Jan 30 13:47:07.898315 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:47:07.898325 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 30 13:47:07.898336 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jan 30 13:47:07.898346 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:47:07.898357 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:47:07.898367 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 30 13:47:07.898377 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jan 30 13:47:07.898388 kernel: ... version: 0 Jan 30 13:47:07.898398 kernel: ... bit width: 48 Jan 30 13:47:07.898411 kernel: ... generic registers: 6 Jan 30 13:47:07.898421 kernel: ... value mask: 0000ffffffffffff Jan 30 13:47:07.898431 kernel: ... max period: 00007fffffffffff Jan 30 13:47:07.898442 kernel: ... fixed-purpose events: 0 Jan 30 13:47:07.898452 kernel: ... 
event mask: 000000000000003f Jan 30 13:47:07.898462 kernel: signal: max sigframe size: 1776 Jan 30 13:47:07.898472 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:47:07.898482 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:47:07.898493 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:47:07.898506 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:47:07.898516 kernel: .... node #0, CPUs: #1 #2 #3 Jan 30 13:47:07.898526 kernel: smp: Brought up 1 node, 4 CPUs Jan 30 13:47:07.898536 kernel: smpboot: Max logical packages: 1 Jan 30 13:47:07.898547 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jan 30 13:47:07.898557 kernel: devtmpfs: initialized Jan 30 13:47:07.898568 kernel: x86/mm: Memory block size: 128MB Jan 30 13:47:07.898578 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:47:07.898589 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 30 13:47:07.898602 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:47:07.898613 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:47:07.898623 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:47:07.898634 kernel: audit: type=2000 audit(1738244828.101:1): state=initialized audit_enabled=0 res=1 Jan 30 13:47:07.898644 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:47:07.898655 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:47:07.898665 kernel: cpuidle: using governor menu Jan 30 13:47:07.898676 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:47:07.898686 kernel: dca service started, version 1.12.1 Jan 30 13:47:07.898699 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jan 30 13:47:07.898710 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jan 30 13:47:07.898720 kernel: PCI: Using configuration type 1 for base access Jan 30 13:47:07.898730 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
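
The delay-loop calibration above was skipped and preset from the TSC: lpj=2794748 equals the TSC frequency in kHz, which only holds when the kernel ticks at HZ=1000, so that value is an inference from the numbers, not something the log states. A worked check of both BogoMIPS figures:

lpj = 2794748      # loops-per-jiffy preset from the log above
HZ = 1000          # inferred: lpj == tsc_khz only holds for HZ=1000
per_cpu = lpj * HZ / 500_000   # BogoMIPS = 2 * loops-per-second / 1e6
print(f"{per_cpu:.2f} BogoMIPS")     # 5589.50 (the kernel truncates: 5589.49)
print(f"{4 * per_cpu:.2f} total")    # 22357.98, as in the smpboot line above
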
Jan 30 13:47:07.898741 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:47:07.898751 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:47:07.898770 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:47:07.898781 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:47:07.898791 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:47:07.898805 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:47:07.898815 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:47:07.898825 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:47:07.898835 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:47:07.898846 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:47:07.898856 kernel: ACPI: Interpreter enabled Jan 30 13:47:07.898866 kernel: ACPI: PM: (supports S0 S3 S5) Jan 30 13:47:07.898876 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:47:07.898887 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:47:07.898900 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:47:07.898910 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jan 30 13:47:07.898921 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 30 13:47:07.899151 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:47:07.899320 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jan 30 13:47:07.899475 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jan 30 13:47:07.899491 kernel: PCI host bridge to bus 0000:00 Jan 30 13:47:07.899653 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:47:07.899807 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jan 30 13:47:07.899948 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:47:07.900112 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jan 30 13:47:07.900253 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:47:07.900394 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jan 30 13:47:07.900535 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 30 13:47:07.900718 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jan 30 13:47:07.900895 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jan 30 13:47:07.901081 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jan 30 13:47:07.901236 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jan 30 13:47:07.901387 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jan 30 13:47:07.901540 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:47:07.901706 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jan 30 13:47:07.901879 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jan 30 13:47:07.902053 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jan 30 13:47:07.902208 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jan 30 13:47:07.902379 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jan 30 13:47:07.902534 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jan 30 13:47:07.902688 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jan 30 
13:47:07.902859 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jan 30 13:47:07.903041 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 30 13:47:07.903201 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jan 30 13:47:07.903357 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jan 30 13:47:07.903511 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jan 30 13:47:07.903664 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jan 30 13:47:07.903838 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jan 30 13:47:07.904021 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jan 30 13:47:07.904194 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jan 30 13:47:07.904352 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jan 30 13:47:07.904508 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jan 30 13:47:07.904672 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jan 30 13:47:07.904838 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jan 30 13:47:07.904854 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 30 13:47:07.904870 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 30 13:47:07.904880 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:47:07.904890 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 30 13:47:07.904901 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jan 30 13:47:07.904911 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jan 30 13:47:07.904921 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jan 30 13:47:07.904931 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jan 30 13:47:07.904942 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jan 30 13:47:07.904952 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jan 30 13:47:07.904965 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jan 30 13:47:07.904976 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jan 30 13:47:07.905002 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jan 30 13:47:07.905013 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jan 30 13:47:07.905023 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jan 30 13:47:07.905033 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jan 30 13:47:07.905044 kernel: iommu: Default domain type: Translated Jan 30 13:47:07.905054 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:47:07.905064 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:47:07.905078 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:47:07.905089 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 30 13:47:07.905099 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jan 30 13:47:07.905257 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jan 30 13:47:07.905411 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jan 30 13:47:07.905565 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:47:07.905580 kernel: vgaarb: loaded Jan 30 13:47:07.905591 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 30 13:47:07.905605 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 30 13:47:07.905616 kernel: clocksource: Switched to clocksource kvm-clock Jan 30 13:47:07.905626 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 
13:47:07.905636 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:47:07.905647 kernel: pnp: PnP ACPI init Jan 30 13:47:07.905819 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jan 30 13:47:07.905835 kernel: pnp: PnP ACPI: found 6 devices Jan 30 13:47:07.905846 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:47:07.905861 kernel: NET: Registered PF_INET protocol family Jan 30 13:47:07.905872 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:47:07.905882 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 30 13:47:07.905893 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:47:07.905903 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:47:07.905914 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 30 13:47:07.905924 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 30 13:47:07.905935 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:47:07.905945 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 30 13:47:07.905959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:47:07.905970 kernel: NET: Registered PF_XDP protocol family Jan 30 13:47:07.906126 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 30 13:47:07.906267 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 30 13:47:07.906408 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 30 13:47:07.906549 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jan 30 13:47:07.906689 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jan 30 13:47:07.906841 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jan 30 13:47:07.906861 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:47:07.906872 kernel: Initialise system trusted keyrings Jan 30 13:47:07.906882 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 30 13:47:07.906893 kernel: Key type asymmetric registered Jan 30 13:47:07.906903 kernel: Asymmetric key parser 'x509' registered Jan 30 13:47:07.906914 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:47:07.906925 kernel: io scheduler mq-deadline registered Jan 30 13:47:07.906935 kernel: io scheduler kyber registered Jan 30 13:47:07.906946 kernel: io scheduler bfq registered Jan 30 13:47:07.906959 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:47:07.906970 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jan 30 13:47:07.906981 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jan 30 13:47:07.907014 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jan 30 13:47:07.907025 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:47:07.907036 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:47:07.907047 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 30 13:47:07.907057 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:47:07.907068 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:47:07.907228 kernel: rtc_cmos 00:04: RTC can wake from S4 Jan 30 13:47:07.907250 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:47:07.907393 kernel: 
rtc_cmos 00:04: registered as rtc0 Jan 30 13:47:07.907541 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:47:07 UTC (1738244827) Jan 30 13:47:07.907686 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jan 30 13:47:07.907700 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jan 30 13:47:07.907711 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:47:07.907721 kernel: Segment Routing with IPv6 Jan 30 13:47:07.907736 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:47:07.907746 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:47:07.907767 kernel: Key type dns_resolver registered Jan 30 13:47:07.907777 kernel: IPI shorthand broadcast: enabled Jan 30 13:47:07.907788 kernel: sched_clock: Marking stable (596008253, 101015853)->(755887428, -58863322) Jan 30 13:47:07.907798 kernel: registered taskstats version 1 Jan 30 13:47:07.907809 kernel: Loading compiled-in X.509 certificates Jan 30 13:47:07.907819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:47:07.907830 kernel: Key type .fscrypt registered Jan 30 13:47:07.907843 kernel: Key type fscrypt-provisioning registered Jan 30 13:47:07.907854 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:47:07.907865 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:47:07.907875 kernel: ima: No architecture policies found Jan 30 13:47:07.907886 kernel: clk: Disabling unused clocks Jan 30 13:47:07.907896 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:47:07.907907 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:47:07.907917 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:47:07.907928 kernel: Run /init as init process Jan 30 13:47:07.907941 kernel: with arguments: Jan 30 13:47:07.907952 kernel: /init Jan 30 13:47:07.907962 kernel: with environment: Jan 30 13:47:07.907972 kernel: HOME=/ Jan 30 13:47:07.908055 kernel: TERM=linux Jan 30 13:47:07.908067 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:47:07.908080 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:47:07.908093 systemd[1]: Detected virtualization kvm. Jan 30 13:47:07.908109 systemd[1]: Detected architecture x86-64. Jan 30 13:47:07.908120 systemd[1]: Running in initrd. Jan 30 13:47:07.908131 systemd[1]: No hostname configured, using default hostname. Jan 30 13:47:07.908142 systemd[1]: Hostname set to . Jan 30 13:47:07.908154 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:47:07.908165 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:47:07.908176 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:07.908187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:07.908202 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:47:07.908228 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
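
The rtc_cmos line above pairs an ISO-8601 timestamp with its Unix epoch value; a quick check that 1738244827 really is 2025-01-30T13:47:07 UTC:

from datetime import datetime, timezone

print(datetime.fromtimestamp(1738244827, tz=timezone.utc).isoformat())
# -> 2025-01-30T13:47:07+00:00, matching the rtc_cmos message exactly
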
Jan 30 13:47:07.908242 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:47:07.908254 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:47:07.908268 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:47:07.908282 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:47:07.908294 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:07.908305 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:07.908317 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:47:07.908328 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:47:07.908340 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:47:07.908351 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:47:07.908362 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:47:07.908377 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:47:07.908389 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:47:07.908400 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:47:07.908412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:07.908424 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:07.908435 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:47:07.908447 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:47:07.908458 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:47:07.908470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:47:07.908484 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:47:07.908496 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:47:07.908507 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:47:07.908519 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:47:07.908530 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:07.908542 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:47:07.908554 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:07.908568 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:47:07.908607 systemd-journald[191]: Collecting audit messages is disabled. Jan 30 13:47:07.908637 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:47:07.908652 systemd-journald[191]: Journal started Jan 30 13:47:07.908678 systemd-journald[191]: Runtime Journal (/run/log/journal/e161eac3fd4d4fd3bdd1cd1a55fb6e3d) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:47:07.891501 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:47:07.928136 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:47:07.928850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:07.932970 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 30 13:47:07.933144 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:47:07.936605 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:47:07.937537 kernel: Bridge firewalling registered Jan 30 13:47:07.938061 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:07.950119 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:07.953355 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:07.956027 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:47:07.959204 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:47:07.970813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:07.972213 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:07.973793 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:07.976071 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:07.992180 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:47:07.994872 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:47:08.004007 dracut-cmdline[229]: dracut-dracut-053 Jan 30 13:47:08.006761 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:08.028309 systemd-resolved[232]: Positive Trust Anchors: Jan 30 13:47:08.028324 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:47:08.028358 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:47:08.031209 systemd-resolved[232]: Defaulting to hostname 'linux'. Jan 30 13:47:08.032548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:47:08.039307 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:08.088030 kernel: SCSI subsystem initialized Jan 30 13:47:08.098012 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:47:08.108013 kernel: iscsi: registered transport (tcp) Jan 30 13:47:08.129026 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:47:08.129096 kernel: QLogic iSCSI HBA Driver Jan 30 13:47:08.182074 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
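
dracut echoes the assembled kernel command line above; initrd hooks consume it as whitespace-separated key=value tokens, where bare words act as boolean flags and a repeated key such as rootflags=rw simply overwrites itself. A small sketch of that parsing, with the command line shortened from the log:

cmdline = ("rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro "
           "mount.usr=/dev/mapper/usr root=LABEL=ROOT console=ttyS0,115200 "
           "verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681")

params = {}
for tok in cmdline.split():
    key, sep, val = tok.partition("=")
    params[key] = val if sep else True   # bare tokens act as boolean flags

print(params["root"])                    # LABEL=ROOT
print(params["verity.usrhash"][:16] + "...")
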
Jan 30 13:47:08.193133 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:47:08.217620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:47:08.217663 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:47:08.217675 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:47:08.258004 kernel: raid6: avx2x4 gen() 30404 MB/s Jan 30 13:47:08.275004 kernel: raid6: avx2x2 gen() 30527 MB/s Jan 30 13:47:08.292069 kernel: raid6: avx2x1 gen() 25918 MB/s Jan 30 13:47:08.292090 kernel: raid6: using algorithm avx2x2 gen() 30527 MB/s Jan 30 13:47:08.310090 kernel: raid6: .... xor() 19907 MB/s, rmw enabled Jan 30 13:47:08.310119 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:47:08.330005 kernel: xor: automatically using best checksumming function avx Jan 30 13:47:08.478018 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:47:08.490878 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:47:08.503215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:08.515325 systemd-udevd[416]: Using default interface naming scheme 'v255'. Jan 30 13:47:08.519879 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:08.523050 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:47:08.539210 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation Jan 30 13:47:08.572353 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:47:08.580118 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:47:08.642585 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:08.652333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:47:08.667816 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:08.673011 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:47:08.696119 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:47:08.696263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:47:08.696274 kernel: GPT:9289727 != 19775487 Jan 30 13:47:08.696285 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:47:08.696301 kernel: GPT:9289727 != 19775487 Jan 30 13:47:08.696310 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:47:08.696320 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:47:08.696331 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:47:08.673253 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:08.676118 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:08.677484 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:47:08.691959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:47:08.706477 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:08.711629 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:47:08.711686 kernel: AES CTR mode by8 optimization enabled Jan 30 13:47:08.711358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
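
The GPT warnings above are the classic grown-disk signature: the primary header records where the backup header was written (LBA 9289727, the end of the original image), but the virtual disk now has 19775488 sectors, so the backup is no longer at the last LBA. The check, in miniature:

total_sectors = 19775488              # "[vda] 19775488 512-byte logical blocks"
recorded_alt_lba = 9289727            # backup-header LBA stored in the primary
expected_alt_lba = total_sectors - 1  # a backup header belongs on the last LBA

if recorded_alt_lba != expected_alt_lba:
    print(f"GPT:{recorded_alt_lba} != {expected_alt_lba}")  # the logged warning
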
Jan 30 13:47:08.711470 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:08.713116 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:08.714559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:47:08.714787 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:08.727843 kernel: libata version 3.00 loaded. Jan 30 13:47:08.719688 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:08.730307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:08.734013 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (481) Jan 30 13:47:08.736046 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (472) Jan 30 13:47:08.739006 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:47:08.753175 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:47:08.753197 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:47:08.753361 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:47:08.753504 kernel: scsi host0: ahci Jan 30 13:47:08.753813 kernel: scsi host1: ahci Jan 30 13:47:08.753963 kernel: scsi host2: ahci Jan 30 13:47:08.754261 kernel: scsi host3: ahci Jan 30 13:47:08.754414 kernel: scsi host4: ahci Jan 30 13:47:08.754569 kernel: scsi host5: ahci Jan 30 13:47:08.754714 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 30 13:47:08.754725 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 30 13:47:08.754745 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 30 13:47:08.754756 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 30 13:47:08.754769 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 30 13:47:08.754779 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 30 13:47:08.754528 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 30 13:47:08.794689 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:47:08.795033 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:08.802825 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:47:08.808609 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:47:08.809920 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:47:08.825202 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:47:08.827409 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:08.835332 disk-uuid[559]: Primary Header is updated. Jan 30 13:47:08.835332 disk-uuid[559]: Secondary Entries is updated. Jan 30 13:47:08.835332 disk-uuid[559]: Secondary Header is updated. 
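
The device units above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device and friends) come from systemd's path-to-unit-name encoding: "/" maps to "-" and bytes outside a small safe set, including "-" itself, become \xNN escapes. A simplified sketch of that encoding (it skips systemd's extra rule for a leading dot):

def device_unit(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                   # path separators become "-"
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)                    # safe characters pass through
        else:
            out.append(f"\\x{ord(ch):02x}")   # everything else, incl. "-"
    return "".join(out) + ".device"

print(device_unit("/dev/disk/by-label/EFI-SYSTEM"))
# dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the log above
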
Jan 30 13:47:08.839020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:47:08.843014 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:47:08.846866 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:09.061043 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:47:09.061135 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:47:09.061146 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:47:09.061157 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:47:09.062010 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:47:09.063009 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:47:09.064013 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:47:09.065130 kernel: ata3.00: applying bridge limits Jan 30 13:47:09.065145 kernel: ata3.00: configured for UDMA/100 Jan 30 13:47:09.066010 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:47:09.127017 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:47:09.140649 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:47:09.140664 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:47:09.844594 disk-uuid[561]: The operation has completed successfully. Jan 30 13:47:09.845759 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:47:09.873075 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:47:09.873217 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:47:09.900153 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:47:09.905579 sh[596]: Success Jan 30 13:47:09.918016 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:47:09.949725 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:47:09.963517 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:47:09.966026 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:47:10.000210 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:47:10.000247 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:10.000259 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:47:10.002289 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:47:10.002313 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:47:10.007348 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:47:10.008292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:47:10.018137 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:47:10.019717 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:47:10.028586 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:10.028615 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:10.028626 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:47:10.031084 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:47:10.040208 systemd[1]: mnt-oem.mount: Deactivated successfully. 
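
verity-setup above assembles /dev/mapper/usr so that every read from the /usr partition is checked against a hash tree whose root must equal the verity.usrhash= value on the kernel command line. The following is only a toy illustration of the root-hash idea, assuming 4 KiB blocks and omitting the salt and superblock that real dm-verity does use:

import hashlib

def toy_verity_root(data: bytes, block: int = 4096) -> str:
    # hash every data block, then fold pairs of hashes until one root remains
    level = [hashlib.sha256(data[i:i + block]).digest()
             for i in range(0, len(data), block)]
    while len(level) > 1:
        level = [hashlib.sha256(a + b).digest()
                 for a, b in zip(level[::2], level[1::2] + [b""])]
    return level[0].hex()

print(toy_verity_root(b"\0" * 16384))  # root over four all-zero 4 KiB blocks
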
Jan 30 13:47:10.041872 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:10.051378 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:47:10.057159 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:47:10.110493 ignition[690]: Ignition 2.19.0 Jan 30 13:47:10.110506 ignition[690]: Stage: fetch-offline Jan 30 13:47:10.110545 ignition[690]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:10.110555 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:10.110650 ignition[690]: parsed url from cmdline: "" Jan 30 13:47:10.110655 ignition[690]: no config URL provided Jan 30 13:47:10.110661 ignition[690]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:47:10.110670 ignition[690]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:47:10.110707 ignition[690]: op(1): [started] loading QEMU firmware config module Jan 30 13:47:10.110712 ignition[690]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:47:10.117214 ignition[690]: op(1): [finished] loading QEMU firmware config module Jan 30 13:47:10.133866 ignition[690]: parsing config with SHA512: 4b56b28357f6777429ace5f31d388b2d783af2460852aba0b14d152a7d2103ad30006ce1918ff2e1b7b66924025e2a8498f23b05d4bf7c0482c754e41fd647ac Jan 30 13:47:10.137021 unknown[690]: fetched base config from "system" Jan 30 13:47:10.137031 unknown[690]: fetched user config from "qemu" Jan 30 13:47:10.137349 ignition[690]: fetch-offline: fetch-offline passed Jan 30 13:47:10.139498 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:47:10.137407 ignition[690]: Ignition finished successfully Jan 30 13:47:10.151325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:47:10.175258 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:47:10.195628 systemd-networkd[785]: lo: Link UP Jan 30 13:47:10.195640 systemd-networkd[785]: lo: Gained carrier Jan 30 13:47:10.197481 systemd-networkd[785]: Enumeration completed Jan 30 13:47:10.197550 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:47:10.197865 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:10.197868 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:47:10.198800 systemd-networkd[785]: eth0: Link UP Jan 30 13:47:10.198805 systemd-networkd[785]: eth0: Gained carrier Jan 30 13:47:10.198812 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:10.199476 systemd[1]: Reached target network.target - Network. Jan 30 13:47:10.201481 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:47:10.208109 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
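
Ignition above probes its usual config sources, falls back to the QEMU fw_cfg module, and logs a SHA-512 of whatever config it ends up parsing. Reproducing such a digest takes one call; the payload below is a stand-in, not the config this VM actually received:

import hashlib

config = b'{"ignition": {"version": "3.4.0"}}'   # stand-in payload only
print("parsing config with SHA512: " + hashlib.sha512(config).hexdigest())
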
Jan 30 13:47:10.210024 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:47:10.219253 ignition[787]: Ignition 2.19.0 Jan 30 13:47:10.219264 ignition[787]: Stage: kargs Jan 30 13:47:10.219408 ignition[787]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:10.219419 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:10.220155 ignition[787]: kargs: kargs passed Jan 30 13:47:10.223812 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:47:10.220198 ignition[787]: Ignition finished successfully Jan 30 13:47:10.234111 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:47:10.245555 ignition[797]: Ignition 2.19.0 Jan 30 13:47:10.245566 ignition[797]: Stage: disks Jan 30 13:47:10.245757 ignition[797]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:10.245769 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:10.248543 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:47:10.246592 ignition[797]: disks: disks passed Jan 30 13:47:10.250954 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:47:10.246635 ignition[797]: Ignition finished successfully Jan 30 13:47:10.253196 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:47:10.254510 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:47:10.256192 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:47:10.257228 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:47:10.268118 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:47:10.283194 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:47:10.289533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:47:10.299063 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:47:10.382019 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:47:10.382679 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:47:10.383297 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:47:10.408059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:47:10.409774 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:47:10.411531 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:47:10.411567 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:47:10.422524 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (815) Jan 30 13:47:10.422548 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:10.422559 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:10.422570 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:47:10.411586 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
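
The fsck summary above ("clean, 14/553520 files, 52654/553472 blocks") reads as utilization: an almost-empty, freshly provisioned ROOT filesystem. The arithmetic, for the record:

files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472
print(f"inodes: {100 * files_used / files_total:.4f}% used")    # 0.0025%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # 9.5%
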
Jan 30 13:47:10.425616 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:47:10.417631 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:47:10.423310 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:47:10.427297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:47:10.459927 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:47:10.465425 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:47:10.469130 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:47:10.472702 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:47:10.553816 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:47:10.562144 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:47:10.565102 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:47:10.570014 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:10.590065 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:47:10.678113 ignition[932]: INFO : Ignition 2.19.0 Jan 30 13:47:10.678113 ignition[932]: INFO : Stage: mount Jan 30 13:47:10.680175 ignition[932]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:10.680175 ignition[932]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:10.680175 ignition[932]: INFO : mount: mount passed Jan 30 13:47:10.680175 ignition[932]: INFO : Ignition finished successfully Jan 30 13:47:10.684152 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:47:10.697131 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:47:10.999625 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:47:11.012269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:47:11.020962 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Jan 30 13:47:11.021044 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:47:11.021068 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:47:11.022479 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:47:11.025014 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:47:11.027332 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
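
The "cut: /sysroot/etc/passwd: No such file or directory" messages above appear to be harmless first-boot noise: initrd-setup-root extracts fields from the target's user databases with cut, and those files simply do not exist yet. A sketch of the equivalent of cut -d: -f1, producing the same error shape:

import pathlib

def cut_f1(path: str) -> list[str]:
    p = pathlib.Path(path)
    if not p.exists():
        print(f"cut: {path}: No such file or directory")
        return []
    return [line.split(":", 1)[0] for line in p.read_text().splitlines()]

print(cut_f1("/sysroot/etc/passwd"))  # [] on first boot; the file is not there yet
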
Jan 30 13:47:11.059697 ignition[957]: INFO : Ignition 2.19.0 Jan 30 13:47:11.059697 ignition[957]: INFO : Stage: files Jan 30 13:47:11.061684 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:11.061684 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:11.065074 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:47:11.066575 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:47:11.066575 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:47:11.072190 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:47:11.073959 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:47:11.076024 unknown[957]: wrote ssh authorized keys file for user: core Jan 30 13:47:11.077298 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:47:11.080129 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:47:11.082421 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 30 13:47:11.123567 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:47:11.202514 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 30 13:47:11.202514 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:47:11.206470 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 30 13:47:11.405343 systemd-networkd[785]: eth0: Gained IPv6LL Jan 30 13:47:11.563837 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:47:11.970196 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 30 13:47:11.970196 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 13:47:11.978465 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:47:12.001894 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:47:12.023053 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:47:12.023053 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:47:12.023053 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:12.023053 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:12.023053 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:12.023053 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:12.023053 ignition[957]: INFO : files: files passed Jan 30 13:47:12.023053 ignition[957]: INFO : Ignition finished successfully Jan 30 13:47:12.054187 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:47:12.066126 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:47:12.066946 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Jan 30 13:47:12.073081 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:47:12.073201 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:47:12.080146 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:47:12.084044 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:12.084044 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:12.087778 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:12.091170 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:12.093917 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:47:12.107113 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:47:12.140076 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:47:12.141199 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:47:12.143868 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:47:12.145930 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:47:12.148030 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:47:12.160206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:47:12.175849 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:12.198349 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:47:12.209881 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:12.212318 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:12.214926 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:47:12.216852 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:47:12.217959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:12.220839 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:47:12.223458 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:47:12.225732 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:47:12.228070 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:47:12.230428 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:47:12.232714 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:47:12.234968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:12.238055 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:47:12.240679 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:47:12.243280 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:47:12.245280 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:47:12.246335 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:12.248676 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 30 13:47:12.251133 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:12.254131 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:47:12.255229 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:12.257841 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:47:12.258870 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:12.261383 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:47:12.262486 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:47:12.264911 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:47:12.266703 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:47:12.272068 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:12.275055 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:47:12.276965 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:47:12.278876 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:47:12.279776 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:47:12.281769 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:47:12.282686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:47:12.284846 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:47:12.286226 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:12.289123 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:47:12.290138 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:47:12.310142 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:47:12.312056 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:47:12.312173 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:12.316248 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:47:12.318108 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:47:12.319195 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:12.321545 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:47:12.322696 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:47:12.325231 ignition[1011]: INFO : Ignition 2.19.0 Jan 30 13:47:12.325231 ignition[1011]: INFO : Stage: umount Jan 30 13:47:12.325231 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:12.325231 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:47:12.325231 ignition[1011]: INFO : umount: umount passed Jan 30 13:47:12.325231 ignition[1011]: INFO : Ignition finished successfully Jan 30 13:47:12.327825 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:47:12.328005 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:47:12.332932 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:47:12.333086 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:47:12.335581 systemd[1]: Stopped target network.target - Network. 
Jan 30 13:47:12.336868 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:47:12.336945 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:47:12.339255 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:47:12.339336 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:47:12.341428 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:47:12.341495 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:47:12.343608 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:47:12.343686 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:47:12.346076 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:47:12.348039 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:47:12.351190 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:47:12.357035 systemd-networkd[785]: eth0: DHCPv6 lease lost Jan 30 13:47:12.360027 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:47:12.360175 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:47:12.362592 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:47:12.362717 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:47:12.366683 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:47:12.366762 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:12.376060 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:47:12.377049 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:47:12.377102 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:47:12.379267 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:47:12.379317 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:12.381585 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:47:12.381642 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:12.381727 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:47:12.381770 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:12.382379 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:12.392165 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:47:12.392290 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:47:12.400754 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:47:12.400933 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:12.403237 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:47:12.403286 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:12.405364 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:47:12.405406 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:47:12.407417 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:47:12.407468 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 30 13:47:12.410021 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:47:12.410088 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:47:12.411678 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:47:12.411741 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:12.422151 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:47:12.423277 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:47:12.423338 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:12.425774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:47:12.425834 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:12.429227 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:47:12.429373 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:47:12.744999 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:47:12.745165 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:47:12.747329 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:47:12.749049 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:47:12.749112 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:47:12.764210 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:47:12.771902 systemd[1]: Switching root. Jan 30 13:47:12.799761 systemd-journald[191]: Journal stopped Jan 30 13:47:14.816963 systemd-journald[191]: Received SIGTERM from PID 1 (systemd). Jan 30 13:47:14.817144 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:47:14.817163 kernel: SELinux: policy capability open_perms=1 Jan 30 13:47:14.817174 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:47:14.817185 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:47:14.817201 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:47:14.817213 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:47:14.817224 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:47:14.817235 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:47:14.817249 kernel: audit: type=1403 audit(1738244833.801:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:47:14.817262 systemd[1]: Successfully loaded SELinux policy in 47.803ms. Jan 30 13:47:14.817287 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.380ms. Jan 30 13:47:14.817300 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:47:14.817312 systemd[1]: Detected virtualization kvm. Jan 30 13:47:14.817324 systemd[1]: Detected architecture x86-64. Jan 30 13:47:14.817337 systemd[1]: Detected first boot. Jan 30 13:47:14.817348 systemd[1]: Initializing machine ID from VM UUID. Jan 30 13:47:14.817360 zram_generator::config[1056]: No configuration found. Jan 30 13:47:14.817375 systemd[1]: Populated /etc with preset unit settings. 
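[Note] "Populated /etc with preset unit settings" is the first-boot step where systemd applies preset policy, which is how the enable/disable operations recorded by the Ignition files stage earlier take effect. Ignition expresses those operations as a preset file; a minimal sketch, with the conventional filename assumed rather than read from this system:

    # /etc/systemd/system-preset/20-ignition.preset (path assumed)
    enable prepare-helm.service
    disable coreos-metadata.service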
Jan 30 13:47:14.817387 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:47:14.817398 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:47:14.817410 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:47:14.817423 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:47:14.817435 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:47:14.817447 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:47:14.817458 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:47:14.817477 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:47:14.817490 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:47:14.817502 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:47:14.817513 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:47:14.817525 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:14.817537 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:14.817549 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:47:14.817561 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:47:14.817582 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:47:14.817597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:47:14.817610 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:47:14.817621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:14.817634 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:47:14.817645 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:47:14.817657 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:47:14.817669 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:47:14.817683 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:14.817695 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:47:14.817707 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:47:14.817719 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:47:14.817730 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:47:14.817742 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:47:14.817753 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:14.817765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:14.817777 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:47:14.817789 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:47:14.817803 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jan 30 13:47:14.817815 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:47:14.817827 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:47:14.817839 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:14.817850 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:47:14.817863 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:47:14.817874 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:47:14.817887 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:47:14.817901 systemd[1]: Reached target machines.target - Containers. Jan 30 13:47:14.817913 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:47:14.817925 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:47:14.817937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:47:14.817948 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:47:14.817960 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:47:14.817972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:47:14.817996 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:47:14.818008 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:47:14.818023 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:47:14.818035 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:47:14.818047 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:47:14.818058 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:47:14.818070 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:47:14.818081 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:47:14.818093 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:47:14.818104 kernel: loop: module loaded Jan 30 13:47:14.818116 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:47:14.818131 kernel: fuse: init (API version 7.39) Jan 30 13:47:14.818143 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:47:14.818172 systemd-journald[1119]: Collecting audit messages is disabled. Jan 30 13:47:14.818193 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:47:14.818206 systemd-journald[1119]: Journal started Jan 30 13:47:14.818227 systemd-journald[1119]: Runtime Journal (/run/log/journal/e161eac3fd4d4fd3bdd1cd1a55fb6e3d) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:47:14.532192 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:47:14.550638 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 30 13:47:14.551220 systemd[1]: systemd-journald.service: Deactivated successfully. 
Jan 30 13:47:14.821764 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:47:14.836025 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:47:14.836110 systemd[1]: Stopped verity-setup.service. Jan 30 13:47:14.839009 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:14.842024 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:47:14.843542 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:47:14.844927 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:47:14.846269 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:47:14.847398 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:47:14.862212 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:47:14.864005 kernel: ACPI: bus type drm_connector registered Jan 30 13:47:14.864130 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:47:14.865615 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:14.867296 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:47:14.867478 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:47:14.869196 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:47:14.869367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:47:14.870928 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:47:14.871118 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:47:14.872592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:47:14.872766 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:47:14.874308 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:47:14.874478 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:47:14.875888 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:47:14.876071 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:47:14.877583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:14.879014 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:47:14.880653 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:47:14.893973 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:47:14.905147 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:47:14.918642 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:47:14.919803 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:47:14.919840 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:47:14.921925 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:47:14.924360 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:47:14.927084 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 30 13:47:14.928578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:47:14.930749 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:47:14.935113 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:47:14.937197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:47:14.942154 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:47:14.943849 systemd-journald[1119]: Time spent on flushing to /var/log/journal/e161eac3fd4d4fd3bdd1cd1a55fb6e3d is 12.984ms for 945 entries. Jan 30 13:47:14.943849 systemd-journald[1119]: System Journal (/var/log/journal/e161eac3fd4d4fd3bdd1cd1a55fb6e3d) is 8.0M, max 195.6M, 187.6M free. Jan 30 13:47:15.220279 systemd-journald[1119]: Received client request to flush runtime journal. Jan 30 13:47:15.220335 kernel: loop0: detected capacity change from 0 to 218376 Jan 30 13:47:15.220362 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:47:15.220380 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:47:14.942277 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:47:14.944154 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:14.956054 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:47:14.971368 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:47:14.972778 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:47:14.976026 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:47:15.032813 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:15.095140 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:47:15.108624 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 30 13:47:15.121641 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:15.219105 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:47:15.220706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:47:15.227669 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:47:15.231005 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:47:15.231601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:47:15.233401 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:47:15.240182 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:47:15.248508 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:47:15.249251 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:47:15.277821 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Jan 30 13:47:15.278016 kernel: loop3: detected capacity change from 0 to 218376 Jan 30 13:47:15.285310 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:47:15.294008 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:47:15.305049 kernel: loop5: detected capacity change from 0 to 142488 Jan 30 13:47:15.312934 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 30 13:47:15.312959 systemd-tmpfiles[1192]: ACLs are not supported, ignoring. Jan 30 13:47:15.316540 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jan 30 13:47:15.317125 (sd-merge)[1190]: Merged extensions into '/usr'. Jan 30 13:47:15.321421 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:15.324227 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:47:15.324243 systemd[1]: Reloading... Jan 30 13:47:15.391073 zram_generator::config[1220]: No configuration found. Jan 30 13:47:15.424961 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:47:15.507377 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:15.565692 systemd[1]: Reloading finished in 240 ms. Jan 30 13:47:15.599147 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:47:15.600919 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:47:15.618160 systemd[1]: Starting ensure-sysext.service... Jan 30 13:47:15.620322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:47:15.629684 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:47:15.629701 systemd[1]: Reloading... Jan 30 13:47:15.649037 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:47:15.649911 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:47:15.651464 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:47:15.651886 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 30 13:47:15.652007 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Jan 30 13:47:15.657804 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:47:15.657946 systemd-tmpfiles[1258]: Skipping /boot Jan 30 13:47:15.675808 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:47:15.675953 systemd-tmpfiles[1258]: Skipping /boot Jan 30 13:47:15.691028 zram_generator::config[1285]: No configuration found. Jan 30 13:47:15.807718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:15.861064 systemd[1]: Reloading finished in 230 ms. Jan 30 13:47:15.877865 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
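[Note] The merge and reload above are systemd-sysext layering the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the loop0-loop5 capacity changes and the squashfs module load earlier are consistent with those images being attached. For a raw image such as kubernetes-v1.32.0-x86-64.raw to be mergeable, it must carry an extension-release file whose fields match the host; a minimal sketch with assumed values:

    # Inside the image: /usr/lib/extension-release.d/extension-release.kubernetes
    # ID must match the host's /etc/os-release; SYSEXT_LEVEL relaxes the
    # per-release VERSION_ID match. Values assumed, not read from the image.
    ID=flatcar
    SYSEXT_LEVEL=1.0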
Jan 30 13:47:15.891473 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:15.900639 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:47:15.903269 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:47:15.905639 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:47:15.909644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:47:15.914021 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:15.917570 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:47:15.922382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:15.922623 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:47:15.928069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:47:15.933314 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:47:15.936385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:47:15.937910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:47:15.940848 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:47:15.943039 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:15.944803 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:47:15.945347 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:47:15.954007 systemd-udevd[1332]: Using default interface naming scheme 'v255'. Jan 30 13:47:15.955821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:47:15.956053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:47:15.958214 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:47:15.960291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:47:15.960510 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:47:15.966496 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:47:15.972433 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:15.972732 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:47:15.973712 augenrules[1354]: No rules Jan 30 13:47:15.982480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:47:15.986237 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:47:15.989244 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:47:15.995353 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:47:15.997161 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 30 13:47:15.998653 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:47:15.999972 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:16.001136 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:16.003930 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:47:16.007735 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:47:16.009076 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:47:16.012492 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:47:16.016914 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:47:16.017215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:47:16.019139 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:47:16.019360 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:47:16.022910 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:47:16.024018 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:47:16.034566 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:47:16.038458 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:47:16.041817 systemd[1]: Finished ensure-sysext.service. Jan 30 13:47:16.057351 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:47:16.078186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1391) Jan 30 13:47:16.073187 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:47:16.075023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:47:16.075109 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:47:16.086217 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:47:16.087961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:47:16.104601 systemd-resolved[1328]: Positive Trust Anchors: Jan 30 13:47:16.105042 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:47:16.105088 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:47:16.109820 systemd-resolved[1328]: Defaulting to hostname 'linux'. Jan 30 13:47:16.111949 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 30 13:47:16.113544 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:16.131973 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:47:16.142176 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:47:16.143218 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:47:16.152625 systemd-networkd[1397]: lo: Link UP Jan 30 13:47:16.152635 systemd-networkd[1397]: lo: Gained carrier Jan 30 13:47:16.158523 systemd-networkd[1397]: Enumeration completed Jan 30 13:47:16.158844 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:47:16.159092 systemd[1]: Reached target network.target - Network. Jan 30 13:47:16.161098 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:16.161103 systemd-networkd[1397]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:47:16.162823 systemd-networkd[1397]: eth0: Link UP Jan 30 13:47:16.162900 systemd-networkd[1397]: eth0: Gained carrier Jan 30 13:47:16.162966 systemd-networkd[1397]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:16.163045 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:47:16.165170 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:47:16.167189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:47:16.170373 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:47:16.175130 systemd-networkd[1397]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:47:16.177160 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:47:16.177349 systemd-timesyncd[1400]: Network configuration changed, trying to establish connection. Jan 30 13:47:16.179968 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 30 13:47:16.180301 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jan 30 13:47:16.180476 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 30 13:47:17.412516 systemd-timesyncd[1400]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jan 30 13:47:17.412604 systemd-resolved[1328]: Clock change detected. Flushing caches. Jan 30 13:47:17.412836 systemd-timesyncd[1400]: Initial clock synchronization to Thu 2025-01-30 13:47:17.412102 UTC. Jan 30 13:47:17.429571 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:47:17.440643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:17.517567 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:47:17.529559 kernel: kvm_amd: TSC scaling supported Jan 30 13:47:17.529597 kernel: kvm_amd: Nested Virtualization enabled Jan 30 13:47:17.529623 kernel: kvm_amd: Nested Paging enabled Jan 30 13:47:17.529635 kernel: kvm_amd: LBR virtualization supported Jan 30 13:47:17.529647 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jan 30 13:47:17.529659 kernel: kvm_amd: Virtual GIF supported Jan 30 13:47:17.548615 kernel: EDAC MC: Ver: 3.0.0 Jan 30 13:47:17.558751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
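[Note] eth0 above is matched by /usr/lib/systemd/network/zz-default.network, and the "potentially unpredictable interface name" message means the match is name-based rather than keyed to a stable hardware property. The file Flatcar ships carries more settings than this; a minimal sketch that would reproduce the observed behavior (DHCPv4 plus an IPv6 link-local address on any interface), offered only as an assumption:

    [Match]
    # Name-based matching is what triggers the "potentially unpredictable
    # interface name" warning logged above.
    Name=*

    [Network]
    DHCP=yes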
Jan 30 13:47:17.589761 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:47:17.601810 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:47:17.609832 lvm[1423]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:47:17.643150 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:47:17.644803 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:17.646014 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:47:17.647259 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:47:17.648770 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:47:17.650366 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:47:17.651711 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:47:17.653005 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:47:17.654330 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:47:17.654362 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:47:17.655256 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:47:17.656736 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:47:17.659651 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:47:17.673709 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:47:17.675986 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:47:17.677518 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:47:17.678692 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:47:17.679643 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:47:17.680588 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:47:17.680615 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:47:17.681483 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:47:17.683481 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:47:17.687862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:47:17.691114 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:47:17.692170 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:47:17.693651 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:47:17.694092 jq[1430]: false Jan 30 13:47:17.695701 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:47:17.699665 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jan 30 13:47:17.710170 dbus-daemon[1429]: [system] SELinux support is enabled Jan 30 13:47:17.713540 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:47:17.715310 extend-filesystems[1431]: Found loop3 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found loop4 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found loop5 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found sr0 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda1 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda2 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda3 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found usr Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda4 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda6 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda7 Jan 30 13:47:17.716292 extend-filesystems[1431]: Found vda9 Jan 30 13:47:17.716292 extend-filesystems[1431]: Checking size of /dev/vda9 Jan 30 13:47:17.731184 extend-filesystems[1431]: Resized partition /dev/vda9 Jan 30 13:47:17.733394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1391) Jan 30 13:47:17.733423 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jan 30 13:47:17.717827 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:47:17.733558 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024) Jan 30 13:47:17.740724 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:47:17.742709 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:47:17.743371 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:47:17.745778 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:47:17.753677 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:47:17.756980 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:47:17.758830 jq[1450]: true Jan 30 13:47:17.761898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:47:17.764961 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:47:17.765177 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:47:17.765550 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:47:17.765764 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:47:17.768450 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:47:17.790418 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jan 30 13:47:17.790458 update_engine[1449]: I20250130 13:47:17.788266 1449 main.cc:92] Flatcar Update Engine starting Jan 30 13:47:17.769574 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 30 13:47:17.790875 update_engine[1449]: I20250130 13:47:17.790779 1449 update_check_scheduler.cc:74] Next update check in 11m4s Jan 30 13:47:17.783034 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:47:17.791285 jq[1456]: true Jan 30 13:47:17.794391 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 30 13:47:17.794391 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1 Jan 30 13:47:17.794391 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jan 30 13:47:17.797895 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Jan 30 13:47:17.799951 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:47:17.800236 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:47:17.813035 tar[1455]: linux-amd64/LICENSE Jan 30 13:47:17.813014 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:47:17.815023 tar[1455]: linux-amd64/helm Jan 30 13:47:17.814361 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:47:17.814382 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:47:17.815759 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:47:17.815780 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:47:17.825718 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:47:17.834544 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:47:17.847945 bash[1485]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:47:17.847833 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:47:17.849716 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:47:17.858798 systemd-logind[1448]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:47:17.858825 systemd-logind[1448]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:47:17.864735 systemd-logind[1448]: New seat seat0. Jan 30 13:47:17.869923 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:47:17.875424 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:47:17.886574 sshd_keygen[1454]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:47:17.911041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:47:17.921582 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:47:17.926241 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:46252.service - OpenSSH per-connection server daemon (10.0.0.1:46252). Jan 30 13:47:17.930689 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:47:17.931040 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:47:17.936018 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:47:17.956620 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
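[Note] update_engine starts here and schedules its next check, and locksmithd reports strategy="reboot". Both consult /etc/flatcar/update.conf, which the Ignition files stage wrote earlier but whose contents the log never shows. A sketch of a config consistent with what is logged; the keys are real, the GROUP value is assumed, and the reboot strategy mirrors what locksmithd reports above:

    # /etc/flatcar/update.conf, contents assumed for illustration
    GROUP=stable
    # Matches the strategy="reboot" that locksmithd logs at startup.
    REBOOT_STRATEGY=reboot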
Jan 30 13:47:17.967104 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:47:17.970182 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:47:17.971960 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:47:17.996580 sshd[1507]: Accepted publickey for core from 10.0.0.1 port 46252 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:17.998364 sshd[1507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:17.998557 containerd[1459]: time="2025-01-30T13:47:17.998334672Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:47:18.008244 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:47:18.018794 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:47:18.024549 systemd-logind[1448]: New session 1 of user core. Jan 30 13:47:18.028663 containerd[1459]: time="2025-01-30T13:47:18.028616060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.030488 containerd[1459]: time="2025-01-30T13:47:18.030449589Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:18.030488 containerd[1459]: time="2025-01-30T13:47:18.030480397Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:47:18.030596 containerd[1459]: time="2025-01-30T13:47:18.030496337Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030700690Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030726458Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030793474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030807931Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030970596Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030983741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.030995322Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
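The SHA256:5CVm... fingerprint sshd logs for the accepted key is the base64 encoding of the SHA-256 digest of the raw public-key blob, with the trailing '=' padding stripped. A self-contained sketch of that derivation, taking one line of an authorized_keys or *.pub file:

import base64
import hashlib

def openssh_sha256_fingerprint(pubkey_line: str) -> str:
    # pubkey_line looks like "ssh-rsa AAAAB3... comment"; the second
    # field is the base64-encoded raw key blob that gets hashed.
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# e.g.: print(openssh_sha256_fingerprint(open("/etc/ssh/ssh_host_rsa_key.pub").read()))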
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.031004269Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.031100930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031357 containerd[1459]: time="2025-01-30T13:47:18.031321364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031686 containerd[1459]: time="2025-01-30T13:47:18.031597361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:18.031686 containerd[1459]: time="2025-01-30T13:47:18.031617239Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:47:18.031737 containerd[1459]: time="2025-01-30T13:47:18.031715353Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:47:18.031789 containerd[1459]: time="2025-01-30T13:47:18.031770456Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:47:18.033571 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:47:18.038245 containerd[1459]: time="2025-01-30T13:47:18.038207237Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:47:18.038292 containerd[1459]: time="2025-01-30T13:47:18.038256089Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:47:18.038292 containerd[1459]: time="2025-01-30T13:47:18.038274974Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:47:18.038292 containerd[1459]: time="2025-01-30T13:47:18.038288900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:47:18.038361 containerd[1459]: time="2025-01-30T13:47:18.038301865Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:47:18.038477 containerd[1459]: time="2025-01-30T13:47:18.038435345Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:47:18.038680 containerd[1459]: time="2025-01-30T13:47:18.038663663Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038815819Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038833261Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038845454Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038857767Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038868818Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038885990Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038898413Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038911358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038923821Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038941725Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038952214Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038970008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038982171Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039180 containerd[1459]: time="2025-01-30T13:47:18.038997920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039010614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039022346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039034489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039045539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039071207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039084072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039098138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039109249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039120159Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039131380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039436 containerd[1459]: time="2025-01-30T13:47:18.039150276Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:47:18.039665 containerd[1459]: time="2025-01-30T13:47:18.039649813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039714 containerd[1459]: time="2025-01-30T13:47:18.039703213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.039759 containerd[1459]: time="2025-01-30T13:47:18.039747866Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:47:18.039839 containerd[1459]: time="2025-01-30T13:47:18.039826815Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:47:18.039902 containerd[1459]: time="2025-01-30T13:47:18.039887248Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:47:18.039945 containerd[1459]: time="2025-01-30T13:47:18.039934016Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:47:18.039990 containerd[1459]: time="2025-01-30T13:47:18.039977697Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:47:18.040030 containerd[1459]: time="2025-01-30T13:47:18.040019806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:47:18.040087 containerd[1459]: time="2025-01-30T13:47:18.040075020Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:47:18.040136 containerd[1459]: time="2025-01-30T13:47:18.040125855Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:47:18.040187 containerd[1459]: time="2025-01-30T13:47:18.040175458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:47:18.040502 containerd[1459]: time="2025-01-30T13:47:18.040451596Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:47:18.040679 containerd[1459]: time="2025-01-30T13:47:18.040666129Z" level=info msg="Connect containerd service" Jan 30 13:47:18.040751 containerd[1459]: time="2025-01-30T13:47:18.040739506Z" level=info msg="using legacy CRI server" Jan 30 13:47:18.040792 containerd[1459]: time="2025-01-30T13:47:18.040781535Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:47:18.040913 containerd[1459]: time="2025-01-30T13:47:18.040899336Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:47:18.041480 containerd[1459]: time="2025-01-30T13:47:18.041459246Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:47:18.041735 
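The only error containerd raises during init is the missing CNI config in /etc/cni/net.d; on a node like this one a CNI add-on normally drops a conflist there later. A hand-written stand-in would look like the sketch below; the subnet and file name are illustrative assumptions, not values from the log, and writing it requires root:

import json
import pathlib

conflist = {
    "cniVersion": "1.0.0",
    "name": "example-bridge",
    "plugins": [{
        "type": "bridge",
        "bridge": "cni0",
        "isGateway": True,
        "ipMasq": True,
        "ipam": {
            "type": "host-local",
            "ranges": [[{"subnet": "10.22.0.0/16"}]],  # assumed subnet
            "routes": [{"dst": "0.0.0.0/0"}],
        },
    }],
}
pathlib.Path("/etc/cni/net.d").mkdir(parents=True, exist_ok=True)
pathlib.Path("/etc/cni/net.d/10-example.conflist").write_text(
    json.dumps(conflist, indent=2))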
containerd[1459]: time="2025-01-30T13:47:18.041688285Z" level=info msg="Start subscribing containerd event" Jan 30 13:47:18.041859 containerd[1459]: time="2025-01-30T13:47:18.041806167Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.041868774Z" level=info msg="Start recovering state" Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.041949235Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.042003487Z" level=info msg="Start event monitor" Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.042026490Z" level=info msg="Start snapshots syncer" Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.042039454Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.042049292Z" level=info msg="Start streaming server" Jan 30 13:47:18.042290 containerd[1459]: time="2025-01-30T13:47:18.042133591Z" level=info msg="containerd successfully booted in 0.044970s" Jan 30 13:47:18.042940 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:47:18.045462 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:47:18.060293 (systemd)[1522]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:47:18.184712 systemd[1522]: Queued start job for default target default.target. Jan 30 13:47:18.198133 systemd[1522]: Created slice app.slice - User Application Slice. Jan 30 13:47:18.198163 systemd[1522]: Reached target paths.target - Paths. Jan 30 13:47:18.198179 systemd[1522]: Reached target timers.target - Timers. Jan 30 13:47:18.200077 systemd[1522]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:47:18.215297 systemd[1522]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:47:18.215462 systemd[1522]: Reached target sockets.target - Sockets. Jan 30 13:47:18.215482 systemd[1522]: Reached target basic.target - Basic System. Jan 30 13:47:18.215591 systemd[1522]: Reached target default.target - Main User Target. Jan 30 13:47:18.215919 systemd[1522]: Startup finished in 148ms. Jan 30 13:47:18.216519 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:47:18.232823 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:47:18.244278 tar[1455]: linux-amd64/README.md Jan 30 13:47:18.259336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:47:18.297379 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:35816.service - OpenSSH per-connection server daemon (10.0.0.1:35816). Jan 30 13:47:18.336226 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 35816 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:18.338039 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:18.342174 systemd-logind[1448]: New session 2 of user core. Jan 30 13:47:18.351742 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:47:18.407093 sshd[1536]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:18.415845 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:35816.service: Deactivated successfully. Jan 30 13:47:18.417337 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:47:18.418570 systemd-logind[1448]: Session 2 logged out. Waiting for processes to exit. 
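Once containerd reports "successfully booted", both sockets it logged are live. A quick liveness probe against the GRPC socket using containerd's bundled ctr CLI (run as root; address taken from the log):

import subprocess

# prints client and server version/revision if the daemon is serving
subprocess.run(
    ["ctr", "--address", "/run/containerd/containerd.sock", "version"],
    check=True)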
Jan 30 13:47:18.429785 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:35832.service - OpenSSH per-connection server daemon (10.0.0.1:35832). Jan 30 13:47:18.432069 systemd-logind[1448]: Removed session 2. Jan 30 13:47:18.459890 sshd[1543]: Accepted publickey for core from 10.0.0.1 port 35832 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:18.461239 sshd[1543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:18.465031 systemd-logind[1448]: New session 3 of user core. Jan 30 13:47:18.474645 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:47:18.529597 sshd[1543]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:18.532860 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:35832.service: Deactivated successfully. Jan 30 13:47:18.534412 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:47:18.534930 systemd-logind[1448]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:47:18.535669 systemd-logind[1448]: Removed session 3. Jan 30 13:47:18.651711 systemd-networkd[1397]: eth0: Gained IPv6LL Jan 30 13:47:18.655050 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:47:18.656890 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:47:18.674800 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:47:18.677416 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:18.679658 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:47:18.698362 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:47:18.698629 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:47:18.700466 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:47:18.702644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:47:19.358863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:19.360606 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:47:19.364644 systemd[1]: Startup finished in 726ms (kernel) + 6.096s (initrd) + 4.378s (userspace) = 11.201s. Jan 30 13:47:19.374693 (kubelet)[1571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:47:19.807454 kubelet[1571]: E0130 13:47:19.807389 1571 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:47:19.811848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:47:19.812100 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:47:28.541503 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:56846.service - OpenSSH per-connection server daemon (10.0.0.1:56846). Jan 30 13:47:28.575494 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 56846 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:28.576820 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:28.580270 systemd-logind[1448]: New session 4 of user core. 
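The kubelet exit above is expected at this point: /var/lib/kubelet/config.yaml does not exist yet, and on a node provisioned this way the file is normally generated by kubeadm init/join. A minimal hand-rolled stand-in is sketched below; the values are assumptions, except cgroupDriver, which matches the SystemdCgroup:true visible in the containerd CRI config earlier in this log:

import pathlib

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
"""

pathlib.Path("/var/lib/kubelet").mkdir(parents=True, exist_ok=True)
pathlib.Path("/var/lib/kubelet/config.yaml").write_text(MINIMAL_KUBELET_CONFIG)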
Jan 30 13:47:28.589659 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:47:28.642238 sshd[1585]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:28.653245 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:56846.service: Deactivated successfully. Jan 30 13:47:28.654822 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:47:28.656040 systemd-logind[1448]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:47:28.657174 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:56862.service - OpenSSH per-connection server daemon (10.0.0.1:56862). Jan 30 13:47:28.657806 systemd-logind[1448]: Removed session 4. Jan 30 13:47:28.691131 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 56862 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:28.692464 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:28.695990 systemd-logind[1448]: New session 5 of user core. Jan 30 13:47:28.705623 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:47:28.754419 sshd[1592]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:28.766040 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:56862.service: Deactivated successfully. Jan 30 13:47:28.767619 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:47:28.769051 systemd-logind[1448]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:47:28.770375 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:56876.service - OpenSSH per-connection server daemon (10.0.0.1:56876). Jan 30 13:47:28.771201 systemd-logind[1448]: Removed session 5. Jan 30 13:47:28.805101 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 56876 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:28.807076 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:28.810916 systemd-logind[1448]: New session 6 of user core. Jan 30 13:47:28.823852 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:47:28.878921 sshd[1599]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:28.892524 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:56876.service: Deactivated successfully. Jan 30 13:47:28.894330 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:47:28.895964 systemd-logind[1448]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:47:28.902836 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:56884.service - OpenSSH per-connection server daemon (10.0.0.1:56884). Jan 30 13:47:28.903703 systemd-logind[1448]: Removed session 6. Jan 30 13:47:28.934595 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 56884 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:47:28.936209 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:47:28.940271 systemd-logind[1448]: New session 7 of user core. Jan 30 13:47:28.946649 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:47:29.118510 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:47:29.118969 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:47:29.397819 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 30 13:47:29.397919 (dockerd)[1627]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:47:29.671263 dockerd[1627]: time="2025-01-30T13:47:29.671111786Z" level=info msg="Starting up" Jan 30 13:47:29.797621 dockerd[1627]: time="2025-01-30T13:47:29.797559731Z" level=info msg="Loading containers: start." Jan 30 13:47:29.931570 kernel: Initializing XFRM netlink socket Jan 30 13:47:29.959137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:47:29.968702 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:30.007580 systemd-networkd[1397]: docker0: Link UP Jan 30 13:47:30.259208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:30.263511 (kubelet)[1735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:47:30.527706 kubelet[1735]: E0130 13:47:30.527497 1735 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:47:30.534083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:47:30.534288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:47:30.544311 dockerd[1627]: time="2025-01-30T13:47:30.544246465Z" level=info msg="Loading containers: done." Jan 30 13:47:30.558437 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3437586200-merged.mount: Deactivated successfully. Jan 30 13:47:30.562378 dockerd[1627]: time="2025-01-30T13:47:30.562337312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:47:30.562462 dockerd[1627]: time="2025-01-30T13:47:30.562440034Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:47:30.562614 dockerd[1627]: time="2025-01-30T13:47:30.562594073Z" level=info msg="Daemon has completed initialization" Jan 30 13:47:30.600506 dockerd[1627]: time="2025-01-30T13:47:30.600414770Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:47:30.600688 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:47:31.130463 containerd[1459]: time="2025-01-30T13:47:31.130417044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 30 13:47:31.721218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1036341672.mount: Deactivated successfully. 
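dockerd's "Not using native diff for overlay2" warning fires when the running kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled. Both sides can be confirmed by hand as sketched here; reading /proc/config.gz assumes the kernel was built with CONFIG_IKCONFIG_PROC=y, which is not guaranteed on every image:

import gzip
import subprocess

# which storage driver the daemon actually selected (expects "overlay2"):
subprocess.run(["docker", "info", "--format", "{{.Driver}}"], check=True)

# the kernel option behind the warning; /proc/config.gz exists only
# when the kernel exposes its build config.
with gzip.open("/proc/config.gz", "rt") as cfg:
    for line in cfg:
        if "CONFIG_OVERLAY_FS_REDIRECT_DIR" in line:
            print(line.strip())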
Jan 30 13:47:33.029738 containerd[1459]: time="2025-01-30T13:47:33.029662577Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.031861 containerd[1459]: time="2025-01-30T13:47:33.031798513Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 30 13:47:33.034785 containerd[1459]: time="2025-01-30T13:47:33.034750950Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.039663 containerd[1459]: time="2025-01-30T13:47:33.039589975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:33.040953 containerd[1459]: time="2025-01-30T13:47:33.040894792Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.91043125s" Jan 30 13:47:33.040995 containerd[1459]: time="2025-01-30T13:47:33.040960635Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 30 13:47:33.041645 containerd[1459]: time="2025-01-30T13:47:33.041617236Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 30 13:47:34.439772 containerd[1459]: time="2025-01-30T13:47:34.439694571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:34.440922 containerd[1459]: time="2025-01-30T13:47:34.440875996Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 30 13:47:34.442922 containerd[1459]: time="2025-01-30T13:47:34.442890835Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:34.446415 containerd[1459]: time="2025-01-30T13:47:34.446370600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:34.447450 containerd[1459]: time="2025-01-30T13:47:34.447418776Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.405771093s" Jan 30 13:47:34.447450 containerd[1459]: time="2025-01-30T13:47:34.447447610Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 30 13:47:34.447928 
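The pull lines carry enough data for a rough bandwidth estimate; using the kube-apiserver figures logged above:

size_bytes = 28_671_624   # reported size of kube-apiserver:v1.32.1
seconds = 1.91043125      # reported pull duration
print(f"{size_bytes / seconds / 1e6:.1f} MB/s")  # ~15.0 MB/s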
containerd[1459]: time="2025-01-30T13:47:34.447855535Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 30 13:47:36.402348 containerd[1459]: time="2025-01-30T13:47:36.402268938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:36.403333 containerd[1459]: time="2025-01-30T13:47:36.403264435Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 30 13:47:36.404966 containerd[1459]: time="2025-01-30T13:47:36.404923166Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:36.408240 containerd[1459]: time="2025-01-30T13:47:36.408187317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:36.409219 containerd[1459]: time="2025-01-30T13:47:36.409171142Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 1.961280901s" Jan 30 13:47:36.409279 containerd[1459]: time="2025-01-30T13:47:36.409216237Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 30 13:47:36.409752 containerd[1459]: time="2025-01-30T13:47:36.409728678Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 30 13:47:37.431631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4161631161.mount: Deactivated successfully. 
Jan 30 13:47:38.927746 containerd[1459]: time="2025-01-30T13:47:38.927684173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.928877 containerd[1459]: time="2025-01-30T13:47:38.928837676Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 30 13:47:38.930077 containerd[1459]: time="2025-01-30T13:47:38.930040161Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.932073 containerd[1459]: time="2025-01-30T13:47:38.932036635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:38.932642 containerd[1459]: time="2025-01-30T13:47:38.932609159Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 2.522850655s" Jan 30 13:47:38.932642 containerd[1459]: time="2025-01-30T13:47:38.932638254Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 30 13:47:38.933081 containerd[1459]: time="2025-01-30T13:47:38.933054013Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 30 13:47:39.523191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937159830.mount: Deactivated successfully. Jan 30 13:47:40.755139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:47:40.759090 containerd[1459]: time="2025-01-30T13:47:40.759039378Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:40.761748 containerd[1459]: time="2025-01-30T13:47:40.759911824Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 30 13:47:40.761782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:47:40.762274 containerd[1459]: time="2025-01-30T13:47:40.762238858Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:40.765586 containerd[1459]: time="2025-01-30T13:47:40.765547042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:40.766886 containerd[1459]: time="2025-01-30T13:47:40.766731313Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.833642665s" Jan 30 13:47:40.766886 containerd[1459]: time="2025-01-30T13:47:40.766771809Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 30 13:47:40.767322 containerd[1459]: time="2025-01-30T13:47:40.767290852Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:47:40.926713 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:40.933378 (kubelet)[1921]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:47:41.214361 kubelet[1921]: E0130 13:47:41.214148 1921 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:47:41.218376 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:47:41.218652 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:47:42.347683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2974075074.mount: Deactivated successfully. 
Jan 30 13:47:42.354200 containerd[1459]: time="2025-01-30T13:47:42.354135552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:42.354903 containerd[1459]: time="2025-01-30T13:47:42.354853850Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:47:42.355971 containerd[1459]: time="2025-01-30T13:47:42.355933284Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:42.358153 containerd[1459]: time="2025-01-30T13:47:42.358104065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:42.358864 containerd[1459]: time="2025-01-30T13:47:42.358833192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.591506313s" Jan 30 13:47:42.358864 containerd[1459]: time="2025-01-30T13:47:42.358860974Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:47:42.359302 containerd[1459]: time="2025-01-30T13:47:42.359256646Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 30 13:47:42.971338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215457548.mount: Deactivated successfully. Jan 30 13:47:45.780384 containerd[1459]: time="2025-01-30T13:47:45.780286909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:45.781333 containerd[1459]: time="2025-01-30T13:47:45.781285562Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 30 13:47:45.782989 containerd[1459]: time="2025-01-30T13:47:45.782945104Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:45.786579 containerd[1459]: time="2025-01-30T13:47:45.786521882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:47:45.788437 containerd[1459]: time="2025-01-30T13:47:45.788380999Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.429067175s" Jan 30 13:47:45.788484 containerd[1459]: time="2025-01-30T13:47:45.788441282Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 30 13:47:47.407118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
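Each of these PullImage round-trips can be reproduced by hand through the same CRI socket with crictl; the pause image is the smallest one to test with (run as root, endpoint taken from the containerd config logged earlier):

import subprocess

subprocess.run(
    ["crictl",
     "--runtime-endpoint", "unix:///run/containerd/containerd.sock",
     "pull", "registry.k8s.io/pause:3.10"],
    check=True)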
Jan 30 13:47:47.419865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:47.444503 systemd[1]: Reloading requested from client PID 2018 ('systemctl') (unit session-7.scope)... Jan 30 13:47:47.444520 systemd[1]: Reloading... Jan 30 13:47:47.515619 zram_generator::config[2057]: No configuration found. Jan 30 13:47:47.882730 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:47.959183 systemd[1]: Reloading finished in 514 ms. Jan 30 13:47:48.015740 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:48.019758 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:47:48.019985 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:48.021573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:48.177969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:48.183653 (kubelet)[2107]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:47:48.220664 kubelet[2107]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:47:48.220664 kubelet[2107]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:47:48.220664 kubelet[2107]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
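Two of the three deprecation warnings above point at the same fix: move the flag into the config file the kubelet already loads (--pod-infra-container-image has no file equivalent and is simply slated for removal, per the log). A sketch of the kubelet.config.k8s.io/v1beta1 field spellings; the endpoint and plugin-dir values are the ones visible elsewhere in this log:

# config-file equivalents of the deprecated flags; appended to
# /var/lib/kubelet/config.yaml they silence the warnings on restart.
SNIPPET = """\
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""
print(SNIPPET)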
Jan 30 13:47:48.221166 kubelet[2107]: I0130 13:47:48.220723 2107 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:47:48.535635 kubelet[2107]: I0130 13:47:48.535589 2107 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:47:48.535635 kubelet[2107]: I0130 13:47:48.535617 2107 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:47:48.535869 kubelet[2107]: I0130 13:47:48.535849 2107 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:47:48.559293 kubelet[2107]: E0130 13:47:48.559247 2107 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:48.564624 kubelet[2107]: I0130 13:47:48.564373 2107 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:47:48.572600 kubelet[2107]: E0130 13:47:48.572573 2107 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:47:48.572600 kubelet[2107]: I0130 13:47:48.572600 2107 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:47:48.577176 kubelet[2107]: I0130 13:47:48.577151 2107 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
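The repeated "dial tcp 10.0.0.77:6443: connect: connection refused" errors just mean no apiserver is listening yet while the kubelet bootstraps. The same probe by hand, with the address taken from the log:

import socket

s = socket.socket()
s.settimeout(2)
try:
    s.connect(("10.0.0.77", 6443))   # apiserver address from the log
    print("apiserver port open")
except OSError as e:
    print("not reachable:", e)       # ConnectionRefusedError until it starts
finally:
    s.close()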
defaulting to /" Jan 30 13:47:48.577402 kubelet[2107]: I0130 13:47:48.577365 2107 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:47:48.577552 kubelet[2107]: I0130 13:47:48.577390 2107 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:47:48.577631 kubelet[2107]: I0130 13:47:48.577555 2107 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:47:48.577631 kubelet[2107]: I0130 13:47:48.577564 2107 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:47:48.578155 kubelet[2107]: I0130 13:47:48.578129 2107 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:47:48.580748 kubelet[2107]: I0130 13:47:48.580720 2107 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:47:48.580748 kubelet[2107]: I0130 13:47:48.580737 2107 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:47:48.580819 kubelet[2107]: I0130 13:47:48.580756 2107 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:47:48.580819 kubelet[2107]: I0130 13:47:48.580765 2107 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:47:48.583178 kubelet[2107]: I0130 13:47:48.583151 2107 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:47:48.583927 kubelet[2107]: I0130 13:47:48.583562 2107 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:47:48.583927 kubelet[2107]: W0130 13:47:48.583616 2107 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
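The HardEvictionThresholds blob in the nodeConfig dump above is the kubelet's stock eviction defaults; restated here in the percent/quantity form used for evictionHard in a KubeletConfiguration:

# the same five thresholds from the nodeConfig dump, human-readable
EVICTION_HARD = {
    "memory.available": "100Mi",
    "nodefs.available": "10%",
    "nodefs.inodesFree": "5%",
    "imagefs.available": "15%",
    "imagefs.inodesFree": "5%",
}
for signal, threshold in EVICTION_HARD.items():
    print(f"{signal:<20} {threshold}")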
Jan 30 13:47:48.583927 kubelet[2107]: W0130 13:47:48.583890 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:48.584002 kubelet[2107]: W0130 13:47:48.583912 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:48.584002 kubelet[2107]: E0130 13:47:48.583941 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:48.584002 kubelet[2107]: E0130 13:47:48.583948 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:48.585204 kubelet[2107]: I0130 13:47:48.585182 2107 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:47:48.585269 kubelet[2107]: I0130 13:47:48.585214 2107 server.go:1287] "Started kubelet" Jan 30 13:47:48.586326 kubelet[2107]: I0130 13:47:48.586159 2107 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:47:48.586326 kubelet[2107]: I0130 13:47:48.586299 2107 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:47:48.587187 kubelet[2107]: I0130 13:47:48.586506 2107 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:47:48.587187 kubelet[2107]: I0130 13:47:48.586586 2107 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:47:48.587811 kubelet[2107]: I0130 13:47:48.587789 2107 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:47:48.590785 kubelet[2107]: E0130 13:47:48.590755 2107 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:47:48.591618 kubelet[2107]: I0130 13:47:48.591579 2107 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:47:48.594547 kubelet[2107]: E0130 13:47:48.593149 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:48.594547 kubelet[2107]: I0130 13:47:48.593182 2107 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:47:48.594547 kubelet[2107]: I0130 13:47:48.593381 2107 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:47:48.594547 kubelet[2107]: I0130 13:47:48.593436 2107 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:47:48.594547 kubelet[2107]: W0130 13:47:48.594018 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:48.594547 kubelet[2107]: E0130 13:47:48.594069 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:48.594547 kubelet[2107]: E0130 13:47:48.594119 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms" Jan 30 13:47:48.594794 kubelet[2107]: E0130 13:47:48.589742 2107 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c7f94aa8890 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:47:48.585195664 +0000 UTC m=+0.397122387,LastTimestamp:2025-01-30 13:47:48.585195664 +0000 UTC m=+0.397122387,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:47:48.594794 kubelet[2107]: I0130 13:47:48.594289 2107 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:47:48.595260 kubelet[2107]: I0130 13:47:48.595240 2107 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:47:48.595260 kubelet[2107]: I0130 13:47:48.595254 2107 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:47:48.604641 kubelet[2107]: I0130 13:47:48.604586 2107 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:47:48.606043 kubelet[2107]: I0130 13:47:48.605976 2107 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:47:48.606043 kubelet[2107]: I0130 13:47:48.606002 2107 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:47:48.606043 kubelet[2107]: I0130 13:47:48.606018 2107 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 30 13:47:48.606043 kubelet[2107]: I0130 13:47:48.606025 2107 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:47:48.606154 kubelet[2107]: E0130 13:47:48.606070 2107 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:47:48.607438 kubelet[2107]: W0130 13:47:48.607396 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:48.607495 kubelet[2107]: E0130 13:47:48.607450 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:48.610925 kubelet[2107]: I0130 13:47:48.610910 2107 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:47:48.610925 kubelet[2107]: I0130 13:47:48.610921 2107 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:47:48.610997 kubelet[2107]: I0130 13:47:48.610936 2107 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:47:48.694249 kubelet[2107]: E0130 13:47:48.694206 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:48.707087 kubelet[2107]: E0130 13:47:48.707041 2107 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:47:48.794568 kubelet[2107]: E0130 13:47:48.794413 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:48.794725 kubelet[2107]: E0130 13:47:48.794688 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms" Jan 30 13:47:48.895213 kubelet[2107]: E0130 13:47:48.895156 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:48.907394 kubelet[2107]: E0130 13:47:48.907347 2107 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 30 13:47:48.995817 kubelet[2107]: E0130 13:47:48.995756 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:49.096829 kubelet[2107]: E0130 13:47:49.096695 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:49.139278 kubelet[2107]: I0130 13:47:49.139205 2107 policy_none.go:49] "None policy: Start" Jan 30 13:47:49.139278 kubelet[2107]: I0130 13:47:49.139269 2107 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:47:49.139278 kubelet[2107]: 
I0130 13:47:49.139285 2107 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:47:49.146731 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:47:49.172008 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:47:49.184454 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:47:49.185542 kubelet[2107]: I0130 13:47:49.185502 2107 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:47:49.185901 kubelet[2107]: I0130 13:47:49.185772 2107 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:47:49.185901 kubelet[2107]: I0130 13:47:49.185788 2107 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:47:49.186290 kubelet[2107]: I0130 13:47:49.186191 2107 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:47:49.186750 kubelet[2107]: E0130 13:47:49.186726 2107 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 30 13:47:49.186853 kubelet[2107]: E0130 13:47:49.186833 2107 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:47:49.195413 kubelet[2107]: E0130 13:47:49.195375 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" Jan 30 13:47:49.287736 kubelet[2107]: I0130 13:47:49.287708 2107 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:47:49.288121 kubelet[2107]: E0130 13:47:49.288000 2107 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jan 30 13:47:49.315898 systemd[1]: Created slice kubepods-burstable-podd9d04b070db9d0204013f2bedc8eb6f2.slice - libcontainer container kubepods-burstable-podd9d04b070db9d0204013f2bedc8eb6f2.slice. Jan 30 13:47:49.323301 kubelet[2107]: E0130 13:47:49.323264 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:49.325272 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. Jan 30 13:47:49.326895 kubelet[2107]: E0130 13:47:49.326877 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:49.328869 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. 
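The controller.go "Failed to ensure lease exists, will retry" entries in this stretch report intervals of 200ms, 400ms and 800ms (growing to 1.6s and then 3.2s further down), consistent with a simple doubling backoff while the API server at 10.0.0.77:6443 refuses connections. A minimal Go sketch of that pattern follows; this is an illustration of the visible retry schedule, not the kubelet's actual code, and the cap is an assumption:

package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	const cap = 7 * time.Second        // assumed ceiling, for illustration only
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed: will retry in %v\n", attempt, interval)
		interval *= 2 // 200ms -> 400ms -> 800ms -> 1.6s -> 3.2s, matching the log
		if interval > cap {
			interval = cap
		}
	}
}
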
Jan 30 13:47:49.330269 kubelet[2107]: E0130 13:47:49.330246 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:49.397756 kubelet[2107]: I0130 13:47:49.397619 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:49.397756 kubelet[2107]: I0130 13:47:49.397675 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:49.397756 kubelet[2107]: I0130 13:47:49.397698 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:49.397756 kubelet[2107]: I0130 13:47:49.397720 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:49.397756 kubelet[2107]: I0130 13:47:49.397740 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:49.397952 kubelet[2107]: I0130 13:47:49.397767 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:49.397952 kubelet[2107]: I0130 13:47:49.397788 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:49.397952 kubelet[2107]: I0130 13:47:49.397861 2107 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:49.397952 kubelet[2107]: I0130 13:47:49.397894 2107 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:49.490033 kubelet[2107]: I0130 13:47:49.489992 2107 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:47:49.490345 kubelet[2107]: E0130 13:47:49.490325 2107 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jan 30 13:47:49.624734 kubelet[2107]: E0130 13:47:49.624695 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:49.625343 containerd[1459]: time="2025-01-30T13:47:49.625310160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9d04b070db9d0204013f2bedc8eb6f2,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:49.627675 kubelet[2107]: E0130 13:47:49.627648 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:49.628241 containerd[1459]: time="2025-01-30T13:47:49.628183799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:49.631392 kubelet[2107]: E0130 13:47:49.631373 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:49.631749 containerd[1459]: time="2025-01-30T13:47:49.631724590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 30 13:47:49.741899 kubelet[2107]: W0130 13:47:49.741796 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:49.741899 kubelet[2107]: E0130 13:47:49.741843 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:49.891428 kubelet[2107]: I0130 13:47:49.891398 2107 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:47:49.891746 kubelet[2107]: E0130 13:47:49.891725 2107 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Jan 30 13:47:49.978909 kubelet[2107]: W0130 13:47:49.978827 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:49.979018 kubelet[2107]: 
E0130 13:47:49.978910 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:49.996859 kubelet[2107]: E0130 13:47:49.996759 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s" Jan 30 13:47:50.065689 kubelet[2107]: W0130 13:47:50.065619 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:50.065689 kubelet[2107]: E0130 13:47:50.065682 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:50.122101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740565411.mount: Deactivated successfully. Jan 30 13:47:50.130505 containerd[1459]: time="2025-01-30T13:47:50.130463433Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:47:50.131574 containerd[1459]: time="2025-01-30T13:47:50.131505019Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:47:50.132389 containerd[1459]: time="2025-01-30T13:47:50.132353042Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:47:50.133489 containerd[1459]: time="2025-01-30T13:47:50.133454252Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:47:50.134372 containerd[1459]: time="2025-01-30T13:47:50.134340970Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:47:50.135253 containerd[1459]: time="2025-01-30T13:47:50.135217399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:47:50.136244 containerd[1459]: time="2025-01-30T13:47:50.136211603Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:47:50.140049 containerd[1459]: time="2025-01-30T13:47:50.140022422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:47:50.140880 containerd[1459]: 
time="2025-01-30T13:47:50.140846079Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.563212ms" Jan 30 13:47:50.142051 containerd[1459]: time="2025-01-30T13:47:50.142029298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.643995ms" Jan 30 13:47:50.143212 containerd[1459]: time="2025-01-30T13:47:50.143169414Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 511.393126ms" Jan 30 13:47:50.191132 kubelet[2107]: W0130 13:47:50.191093 2107 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Jan 30 13:47:50.191236 kubelet[2107]: E0130 13:47:50.191139 2107 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293015706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293021206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293096050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293109637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293086723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293153691Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293174451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293220620Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.293303 containerd[1459]: time="2025-01-30T13:47:50.293255167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.293697 containerd[1459]: time="2025-01-30T13:47:50.293316244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:47:50.294085 containerd[1459]: time="2025-01-30T13:47:50.294040830Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.294237 containerd[1459]: time="2025-01-30T13:47:50.294192352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:47:50.321693 systemd[1]: Started cri-containerd-147a69c914adaf3e534c15e522d393320be287bb5817403efaed7ffe53cbe34f.scope - libcontainer container 147a69c914adaf3e534c15e522d393320be287bb5817403efaed7ffe53cbe34f. Jan 30 13:47:50.323411 systemd[1]: Started cri-containerd-1635387117217e1347207a2c7f5080546645e7c2c2aca7b3e81ed57adce11782.scope - libcontainer container 1635387117217e1347207a2c7f5080546645e7c2c2aca7b3e81ed57adce11782. Jan 30 13:47:50.324961 systemd[1]: Started cri-containerd-348420a03b3d322bc27600c03e3ea18e8eb06ae484deb80c0144d5ee3f690291.scope - libcontainer container 348420a03b3d322bc27600c03e3ea18e8eb06ae484deb80c0144d5ee3f690291. Jan 30 13:47:50.369128 containerd[1459]: time="2025-01-30T13:47:50.369083326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d9d04b070db9d0204013f2bedc8eb6f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"147a69c914adaf3e534c15e522d393320be287bb5817403efaed7ffe53cbe34f\"" Jan 30 13:47:50.370521 containerd[1459]: time="2025-01-30T13:47:50.370456160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1635387117217e1347207a2c7f5080546645e7c2c2aca7b3e81ed57adce11782\"" Jan 30 13:47:50.373224 kubelet[2107]: E0130 13:47:50.373185 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.373658 kubelet[2107]: E0130 13:47:50.373433 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.375812 containerd[1459]: time="2025-01-30T13:47:50.375715640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"348420a03b3d322bc27600c03e3ea18e8eb06ae484deb80c0144d5ee3f690291\"" Jan 30 13:47:50.376496 kubelet[2107]: E0130 13:47:50.376430 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.378564 containerd[1459]: time="2025-01-30T13:47:50.377673681Z" level=info msg="CreateContainer within sandbox \"1635387117217e1347207a2c7f5080546645e7c2c2aca7b3e81ed57adce11782\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:47:50.379064 containerd[1459]: 
time="2025-01-30T13:47:50.378843183Z" level=info msg="CreateContainer within sandbox \"147a69c914adaf3e534c15e522d393320be287bb5817403efaed7ffe53cbe34f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:47:50.379064 containerd[1459]: time="2025-01-30T13:47:50.378988863Z" level=info msg="CreateContainer within sandbox \"348420a03b3d322bc27600c03e3ea18e8eb06ae484deb80c0144d5ee3f690291\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:47:50.402171 containerd[1459]: time="2025-01-30T13:47:50.402121134Z" level=info msg="CreateContainer within sandbox \"1635387117217e1347207a2c7f5080546645e7c2c2aca7b3e81ed57adce11782\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"343ae9fb007124487dbae5a92cd302c07414fc21a744ae782e9f4f953d95d1fd\"" Jan 30 13:47:50.402772 containerd[1459]: time="2025-01-30T13:47:50.402749835Z" level=info msg="StartContainer for \"343ae9fb007124487dbae5a92cd302c07414fc21a744ae782e9f4f953d95d1fd\"" Jan 30 13:47:50.408314 containerd[1459]: time="2025-01-30T13:47:50.408277801Z" level=info msg="CreateContainer within sandbox \"348420a03b3d322bc27600c03e3ea18e8eb06ae484deb80c0144d5ee3f690291\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9648306eb70c682790d61a44b6997c73ceb372b91b7fc02bf7528cbb166b423\"" Jan 30 13:47:50.408930 containerd[1459]: time="2025-01-30T13:47:50.408909508Z" level=info msg="StartContainer for \"f9648306eb70c682790d61a44b6997c73ceb372b91b7fc02bf7528cbb166b423\"" Jan 30 13:47:50.409114 containerd[1459]: time="2025-01-30T13:47:50.409083974Z" level=info msg="CreateContainer within sandbox \"147a69c914adaf3e534c15e522d393320be287bb5817403efaed7ffe53cbe34f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"adfc067db861b89129c6255308874c495739ef610e54475c8969c205c20fad91\"" Jan 30 13:47:50.409434 containerd[1459]: time="2025-01-30T13:47:50.409412577Z" level=info msg="StartContainer for \"adfc067db861b89129c6255308874c495739ef610e54475c8969c205c20fad91\"" Jan 30 13:47:50.431698 systemd[1]: Started cri-containerd-343ae9fb007124487dbae5a92cd302c07414fc21a744ae782e9f4f953d95d1fd.scope - libcontainer container 343ae9fb007124487dbae5a92cd302c07414fc21a744ae782e9f4f953d95d1fd. Jan 30 13:47:50.436446 systemd[1]: Started cri-containerd-adfc067db861b89129c6255308874c495739ef610e54475c8969c205c20fad91.scope - libcontainer container adfc067db861b89129c6255308874c495739ef610e54475c8969c205c20fad91. Jan 30 13:47:50.438329 systemd[1]: Started cri-containerd-f9648306eb70c682790d61a44b6997c73ceb372b91b7fc02bf7528cbb166b423.scope - libcontainer container f9648306eb70c682790d61a44b6997c73ceb372b91b7fc02bf7528cbb166b423. 
Jan 30 13:47:50.480850 containerd[1459]: time="2025-01-30T13:47:50.480560873Z" level=info msg="StartContainer for \"343ae9fb007124487dbae5a92cd302c07414fc21a744ae782e9f4f953d95d1fd\" returns successfully" Jan 30 13:47:50.486643 containerd[1459]: time="2025-01-30T13:47:50.486510222Z" level=info msg="StartContainer for \"adfc067db861b89129c6255308874c495739ef610e54475c8969c205c20fad91\" returns successfully" Jan 30 13:47:50.486880 containerd[1459]: time="2025-01-30T13:47:50.486659809Z" level=info msg="StartContainer for \"f9648306eb70c682790d61a44b6997c73ceb372b91b7fc02bf7528cbb166b423\" returns successfully" Jan 30 13:47:50.613069 kubelet[2107]: E0130 13:47:50.612779 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:50.613069 kubelet[2107]: E0130 13:47:50.612903 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.617974 kubelet[2107]: E0130 13:47:50.617707 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:50.617974 kubelet[2107]: E0130 13:47:50.617805 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.619490 kubelet[2107]: E0130 13:47:50.619389 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:50.619647 kubelet[2107]: E0130 13:47:50.619596 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:50.693399 kubelet[2107]: I0130 13:47:50.692880 2107 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:47:51.575213 kubelet[2107]: I0130 13:47:51.575154 2107 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:47:51.575213 kubelet[2107]: E0130 13:47:51.575209 2107 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 30 13:47:51.600485 kubelet[2107]: E0130 13:47:51.600438 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:51.620983 kubelet[2107]: E0130 13:47:51.620939 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:51.621124 kubelet[2107]: E0130 13:47:51.621033 2107 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 30 13:47:51.621124 kubelet[2107]: E0130 13:47:51.621057 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:51.621209 kubelet[2107]: E0130 13:47:51.621163 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:51.701446 
kubelet[2107]: E0130 13:47:51.701384 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:51.758629 kubelet[2107]: E0130 13:47:51.758563 2107 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 30 13:47:51.801825 kubelet[2107]: E0130 13:47:51.801778 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:51.902430 kubelet[2107]: E0130 13:47:51.902298 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:52.003123 kubelet[2107]: E0130 13:47:52.003060 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:52.104226 kubelet[2107]: E0130 13:47:52.104163 2107 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:52.194446 kubelet[2107]: I0130 13:47:52.194323 2107 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:52.199224 kubelet[2107]: E0130 13:47:52.199176 2107 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:52.199224 kubelet[2107]: I0130 13:47:52.199211 2107 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:52.200678 kubelet[2107]: E0130 13:47:52.200639 2107 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:52.200678 kubelet[2107]: I0130 13:47:52.200677 2107 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:52.201829 kubelet[2107]: E0130 13:47:52.201810 2107 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:52.583609 kubelet[2107]: I0130 13:47:52.583564 2107 apiserver.go:52] "Watching apiserver" Jan 30 13:47:52.593825 kubelet[2107]: I0130 13:47:52.593780 2107 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:47:52.621192 kubelet[2107]: I0130 13:47:52.621157 2107 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:52.625683 kubelet[2107]: E0130 13:47:52.625653 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:53.622733 kubelet[2107]: E0130 13:47:53.622701 2107 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:53.628987 systemd[1]: Reloading requested from client PID 2386 ('systemctl') (unit session-7.scope)... Jan 30 13:47:53.629004 systemd[1]: Reloading... Jan 30 13:47:53.714577 zram_generator::config[2425]: No configuration found. 
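The recurring dns.go:153 "Nameserver limits exceeded" warnings reflect the glibc resolver limit of three nameservers: the kubelet trims the host's list and logs what it actually applied (1.1.1.1 1.0.0.1 8.8.8.8 here). A minimal Go sketch of that trimming on illustrative input; the fourth server below is invented to trigger the warning path:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Illustrative resolv.conf content, not read from the host.
	resolvConf := "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
	const maxNS = 3 // glibc honors at most three nameservers
	var servers []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		if f := strings.Fields(sc.Text()); len(f) == 2 && f[0] == "nameserver" {
			servers = append(servers, f[1])
		}
	}
	if len(servers) > maxNS {
		fmt.Printf("Nameserver limits exceeded, applying: %s\n", strings.Join(servers[:maxNS], " "))
	}
}
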
Jan 30 13:47:53.826289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:53.917427 systemd[1]: Reloading finished in 287 ms. Jan 30 13:47:53.963620 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:53.987991 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:47:53.988298 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:54.000204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:54.156433 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:54.162737 (kubelet)[2470]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:47:54.202572 kubelet[2470]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:47:54.202572 kubelet[2470]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 30 13:47:54.202572 kubelet[2470]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:47:54.202572 kubelet[2470]: I0130 13:47:54.202517 2470 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:47:54.211294 kubelet[2470]: I0130 13:47:54.211241 2470 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 30 13:47:54.211294 kubelet[2470]: I0130 13:47:54.211281 2470 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:47:54.211622 kubelet[2470]: I0130 13:47:54.211595 2470 server.go:954] "Client rotation is on, will bootstrap in background" Jan 30 13:47:54.213151 kubelet[2470]: I0130 13:47:54.213124 2470 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:47:54.215954 kubelet[2470]: I0130 13:47:54.215915 2470 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:47:54.219448 kubelet[2470]: E0130 13:47:54.219410 2470 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:47:54.219554 kubelet[2470]: I0130 13:47:54.219438 2470 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:47:54.225411 kubelet[2470]: I0130 13:47:54.225364 2470 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:47:54.225661 kubelet[2470]: I0130 13:47:54.225621 2470 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:47:54.225828 kubelet[2470]: I0130 13:47:54.225651 2470 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:47:54.225828 kubelet[2470]: I0130 13:47:54.225823 2470 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:47:54.225948 kubelet[2470]: I0130 13:47:54.225832 2470 container_manager_linux.go:304] "Creating device plugin manager" Jan 30 13:47:54.225948 kubelet[2470]: I0130 13:47:54.225872 2470 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:47:54.226064 kubelet[2470]: I0130 13:47:54.226038 2470 kubelet.go:446] "Attempting to sync node with API server" Jan 30 13:47:54.226064 kubelet[2470]: I0130 13:47:54.226054 2470 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:47:54.226108 kubelet[2470]: I0130 13:47:54.226069 2470 kubelet.go:352] "Adding apiserver pod source" Jan 30 13:47:54.226108 kubelet[2470]: I0130 13:47:54.226082 2470 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:47:54.226882 kubelet[2470]: I0130 13:47:54.226752 2470 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:47:54.227143 kubelet[2470]: I0130 13:47:54.227121 2470 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:47:54.227599 kubelet[2470]: I0130 13:47:54.227578 2470 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 30 13:47:54.227674 kubelet[2470]: I0130 13:47:54.227609 2470 server.go:1287] "Started kubelet" Jan 30 13:47:54.228319 kubelet[2470]: I0130 13:47:54.228246 2470 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:47:54.228799 kubelet[2470]: I0130 
13:47:54.228763 2470 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:47:54.228964 kubelet[2470]: I0130 13:47:54.228911 2470 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:47:54.229003 kubelet[2470]: I0130 13:47:54.228982 2470 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:47:54.230125 kubelet[2470]: I0130 13:47:54.230055 2470 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:47:54.230514 kubelet[2470]: I0130 13:47:54.230501 2470 server.go:490] "Adding debug handlers to kubelet server" Jan 30 13:47:54.234205 kubelet[2470]: E0130 13:47:54.234162 2470 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:47:54.234205 kubelet[2470]: I0130 13:47:54.234204 2470 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 30 13:47:54.234388 kubelet[2470]: I0130 13:47:54.234381 2470 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:47:54.234514 kubelet[2470]: I0130 13:47:54.234493 2470 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:47:54.240315 kubelet[2470]: I0130 13:47:54.240286 2470 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:47:54.240315 kubelet[2470]: I0130 13:47:54.240306 2470 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:47:54.240399 kubelet[2470]: I0130 13:47:54.240384 2470 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:47:54.244912 kubelet[2470]: I0130 13:47:54.244736 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:47:54.246967 kubelet[2470]: E0130 13:47:54.246931 2470 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:47:54.249292 kubelet[2470]: I0130 13:47:54.249111 2470 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:47:54.249292 kubelet[2470]: I0130 13:47:54.249146 2470 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 30 13:47:54.249292 kubelet[2470]: I0130 13:47:54.249170 2470 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
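The factory.go lines above (and the matching ones at 13:47:48.594 for the earlier kubelet) show cAdvisor probing container-runtime sockets: the cri-o probe fails with "no such file or directory" while the containerd and systemd factories register successfully. A small Go probe illustrating the distinction; the containerd socket path below is the assumed default, which this log does not state:

package main

import (
	"fmt"
	"net"
	"time"
)

func probe(path string) {
	conn, err := net.DialTimeout("unix", path, time.Second)
	if err != nil {
		fmt.Println(path, "->", err) // e.g. "no such file or directory" for cri-o here
		return
	}
	conn.Close()
	fmt.Println(path, "-> reachable")
}

func main() {
	probe("/var/run/crio/crio.sock")            // absent on this host, per the log
	probe("/run/containerd/containerd.sock")    // assumed default containerd path
}
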
Jan 30 13:47:54.249292 kubelet[2470]: I0130 13:47:54.249177 2470 kubelet.go:2388] "Starting kubelet main sync loop" Jan 30 13:47:54.249292 kubelet[2470]: E0130 13:47:54.249227 2470 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:47:54.275464 kubelet[2470]: I0130 13:47:54.275428 2470 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 30 13:47:54.275464 kubelet[2470]: I0130 13:47:54.275452 2470 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 30 13:47:54.275464 kubelet[2470]: I0130 13:47:54.275474 2470 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:47:54.275699 kubelet[2470]: I0130 13:47:54.275679 2470 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:47:54.275746 kubelet[2470]: I0130 13:47:54.275696 2470 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:47:54.275746 kubelet[2470]: I0130 13:47:54.275716 2470 policy_none.go:49] "None policy: Start" Jan 30 13:47:54.275746 kubelet[2470]: I0130 13:47:54.275726 2470 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 30 13:47:54.275746 kubelet[2470]: I0130 13:47:54.275736 2470 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:47:54.275844 kubelet[2470]: I0130 13:47:54.275830 2470 state_mem.go:75] "Updated machine memory state" Jan 30 13:47:54.279629 kubelet[2470]: I0130 13:47:54.279602 2470 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:47:54.279964 kubelet[2470]: I0130 13:47:54.279797 2470 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:47:54.279964 kubelet[2470]: I0130 13:47:54.279815 2470 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:47:54.280046 kubelet[2470]: I0130 13:47:54.280025 2470 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:47:54.281350 kubelet[2470]: E0130 13:47:54.280742 2470 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 30 13:47:54.350357 kubelet[2470]: I0130 13:47:54.350314 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:54.350357 kubelet[2470]: I0130 13:47:54.350335 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:54.350357 kubelet[2470]: I0130 13:47:54.350363 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.355935 kubelet[2470]: E0130 13:47:54.355909 2470 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:54.385049 kubelet[2470]: I0130 13:47:54.385020 2470 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 30 13:47:54.392034 kubelet[2470]: I0130 13:47:54.391992 2470 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 30 13:47:54.392142 kubelet[2470]: I0130 13:47:54.392080 2470 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 30 13:47:54.535947 kubelet[2470]: I0130 13:47:54.535902 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.535947 kubelet[2470]: I0130 13:47:54.535947 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.536077 kubelet[2470]: I0130 13:47:54.535967 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.536077 kubelet[2470]: I0130 13:47:54.535989 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.536077 kubelet[2470]: I0130 13:47:54.536032 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:54.536163 kubelet[2470]: I0130 13:47:54.536071 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " 
pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:54.536163 kubelet[2470]: I0130 13:47:54.536104 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d9d04b070db9d0204013f2bedc8eb6f2-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d9d04b070db9d0204013f2bedc8eb6f2\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:54.536163 kubelet[2470]: I0130 13:47:54.536126 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:47:54.536163 kubelet[2470]: I0130 13:47:54.536145 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:47:54.655344 kubelet[2470]: E0130 13:47:54.655305 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:54.656344 kubelet[2470]: E0130 13:47:54.656293 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:54.656344 kubelet[2470]: E0130 13:47:54.656294 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:55.226991 kubelet[2470]: I0130 13:47:55.226927 2470 apiserver.go:52] "Watching apiserver" Jan 30 13:47:55.234628 kubelet[2470]: I0130 13:47:55.234574 2470 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:47:55.260938 kubelet[2470]: E0130 13:47:55.260305 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:55.260938 kubelet[2470]: E0130 13:47:55.260559 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:55.260938 kubelet[2470]: I0130 13:47:55.260682 2470 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:55.268368 kubelet[2470]: E0130 13:47:55.268330 2470 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:47:55.268500 kubelet[2470]: E0130 13:47:55.268468 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:55.291916 kubelet[2470]: I0130 13:47:55.291853 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.29183595 podStartE2EDuration="1.29183595s" 
podCreationTimestamp="2025-01-30 13:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:55.291176559 +0000 UTC m=+1.124090912" watchObservedRunningTime="2025-01-30 13:47:55.29183595 +0000 UTC m=+1.124750313" Jan 30 13:47:55.292128 kubelet[2470]: I0130 13:47:55.291988 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.291983532 podStartE2EDuration="3.291983532s" podCreationTimestamp="2025-01-30 13:47:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:55.282224323 +0000 UTC m=+1.115138686" watchObservedRunningTime="2025-01-30 13:47:55.291983532 +0000 UTC m=+1.124897895" Jan 30 13:47:55.298924 kubelet[2470]: I0130 13:47:55.298853 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.298832567 podStartE2EDuration="1.298832567s" podCreationTimestamp="2025-01-30 13:47:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:47:55.298370363 +0000 UTC m=+1.131284726" watchObservedRunningTime="2025-01-30 13:47:55.298832567 +0000 UTC m=+1.131746920" Jan 30 13:47:55.459465 sudo[1609]: pam_unix(sudo:session): session closed for user root Jan 30 13:47:55.461373 sshd[1606]: pam_unix(sshd:session): session closed for user core Jan 30 13:47:55.465413 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:56884.service: Deactivated successfully. Jan 30 13:47:55.467401 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:47:55.467628 systemd[1]: session-7.scope: Consumed 2.967s CPU time, 155.6M memory peak, 0B memory swap peak. Jan 30 13:47:55.468302 systemd-logind[1448]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:47:55.469153 systemd-logind[1448]: Removed session 7. Jan 30 13:47:56.261118 kubelet[2470]: E0130 13:47:56.261087 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:56.261497 kubelet[2470]: E0130 13:47:56.261181 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:56.667618 kubelet[2470]: E0130 13:47:56.667513 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:47:57.262580 kubelet[2470]: E0130 13:47:57.262544 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:00.010507 kubelet[2470]: I0130 13:48:00.010468 2470 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:48:00.010988 kubelet[2470]: I0130 13:48:00.010879 2470 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:48:00.011039 containerd[1459]: time="2025-01-30T13:48:00.010731825Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
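The kuberuntime_manager and kubelet_network entries just above record the node receiving PodCIDR 192.168.0.0/24 (originalPodCIDR was empty). Parsing that CIDR shows the address pool the node can hand out to pods; a short Go check of the arithmetic:

package main

import (
	"fmt"
	"net"
)

func main() {
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24") // value from the log
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	total := 1 << (bits - ones) // 256 addresses in a /24
	fmt.Printf("network %v: %d addresses (%d usable after network/broadcast)\n",
		ipnet, total, total-2)
}
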
Jan 30 13:48:01.048204 systemd[1]: Created slice kubepods-burstable-pod97237290_82f8_4d15_9902_d9f2e7236284.slice - libcontainer container kubepods-burstable-pod97237290_82f8_4d15_9902_d9f2e7236284.slice. Jan 30 13:48:01.055370 systemd[1]: Created slice kubepods-besteffort-pod44cb5e12_1bfd_4665_859a_38e1db9818db.slice - libcontainer container kubepods-besteffort-pod44cb5e12_1bfd_4665_859a_38e1db9818db.slice. Jan 30 13:48:01.077894 kubelet[2470]: I0130 13:48:01.077856 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/97237290-82f8-4d15-9902-d9f2e7236284-flannel-cfg\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.077894 kubelet[2470]: I0130 13:48:01.077898 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44cb5e12-1bfd-4665-859a-38e1db9818db-kube-proxy\") pod \"kube-proxy-qwx65\" (UID: \"44cb5e12-1bfd-4665-859a-38e1db9818db\") " pod="kube-system/kube-proxy-qwx65" Jan 30 13:48:01.078581 kubelet[2470]: I0130 13:48:01.077920 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/97237290-82f8-4d15-9902-d9f2e7236284-cni-plugin\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.078581 kubelet[2470]: I0130 13:48:01.077937 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/97237290-82f8-4d15-9902-d9f2e7236284-cni\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.078581 kubelet[2470]: I0130 13:48:01.077955 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-st4dq\" (UniqueName: \"kubernetes.io/projected/44cb5e12-1bfd-4665-859a-38e1db9818db-kube-api-access-st4dq\") pod \"kube-proxy-qwx65\" (UID: \"44cb5e12-1bfd-4665-859a-38e1db9818db\") " pod="kube-system/kube-proxy-qwx65" Jan 30 13:48:01.078581 kubelet[2470]: I0130 13:48:01.077972 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97237290-82f8-4d15-9902-d9f2e7236284-run\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.078581 kubelet[2470]: I0130 13:48:01.077991 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8wnb\" (UniqueName: \"kubernetes.io/projected/97237290-82f8-4d15-9902-d9f2e7236284-kube-api-access-l8wnb\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.078762 kubelet[2470]: I0130 13:48:01.078018 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44cb5e12-1bfd-4665-859a-38e1db9818db-xtables-lock\") pod \"kube-proxy-qwx65\" (UID: \"44cb5e12-1bfd-4665-859a-38e1db9818db\") " pod="kube-system/kube-proxy-qwx65" Jan 30 13:48:01.078762 kubelet[2470]: I0130 13:48:01.078059 2470 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/97237290-82f8-4d15-9902-d9f2e7236284-xtables-lock\") pod \"kube-flannel-ds-mzp59\" (UID: \"97237290-82f8-4d15-9902-d9f2e7236284\") " pod="kube-flannel/kube-flannel-ds-mzp59" Jan 30 13:48:01.078762 kubelet[2470]: I0130 13:48:01.078091 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44cb5e12-1bfd-4665-859a-38e1db9818db-lib-modules\") pod \"kube-proxy-qwx65\" (UID: \"44cb5e12-1bfd-4665-859a-38e1db9818db\") " pod="kube-system/kube-proxy-qwx65" Jan 30 13:48:01.352637 kubelet[2470]: E0130 13:48:01.352509 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:01.353098 containerd[1459]: time="2025-01-30T13:48:01.353047196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mzp59,Uid:97237290-82f8-4d15-9902-d9f2e7236284,Namespace:kube-flannel,Attempt:0,}" Jan 30 13:48:01.365879 kubelet[2470]: E0130 13:48:01.365831 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:01.366680 containerd[1459]: time="2025-01-30T13:48:01.366509146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwx65,Uid:44cb5e12-1bfd-4665-859a-38e1db9818db,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:01.375189 containerd[1459]: time="2025-01-30T13:48:01.375086556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:01.375189 containerd[1459]: time="2025-01-30T13:48:01.375151979Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:01.375189 containerd[1459]: time="2025-01-30T13:48:01.375166057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:01.375407 containerd[1459]: time="2025-01-30T13:48:01.375268170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:01.397153 containerd[1459]: time="2025-01-30T13:48:01.396903392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:01.397153 containerd[1459]: time="2025-01-30T13:48:01.397008202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:01.397153 containerd[1459]: time="2025-01-30T13:48:01.397060190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:01.397346 containerd[1459]: time="2025-01-30T13:48:01.397135073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:01.397678 systemd[1]: Started cri-containerd-3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41.scope - libcontainer container 3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41. 
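
The reconciler entries above are the kubelet verifying each volume declared by the kube-flannel and kube-proxy DaemonSets before it lets their containers start. As a rough sketch of the pod spec behind those entries (the volume names match the log; the host paths and the kube-flannel-cfg ConfigMap name are assumptions taken from the stock kube-flannel manifest, not from this node), in Go using the k8s.io/api types:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Volume names as seen in the reconciler log lines; host paths are
	// assumptions from the stock kube-flannel DaemonSet manifest.
	volumes := []corev1.Volume{
		{Name: "run", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/run/flannel"}}},
		{Name: "cni-plugin", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/opt/cni/bin"}}},
		{Name: "cni", VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{Path: "/etc/cni/net.d"}}},
		{Name: "flannel-cfg", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "kube-flannel-cfg"}}}},
	}
	for _, v := range volumes {
		fmt.Println(v.Name)
	}
}
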
Jan 30 13:48:01.416656 systemd[1]: Started cri-containerd-71ca64013b375c0ecee767eff041160b966a8fed4b5291e0bb24ad287633d216.scope - libcontainer container 71ca64013b375c0ecee767eff041160b966a8fed4b5291e0bb24ad287633d216. Jan 30 13:48:01.441583 containerd[1459]: time="2025-01-30T13:48:01.439170128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qwx65,Uid:44cb5e12-1bfd-4665-859a-38e1db9818db,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ca64013b375c0ecee767eff041160b966a8fed4b5291e0bb24ad287633d216\"" Jan 30 13:48:01.441718 kubelet[2470]: E0130 13:48:01.439902 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:01.442373 containerd[1459]: time="2025-01-30T13:48:01.442308291Z" level=info msg="CreateContainer within sandbox \"71ca64013b375c0ecee767eff041160b966a8fed4b5291e0bb24ad287633d216\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:48:01.449341 containerd[1459]: time="2025-01-30T13:48:01.449285420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-mzp59,Uid:97237290-82f8-4d15-9902-d9f2e7236284,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\"" Jan 30 13:48:01.450092 kubelet[2470]: E0130 13:48:01.449937 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:01.450819 containerd[1459]: time="2025-01-30T13:48:01.450785620Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 13:48:01.461727 containerd[1459]: time="2025-01-30T13:48:01.461686024Z" level=info msg="CreateContainer within sandbox \"71ca64013b375c0ecee767eff041160b966a8fed4b5291e0bb24ad287633d216\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cb751bad3ecd3f403008e851104d456fcaa866ea112452224e007612ab3e8694\"" Jan 30 13:48:01.462146 containerd[1459]: time="2025-01-30T13:48:01.462121742Z" level=info msg="StartContainer for \"cb751bad3ecd3f403008e851104d456fcaa866ea112452224e007612ab3e8694\"" Jan 30 13:48:01.487667 systemd[1]: Started cri-containerd-cb751bad3ecd3f403008e851104d456fcaa866ea112452224e007612ab3e8694.scope - libcontainer container cb751bad3ecd3f403008e851104d456fcaa866ea112452224e007612ab3e8694. 
Jan 30 13:48:01.514273 containerd[1459]: time="2025-01-30T13:48:01.514232714Z" level=info msg="StartContainer for \"cb751bad3ecd3f403008e851104d456fcaa866ea112452224e007612ab3e8694\" returns successfully" Jan 30 13:48:02.270669 kubelet[2470]: E0130 13:48:02.270637 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:02.278094 kubelet[2470]: I0130 13:48:02.277970 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qwx65" podStartSLOduration=1.277953482 podStartE2EDuration="1.277953482s" podCreationTimestamp="2025-01-30 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:02.277751148 +0000 UTC m=+8.110665511" watchObservedRunningTime="2025-01-30 13:48:02.277953482 +0000 UTC m=+8.110867845" Jan 30 13:48:02.763635 kubelet[2470]: E0130 13:48:02.763606 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:03.166341 update_engine[1449]: I20250130 13:48:03.166153 1449 update_attempter.cc:509] Updating boot flags... Jan 30 13:48:03.203578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2790) Jan 30 13:48:03.244558 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2791) Jan 30 13:48:03.278548 kubelet[2470]: E0130 13:48:03.275785 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:03.303604 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100319912.mount: Deactivated successfully. 
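
The recurring dns.go:153 warning is the kubelet noting that the node's resolv.conf lists more nameservers than the resolver limit (three on Linux, since glibc only consults the first three); it keeps the first three and drops the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A minimal sketch of that truncation, using a hypothetical resolv.conf with a fourth server (the node's real file is not in the log):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical resolv.conf; the applied line in the log implies the
	// first three entries were kept and anything beyond them was omitted.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 9.9.9.9
`
	const maxNameservers = 3 // glibc honors only the first three
	var servers []string
	for _, line := range strings.Split(resolvConf, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded, applying: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
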
Jan 30 13:48:03.595881 containerd[1459]: time="2025-01-30T13:48:03.595832201Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:03.596680 containerd[1459]: time="2025-01-30T13:48:03.596646246Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 30 13:48:03.597744 containerd[1459]: time="2025-01-30T13:48:03.597718439Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:03.599874 containerd[1459]: time="2025-01-30T13:48:03.599838562Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:03.600457 containerd[1459]: time="2025-01-30T13:48:03.600421057Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.149608395s" Jan 30 13:48:03.600457 containerd[1459]: time="2025-01-30T13:48:03.600448599Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 13:48:03.602044 containerd[1459]: time="2025-01-30T13:48:03.602017125Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 13:48:03.613496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532860686.mount: Deactivated successfully. Jan 30 13:48:03.614613 containerd[1459]: time="2025-01-30T13:48:03.614583756Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2\"" Jan 30 13:48:03.615016 containerd[1459]: time="2025-01-30T13:48:03.615001019Z" level=info msg="StartContainer for \"59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2\"" Jan 30 13:48:03.643666 systemd[1]: Started cri-containerd-59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2.scope - libcontainer container 59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2. Jan 30 13:48:03.668102 systemd[1]: cri-containerd-59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2.scope: Deactivated successfully. 
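
The install-cni-plugin container created here is a short-lived init container: it exits as soon as it has copied the flannel CNI binary into the hostPath-mounted /opt/cni/bin, which is why its scope is deactivated moments after it starts, and why the "shim disconnected" lines that follow are just the runtime cleaning up after that normal exit. A sketch of the copy it performs, with source and destination paths assumed from the stock kube-flannel manifest (the container itself effectively runs `cp -f /flannel /opt/cni/bin/flannel`):

package main

import (
	"io"
	"log"
	"os"
)

func main() {
	// Copy the flannel CNI binary shipped in the image onto the host.
	src, err := os.Open("/flannel")
	if err != nil {
		log.Fatal(err)
	}
	defer src.Close()

	dst, err := os.OpenFile("/opt/cni/bin/flannel",
		os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	if _, err := io.Copy(dst, src); err != nil {
		log.Fatal(err)
	}
}
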
Jan 30 13:48:03.679485 containerd[1459]: time="2025-01-30T13:48:03.679438660Z" level=info msg="StartContainer for \"59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2\" returns successfully" Jan 30 13:48:03.706761 containerd[1459]: time="2025-01-30T13:48:03.706697906Z" level=info msg="shim disconnected" id=59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2 namespace=k8s.io Jan 30 13:48:03.706761 containerd[1459]: time="2025-01-30T13:48:03.706752239Z" level=warning msg="cleaning up after shim disconnected" id=59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2 namespace=k8s.io Jan 30 13:48:03.706761 containerd[1459]: time="2025-01-30T13:48:03.706760075Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:48:04.192396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f5f1cdad8f78030fc71906e9e733c997ad8bdf68ed406c81c2ae6ffe81a6a2-rootfs.mount: Deactivated successfully. Jan 30 13:48:04.278283 kubelet[2470]: E0130 13:48:04.278214 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:04.278990 kubelet[2470]: E0130 13:48:04.278249 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:04.279627 containerd[1459]: time="2025-01-30T13:48:04.279594476Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 13:48:05.688066 kubelet[2470]: E0130 13:48:05.688036 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:06.015937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898852965.mount: Deactivated successfully. 
Jan 30 13:48:06.671396 kubelet[2470]: E0130 13:48:06.671366 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:07.538084 containerd[1459]: time="2025-01-30T13:48:07.538014438Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.538739 containerd[1459]: time="2025-01-30T13:48:07.538665390Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866357" Jan 30 13:48:07.539839 containerd[1459]: time="2025-01-30T13:48:07.539805747Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.542438 containerd[1459]: time="2025-01-30T13:48:07.542399886Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:07.543511 containerd[1459]: time="2025-01-30T13:48:07.543462296Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.263832222s" Jan 30 13:48:07.543588 containerd[1459]: time="2025-01-30T13:48:07.543509566Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 13:48:07.545518 containerd[1459]: time="2025-01-30T13:48:07.545469905Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:48:07.558311 containerd[1459]: time="2025-01-30T13:48:07.558249968Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9\"" Jan 30 13:48:07.558703 containerd[1459]: time="2025-01-30T13:48:07.558670975Z" level=info msg="StartContainer for \"8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9\"" Jan 30 13:48:07.592764 systemd[1]: Started cri-containerd-8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9.scope - libcontainer container 8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9. Jan 30 13:48:07.620967 systemd[1]: cri-containerd-8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9.scope: Deactivated successfully. 
Jan 30 13:48:07.661413 kubelet[2470]: I0130 13:48:07.661383 2470 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 30 13:48:07.893076 containerd[1459]: time="2025-01-30T13:48:07.892305740Z" level=info msg="StartContainer for \"8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9\" returns successfully" Jan 30 13:48:07.908093 systemd[1]: Created slice kubepods-burstable-podb68fb48f_815d_40cf_8af1_0d9ea68f90b3.slice - libcontainer container kubepods-burstable-podb68fb48f_815d_40cf_8af1_0d9ea68f90b3.slice. Jan 30 13:48:07.914881 systemd[1]: Created slice kubepods-burstable-pod769809ce_02d5_4e80_9575_78db2ea3a7f3.slice - libcontainer container kubepods-burstable-pod769809ce_02d5_4e80_9575_78db2ea3a7f3.slice. Jan 30 13:48:07.922681 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9-rootfs.mount: Deactivated successfully. Jan 30 13:48:07.927254 kubelet[2470]: I0130 13:48:07.927193 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbclt\" (UniqueName: \"kubernetes.io/projected/769809ce-02d5-4e80-9575-78db2ea3a7f3-kube-api-access-qbclt\") pod \"coredns-668d6bf9bc-c4vw6\" (UID: \"769809ce-02d5-4e80-9575-78db2ea3a7f3\") " pod="kube-system/coredns-668d6bf9bc-c4vw6" Jan 30 13:48:07.927254 kubelet[2470]: I0130 13:48:07.927243 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b68fb48f-815d-40cf-8af1-0d9ea68f90b3-config-volume\") pod \"coredns-668d6bf9bc-4mhrz\" (UID: \"b68fb48f-815d-40cf-8af1-0d9ea68f90b3\") " pod="kube-system/coredns-668d6bf9bc-4mhrz" Jan 30 13:48:07.927440 kubelet[2470]: I0130 13:48:07.927278 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfs4z\" (UniqueName: \"kubernetes.io/projected/b68fb48f-815d-40cf-8af1-0d9ea68f90b3-kube-api-access-vfs4z\") pod \"coredns-668d6bf9bc-4mhrz\" (UID: \"b68fb48f-815d-40cf-8af1-0d9ea68f90b3\") " pod="kube-system/coredns-668d6bf9bc-4mhrz" Jan 30 13:48:07.927440 kubelet[2470]: I0130 13:48:07.927303 2470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/769809ce-02d5-4e80-9575-78db2ea3a7f3-config-volume\") pod \"coredns-668d6bf9bc-c4vw6\" (UID: \"769809ce-02d5-4e80-9575-78db2ea3a7f3\") " pod="kube-system/coredns-668d6bf9bc-c4vw6" Jan 30 13:48:07.928100 containerd[1459]: time="2025-01-30T13:48:07.928035450Z" level=info msg="shim disconnected" id=8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9 namespace=k8s.io Jan 30 13:48:07.928170 containerd[1459]: time="2025-01-30T13:48:07.928104992Z" level=warning msg="cleaning up after shim disconnected" id=8dc33855de2bf549190aa63f7ad7dc8fa474161fae737692362cd573511385e9 namespace=k8s.io Jan 30 13:48:07.928170 containerd[1459]: time="2025-01-30T13:48:07.928116133Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:48:08.212906 kubelet[2470]: E0130 13:48:08.212738 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:08.213409 containerd[1459]: time="2025-01-30T13:48:08.213364054Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-4mhrz,Uid:b68fb48f-815d-40cf-8af1-0d9ea68f90b3,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:08.218897 kubelet[2470]: E0130 13:48:08.218860 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:08.219903 containerd[1459]: time="2025-01-30T13:48:08.219842437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4vw6,Uid:769809ce-02d5-4e80-9575-78db2ea3a7f3,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:08.254889 containerd[1459]: time="2025-01-30T13:48:08.254812631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mhrz,Uid:b68fb48f-815d-40cf-8af1-0d9ea68f90b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6311be9b5f5a71b1b337c8619a3caead146e2c22223effd16ed8c64ab3750936\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:48:08.255035 kubelet[2470]: E0130 13:48:08.254964 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6311be9b5f5a71b1b337c8619a3caead146e2c22223effd16ed8c64ab3750936\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:48:08.255035 kubelet[2470]: E0130 13:48:08.255024 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6311be9b5f5a71b1b337c8619a3caead146e2c22223effd16ed8c64ab3750936\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4mhrz" Jan 30 13:48:08.255125 kubelet[2470]: E0130 13:48:08.255047 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6311be9b5f5a71b1b337c8619a3caead146e2c22223effd16ed8c64ab3750936\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4mhrz" Jan 30 13:48:08.255125 kubelet[2470]: E0130 13:48:08.255112 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4mhrz_kube-system(b68fb48f-815d-40cf-8af1-0d9ea68f90b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4mhrz_kube-system(b68fb48f-815d-40cf-8af1-0d9ea68f90b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6311be9b5f5a71b1b337c8619a3caead146e2c22223effd16ed8c64ab3750936\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-4mhrz" podUID="b68fb48f-815d-40cf-8af1-0d9ea68f90b3" Jan 30 13:48:08.258635 containerd[1459]: time="2025-01-30T13:48:08.258592269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4vw6,Uid:769809ce-02d5-4e80-9575-78db2ea3a7f3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b642bc622a4ced83be3c63fb6f37dd7def97b7a7bf7c1aa0d60d92f37ff24c8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open 
/run/flannel/subnet.env: no such file or directory" Jan 30 13:48:08.258790 kubelet[2470]: E0130 13:48:08.258753 2470 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b642bc622a4ced83be3c63fb6f37dd7def97b7a7bf7c1aa0d60d92f37ff24c8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jan 30 13:48:08.258824 kubelet[2470]: E0130 13:48:08.258803 2470 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b642bc622a4ced83be3c63fb6f37dd7def97b7a7bf7c1aa0d60d92f37ff24c8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-c4vw6" Jan 30 13:48:08.258852 kubelet[2470]: E0130 13:48:08.258822 2470 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b642bc622a4ced83be3c63fb6f37dd7def97b7a7bf7c1aa0d60d92f37ff24c8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-c4vw6" Jan 30 13:48:08.258876 kubelet[2470]: E0130 13:48:08.258859 2470 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-c4vw6_kube-system(769809ce-02d5-4e80-9575-78db2ea3a7f3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-c4vw6_kube-system(769809ce-02d5-4e80-9575-78db2ea3a7f3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b642bc622a4ced83be3c63fb6f37dd7def97b7a7bf7c1aa0d60d92f37ff24c8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-c4vw6" podUID="769809ce-02d5-4e80-9575-78db2ea3a7f3" Jan 30 13:48:08.284978 kubelet[2470]: E0130 13:48:08.284948 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:08.286354 containerd[1459]: time="2025-01-30T13:48:08.286318805Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 13:48:08.299953 containerd[1459]: time="2025-01-30T13:48:08.299910720Z" level=info msg="CreateContainer within sandbox \"3764512e90d127462a26d5465a6de491c505e21c19c38d9474b896ff51950f41\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"21b7b982674980a471d4c1b0e1bf60f7e0ea0ca68ffc0e05dfa840135d0ea333\"" Jan 30 13:48:08.300646 containerd[1459]: time="2025-01-30T13:48:08.300329482Z" level=info msg="StartContainer for \"21b7b982674980a471d4c1b0e1bf60f7e0ea0ca68ffc0e05dfa840135d0ea333\"" Jan 30 13:48:08.331661 systemd[1]: Started cri-containerd-21b7b982674980a471d4c1b0e1bf60f7e0ea0ca68ffc0e05dfa840135d0ea333.scope - libcontainer container 21b7b982674980a471d4c1b0e1bf60f7e0ea0ca68ffc0e05dfa840135d0ea333. 
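
The two RunPodSandbox failures above reflect expected ordering rather than a fault: the flannel CNI plugin is already on disk (installed by the init container), but it refuses to wire up a pod until flanneld has written /run/flannel/subnet.env, which only happens once the kube-flannel container started at the end of this block acquires its subnet lease. A sketch of the file it is waiting on and the loadFlannelSubnetEnv-style parse; the values are inferred from the delegate CNI config logged below (subnet 192.168.0.0/24, route to 192.168.0.0/17, MTU 1450), not read from the node:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Inferred contents of /run/flannel/subnet.env after flanneld starts.
	subnetEnv := `FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
`
	env := map[string]string{}
	for _, line := range strings.Split(strings.TrimSpace(subnetEnv), "\n") {
		if k, v, ok := strings.Cut(line, "="); ok {
			env[k] = v
		}
	}
	// The pod CIDR the CNI plugin will carve addresses from.
	fmt.Println(env["FLANNEL_SUBNET"])
}
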
Jan 30 13:48:08.361224 containerd[1459]: time="2025-01-30T13:48:08.361170483Z" level=info msg="StartContainer for \"21b7b982674980a471d4c1b0e1bf60f7e0ea0ca68ffc0e05dfa840135d0ea333\" returns successfully" Jan 30 13:48:09.288614 kubelet[2470]: E0130 13:48:09.288582 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:09.299516 kubelet[2470]: I0130 13:48:09.299460 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-mzp59" podStartSLOduration=2.205622213 podStartE2EDuration="8.299443507s" podCreationTimestamp="2025-01-30 13:48:01 +0000 UTC" firstStartedPulling="2025-01-30 13:48:01.450392885 +0000 UTC m=+7.283307248" lastFinishedPulling="2025-01-30 13:48:07.544214179 +0000 UTC m=+13.377128542" observedRunningTime="2025-01-30 13:48:09.298956697 +0000 UTC m=+15.131871060" watchObservedRunningTime="2025-01-30 13:48:09.299443507 +0000 UTC m=+15.132357860" Jan 30 13:48:09.407507 systemd-networkd[1397]: flannel.1: Link UP Jan 30 13:48:09.407518 systemd-networkd[1397]: flannel.1: Gained carrier Jan 30 13:48:10.289897 kubelet[2470]: E0130 13:48:10.289866 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:10.427727 systemd-networkd[1397]: flannel.1: Gained IPv6LL Jan 30 13:48:19.327602 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:46318.service - OpenSSH per-connection server daemon (10.0.0.1:46318). Jan 30 13:48:19.363859 sshd[3149]: Accepted publickey for core from 10.0.0.1 port 46318 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:19.365398 sshd[3149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:19.369375 systemd-logind[1448]: New session 8 of user core. Jan 30 13:48:19.374659 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:48:19.485216 sshd[3149]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:19.488815 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:46318.service: Deactivated successfully. Jan 30 13:48:19.490957 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:48:19.491540 systemd-logind[1448]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:48:19.492317 systemd-logind[1448]: Removed session 8. 
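
The two durations in the kube-flannel startup-latency record above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (13:48:09.299443507 − 13:48:01 = 8.299443507s), and podStartSLOduration excludes the image-pull window, 8.299443507 − (13:48:07.544214179 − 13:48:01.450392885) = 8.299443507 − 6.093821294 = 2.205622213s. For kube-proxy earlier, both pull timestamps were the zero value, so its SLO and E2E durations were equal.
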
Jan 30 13:48:21.250243 kubelet[2470]: E0130 13:48:21.250193 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:21.250243 kubelet[2470]: E0130 13:48:21.250229 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:21.251283 containerd[1459]: time="2025-01-30T13:48:21.250587387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4vw6,Uid:769809ce-02d5-4e80-9575-78db2ea3a7f3,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:21.251283 containerd[1459]: time="2025-01-30T13:48:21.250587527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mhrz,Uid:b68fb48f-815d-40cf-8af1-0d9ea68f90b3,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:21.300423 systemd-networkd[1397]: cni0: Link UP Jan 30 13:48:21.300431 systemd-networkd[1397]: cni0: Gained carrier Jan 30 13:48:21.304439 systemd-networkd[1397]: cni0: Lost carrier Jan 30 13:48:21.310956 systemd-networkd[1397]: veth2728954a: Link UP Jan 30 13:48:21.313160 kernel: cni0: port 1(veth2728954a) entered blocking state Jan 30 13:48:21.313224 kernel: cni0: port 1(veth2728954a) entered disabled state Jan 30 13:48:21.313909 kernel: veth2728954a: entered allmulticast mode Jan 30 13:48:21.315547 kernel: veth2728954a: entered promiscuous mode Jan 30 13:48:21.317160 kernel: cni0: port 1(veth2728954a) entered blocking state Jan 30 13:48:21.317198 kernel: cni0: port 1(veth2728954a) entered forwarding state Jan 30 13:48:21.317224 kernel: cni0: port 1(veth2728954a) entered disabled state Jan 30 13:48:21.319242 systemd-networkd[1397]: veth9b8426b4: Link UP Jan 30 13:48:21.320597 kernel: cni0: port 2(veth9b8426b4) entered blocking state Jan 30 13:48:21.320637 kernel: cni0: port 2(veth9b8426b4) entered disabled state Jan 30 13:48:21.323162 kernel: veth9b8426b4: entered allmulticast mode Jan 30 13:48:21.323198 kernel: veth9b8426b4: entered promiscuous mode Jan 30 13:48:21.325559 kernel: cni0: port 2(veth9b8426b4) entered blocking state Jan 30 13:48:21.325615 kernel: cni0: port 2(veth9b8426b4) entered forwarding state Jan 30 13:48:21.325638 kernel: cni0: port 2(veth9b8426b4) entered disabled state Jan 30 13:48:21.331839 kernel: cni0: port 1(veth2728954a) entered blocking state Jan 30 13:48:21.331886 kernel: cni0: port 1(veth2728954a) entered forwarding state Jan 30 13:48:21.332061 systemd-networkd[1397]: veth2728954a: Gained carrier Jan 30 13:48:21.332400 systemd-networkd[1397]: cni0: Gained carrier Jan 30 13:48:21.334212 systemd-networkd[1397]: veth9b8426b4: Gained carrier Jan 30 13:48:21.334647 kernel: cni0: port 2(veth9b8426b4) entered blocking state Jan 30 13:48:21.334694 kernel: cni0: port 2(veth9b8426b4) entered forwarding state Jan 30 13:48:21.335916 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000024938), "name":"cbr0", "type":"bridge"} Jan 30 13:48:21.335916 containerd[1459]: delegateAdd: netconf sent to delegate plugin: 
Jan 30 13:48:21.336688 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 30 13:48:21.336688 containerd[1459]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Jan 30 13:48:21.336688 containerd[1459]: delegateAdd: netconf sent to delegate plugin: Jan 30 13:48:21.362221 containerd[1459]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:48:21.362083994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:21.362221 containerd[1459]: time="2025-01-30T13:48:21.362154166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:21.362221 containerd[1459]: time="2025-01-30T13:48:21.362172812Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:21.362453 containerd[1459]: time="2025-01-30T13:48:21.362285253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:21.363473 containerd[1459]: time="2025-01-30T13:48:21.363374503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:48:21.363473 containerd[1459]: time="2025-01-30T13:48:21.363445477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:48:21.363687 containerd[1459]: time="2025-01-30T13:48:21.363463440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:21.363845 containerd[1459]: time="2025-01-30T13:48:21.363750000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:48:21.382715 systemd[1]: Started cri-containerd-ea4c30d28f3f949b95da66d685ccbe9105c055accb5e102e6ed60710c760ab9b.scope - libcontainer container ea4c30d28f3f949b95da66d685ccbe9105c055accb5e102e6ed60710c760ab9b. Jan 30 13:48:21.386293 systemd[1]: Started cri-containerd-8ad73fc99c999a45d681cd2846d49ceb4d4ad463c51a94518708b2ed2d9c982a.scope - libcontainer container 8ad73fc99c999a45d681cd2846d49ceb4d4ad463c51a94518708b2ed2d9c982a. 
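
The netconf dumps above show how the flannel CNI plugin actually works: it creates no pod interfaces itself, but rewrites its own config into a bridge-plugin config (bridge cbr0, host-local IPAM over this node's 192.168.0.0/24, VXLAN-sized MTU of 1450) and delegates the add to that plugin; the cni0/veth kernel messages just above are the bridge plugin doing that work, once per sandbox. A small Go sketch reading the delegated config exactly as it appears in the log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// The delegate config copied verbatim from the log lines above.
const netconf = `{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}`

func main() {
	var conf struct {
		Type string `json:"type"`
		Name string `json:"name"`
		MTU  int    `json:"mtu"`
		IPAM struct {
			Type   string `json:"type"`
			Ranges [][]struct {
				Subnet string `json:"subnet"`
			} `json:"ranges"`
		} `json:"ipam"`
	}
	if err := json.Unmarshal([]byte(netconf), &conf); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s %q via %s IPAM, subnet %s, mtu %d\n",
		conf.Type, conf.Name, conf.IPAM.Type,
		conf.IPAM.Ranges[0][0].Subnet, conf.MTU)
}
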
Jan 30 13:48:21.396418 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:48:21.400327 systemd-resolved[1328]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:48:21.425878 containerd[1459]: time="2025-01-30T13:48:21.425840266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4mhrz,Uid:b68fb48f-815d-40cf-8af1-0d9ea68f90b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4c30d28f3f949b95da66d685ccbe9105c055accb5e102e6ed60710c760ab9b\"" Jan 30 13:48:21.426961 kubelet[2470]: E0130 13:48:21.426937 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:21.429143 containerd[1459]: time="2025-01-30T13:48:21.429094461Z" level=info msg="CreateContainer within sandbox \"ea4c30d28f3f949b95da66d685ccbe9105c055accb5e102e6ed60710c760ab9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:48:21.433985 containerd[1459]: time="2025-01-30T13:48:21.433925925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-c4vw6,Uid:769809ce-02d5-4e80-9575-78db2ea3a7f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ad73fc99c999a45d681cd2846d49ceb4d4ad463c51a94518708b2ed2d9c982a\"" Jan 30 13:48:21.435319 kubelet[2470]: E0130 13:48:21.435293 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:21.438197 containerd[1459]: time="2025-01-30T13:48:21.438161326Z" level=info msg="CreateContainer within sandbox \"8ad73fc99c999a45d681cd2846d49ceb4d4ad463c51a94518708b2ed2d9c982a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:48:21.452275 containerd[1459]: time="2025-01-30T13:48:21.452232392Z" level=info msg="CreateContainer within sandbox \"ea4c30d28f3f949b95da66d685ccbe9105c055accb5e102e6ed60710c760ab9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d606c94b0c020892aa8d4a3d6617e75fb2ac17661c89035bc14250f89c5f22db\"" Jan 30 13:48:21.453124 containerd[1459]: time="2025-01-30T13:48:21.453087281Z" level=info msg="StartContainer for \"d606c94b0c020892aa8d4a3d6617e75fb2ac17661c89035bc14250f89c5f22db\"" Jan 30 13:48:21.459706 containerd[1459]: time="2025-01-30T13:48:21.459657036Z" level=info msg="CreateContainer within sandbox \"8ad73fc99c999a45d681cd2846d49ceb4d4ad463c51a94518708b2ed2d9c982a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bac8ca160220d83c431b376c215f8f544e19d945179113f3d4f7f99e6647d35d\"" Jan 30 13:48:21.460395 containerd[1459]: time="2025-01-30T13:48:21.460251415Z" level=info msg="StartContainer for \"bac8ca160220d83c431b376c215f8f544e19d945179113f3d4f7f99e6647d35d\"" Jan 30 13:48:21.489722 systemd[1]: Started cri-containerd-d606c94b0c020892aa8d4a3d6617e75fb2ac17661c89035bc14250f89c5f22db.scope - libcontainer container d606c94b0c020892aa8d4a3d6617e75fb2ac17661c89035bc14250f89c5f22db. Jan 30 13:48:21.492872 systemd[1]: Started cri-containerd-bac8ca160220d83c431b376c215f8f544e19d945179113f3d4f7f99e6647d35d.scope - libcontainer container bac8ca160220d83c431b376c215f8f544e19d945179113f3d4f7f99e6647d35d. 
Jan 30 13:48:21.523869 containerd[1459]: time="2025-01-30T13:48:21.523826044Z" level=info msg="StartContainer for \"d606c94b0c020892aa8d4a3d6617e75fb2ac17661c89035bc14250f89c5f22db\" returns successfully" Jan 30 13:48:21.524027 containerd[1459]: time="2025-01-30T13:48:21.523903761Z" level=info msg="StartContainer for \"bac8ca160220d83c431b376c215f8f544e19d945179113f3d4f7f99e6647d35d\" returns successfully" Jan 30 13:48:22.311806 kubelet[2470]: E0130 13:48:22.311768 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:22.313226 kubelet[2470]: E0130 13:48:22.313208 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:22.320268 kubelet[2470]: I0130 13:48:22.320101 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-c4vw6" podStartSLOduration=21.320085173 podStartE2EDuration="21.320085173s" podCreationTimestamp="2025-01-30 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:22.319817561 +0000 UTC m=+28.152731924" watchObservedRunningTime="2025-01-30 13:48:22.320085173 +0000 UTC m=+28.152999546" Jan 30 13:48:22.343152 kubelet[2470]: I0130 13:48:22.343090 2470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4mhrz" podStartSLOduration=21.343067671 podStartE2EDuration="21.343067671s" podCreationTimestamp="2025-01-30 13:48:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:48:22.330104959 +0000 UTC m=+28.163019312" watchObservedRunningTime="2025-01-30 13:48:22.343067671 +0000 UTC m=+28.175982035" Jan 30 13:48:22.971672 systemd-networkd[1397]: cni0: Gained IPv6LL Jan 30 13:48:23.035618 systemd-networkd[1397]: veth2728954a: Gained IPv6LL Jan 30 13:48:23.035957 systemd-networkd[1397]: veth9b8426b4: Gained IPv6LL Jan 30 13:48:23.314325 kubelet[2470]: E0130 13:48:23.314298 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:23.314799 kubelet[2470]: E0130 13:48:23.314306 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:24.315065 kubelet[2470]: E0130 13:48:24.315034 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:24.315622 kubelet[2470]: E0130 13:48:24.315112 2470 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:48:24.496405 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:46328.service - OpenSSH per-connection server daemon (10.0.0.1:46328). 
Jan 30 13:48:24.532124 sshd[3446]: Accepted publickey for core from 10.0.0.1 port 46328 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:24.533837 sshd[3446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:24.537745 systemd-logind[1448]: New session 9 of user core. Jan 30 13:48:24.542724 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:48:24.652788 sshd[3446]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:24.657447 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:46328.service: Deactivated successfully. Jan 30 13:48:24.659426 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:48:24.659984 systemd-logind[1448]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:48:24.660769 systemd-logind[1448]: Removed session 9. Jan 30 13:48:29.666566 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:50852.service - OpenSSH per-connection server daemon (10.0.0.1:50852). Jan 30 13:48:29.700892 sshd[3498]: Accepted publickey for core from 10.0.0.1 port 50852 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:29.702372 sshd[3498]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:29.705878 systemd-logind[1448]: New session 10 of user core. Jan 30 13:48:29.715662 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:48:29.821369 sshd[3498]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:29.831104 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:50852.service: Deactivated successfully. Jan 30 13:48:29.832994 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:48:29.834630 systemd-logind[1448]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:48:29.847773 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:50854.service - OpenSSH per-connection server daemon (10.0.0.1:50854). Jan 30 13:48:29.848719 systemd-logind[1448]: Removed session 10. Jan 30 13:48:29.878887 sshd[3514]: Accepted publickey for core from 10.0.0.1 port 50854 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:29.880348 sshd[3514]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:29.884118 systemd-logind[1448]: New session 11 of user core. Jan 30 13:48:29.897668 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:48:30.038421 sshd[3514]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:30.050123 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:50854.service: Deactivated successfully. Jan 30 13:48:30.052709 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:48:30.055160 systemd-logind[1448]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:48:30.062018 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:50864.service - OpenSSH per-connection server daemon (10.0.0.1:50864). Jan 30 13:48:30.063603 systemd-logind[1448]: Removed session 11. Jan 30 13:48:30.093072 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 50864 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:30.094972 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:30.099551 systemd-logind[1448]: New session 12 of user core. Jan 30 13:48:30.110663 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 30 13:48:30.222829 sshd[3527]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:30.227200 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:50864.service: Deactivated successfully. Jan 30 13:48:30.229260 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:48:30.230006 systemd-logind[1448]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:48:30.231010 systemd-logind[1448]: Removed session 12. Jan 30 13:48:35.234006 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:50870.service - OpenSSH per-connection server daemon (10.0.0.1:50870). Jan 30 13:48:35.267960 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 50870 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:35.269301 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:35.272779 systemd-logind[1448]: New session 13 of user core. Jan 30 13:48:35.279653 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:48:35.376451 sshd[3565]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:35.388371 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:50870.service: Deactivated successfully. Jan 30 13:48:35.390148 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:48:35.391590 systemd-logind[1448]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:48:35.402778 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:50882.service - OpenSSH per-connection server daemon (10.0.0.1:50882). Jan 30 13:48:35.403652 systemd-logind[1448]: Removed session 13. Jan 30 13:48:35.433145 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 50882 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:35.434449 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:35.438152 systemd-logind[1448]: New session 14 of user core. Jan 30 13:48:35.446654 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:48:35.591422 sshd[3579]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:35.602204 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:50882.service: Deactivated successfully. Jan 30 13:48:35.604040 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:48:35.605410 systemd-logind[1448]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:48:35.606713 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:50888.service - OpenSSH per-connection server daemon (10.0.0.1:50888). Jan 30 13:48:35.607579 systemd-logind[1448]: Removed session 14. Jan 30 13:48:35.641146 sshd[3592]: Accepted publickey for core from 10.0.0.1 port 50888 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:35.642473 sshd[3592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:35.646187 systemd-logind[1448]: New session 15 of user core. Jan 30 13:48:35.653644 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:48:36.371982 sshd[3592]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:36.382476 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:50888.service: Deactivated successfully. Jan 30 13:48:36.384626 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:48:36.386890 systemd-logind[1448]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:48:36.396893 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:50904.service - OpenSSH per-connection server daemon (10.0.0.1:50904). 
Jan 30 13:48:36.398123 systemd-logind[1448]: Removed session 15. Jan 30 13:48:36.429234 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 50904 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:36.431046 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:36.435130 systemd-logind[1448]: New session 16 of user core. Jan 30 13:48:36.443659 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:48:36.656948 sshd[3611]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:36.668637 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:50904.service: Deactivated successfully. Jan 30 13:48:36.670852 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:48:36.672448 systemd-logind[1448]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:48:36.679834 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:50914.service - OpenSSH per-connection server daemon (10.0.0.1:50914). Jan 30 13:48:36.680921 systemd-logind[1448]: Removed session 16. Jan 30 13:48:36.711588 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 50914 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:36.712921 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:36.716729 systemd-logind[1448]: New session 17 of user core. Jan 30 13:48:36.727649 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:48:36.945697 sshd[3625]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:36.949923 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:50914.service: Deactivated successfully. Jan 30 13:48:36.952123 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:48:36.952854 systemd-logind[1448]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:48:36.953828 systemd-logind[1448]: Removed session 17. Jan 30 13:48:41.959997 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:34128.service - OpenSSH per-connection server daemon (10.0.0.1:34128). Jan 30 13:48:41.994782 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 34128 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:41.996074 sshd[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:41.999670 systemd-logind[1448]: New session 18 of user core. Jan 30 13:48:42.006661 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:48:42.103349 sshd[3662]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:42.106672 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:34128.service: Deactivated successfully. Jan 30 13:48:42.108332 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:48:42.108903 systemd-logind[1448]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:48:42.109717 systemd-logind[1448]: Removed session 18. Jan 30 13:48:47.115573 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:34134.service - OpenSSH per-connection server daemon (10.0.0.1:34134). Jan 30 13:48:47.151643 sshd[3697]: Accepted publickey for core from 10.0.0.1 port 34134 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:47.153401 sshd[3697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:47.158066 systemd-logind[1448]: New session 19 of user core. Jan 30 13:48:47.167890 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 30 13:48:47.271438 sshd[3697]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:47.275363 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:34134.service: Deactivated successfully. Jan 30 13:48:47.277324 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:48:47.278021 systemd-logind[1448]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:48:47.279014 systemd-logind[1448]: Removed session 19. Jan 30 13:48:52.282927 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:51276.service - OpenSSH per-connection server daemon (10.0.0.1:51276). Jan 30 13:48:52.317804 sshd[3732]: Accepted publickey for core from 10.0.0.1 port 51276 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:52.319381 sshd[3732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:52.323023 systemd-logind[1448]: New session 20 of user core. Jan 30 13:48:52.332655 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:48:52.430577 sshd[3732]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:52.434512 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:51276.service: Deactivated successfully. Jan 30 13:48:52.436871 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:48:52.437546 systemd-logind[1448]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:48:52.438422 systemd-logind[1448]: Removed session 20. Jan 30 13:48:57.451106 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:42186.service - OpenSSH per-connection server daemon (10.0.0.1:42186). Jan 30 13:48:57.486277 sshd[3770]: Accepted publickey for core from 10.0.0.1 port 42186 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:48:57.487702 sshd[3770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:57.491567 systemd-logind[1448]: New session 21 of user core. Jan 30 13:48:57.501731 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:48:57.609803 sshd[3770]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:57.620383 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:42186.service: Deactivated successfully. Jan 30 13:48:57.622818 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:48:57.623648 systemd-logind[1448]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:48:57.625611 systemd-logind[1448]: Removed session 21.