Mar 19 11:46:17.873882 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Mar 19 10:13:43 -00 2025 Mar 19 11:46:17.873906 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:17.873917 kernel: BIOS-provided physical RAM map: Mar 19 11:46:17.873924 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 19 11:46:17.873930 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 19 11:46:17.873937 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 19 11:46:17.873944 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 19 11:46:17.873951 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 19 11:46:17.873957 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 19 11:46:17.873966 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 19 11:46:17.873972 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 19 11:46:17.873979 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 19 11:46:17.873985 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 19 11:46:17.873992 kernel: NX (Execute Disable) protection: active Mar 19 11:46:17.873999 kernel: APIC: Static calls initialized Mar 19 11:46:17.874009 kernel: SMBIOS 2.8 present. 
Mar 19 11:46:17.874016 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 19 11:46:17.874023 kernel: Hypervisor detected: KVM Mar 19 11:46:17.874030 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 19 11:46:17.874037 kernel: kvm-clock: using sched offset of 2367263474 cycles Mar 19 11:46:17.874044 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 19 11:46:17.874052 kernel: tsc: Detected 2794.748 MHz processor Mar 19 11:46:17.874059 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 19 11:46:17.874067 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 19 11:46:17.874074 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 19 11:46:17.874083 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 19 11:46:17.874091 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 19 11:46:17.874098 kernel: Using GB pages for direct mapping Mar 19 11:46:17.874105 kernel: ACPI: Early table checksum verification disabled Mar 19 11:46:17.874112 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 19 11:46:17.874119 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874126 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874133 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874140 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 19 11:46:17.874150 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874157 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874164 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874171 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 19 11:46:17.874178 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Mar 19 11:46:17.874185 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Mar 19 11:46:17.874196 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 19 11:46:17.874205 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Mar 19 11:46:17.874213 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Mar 19 11:46:17.874220 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Mar 19 11:46:17.874227 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Mar 19 11:46:17.874234 kernel: No NUMA configuration found Mar 19 11:46:17.874242 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 19 11:46:17.874262 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 19 11:46:17.874273 kernel: Zone ranges: Mar 19 11:46:17.874280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 19 11:46:17.874287 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 19 11:46:17.874295 kernel: Normal empty Mar 19 11:46:17.874302 kernel: Movable zone start for each node Mar 19 11:46:17.874309 kernel: Early memory node ranges Mar 19 11:46:17.874316 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 19 11:46:17.874324 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 19 11:46:17.874331 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Mar 19 11:46:17.874340 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 19 11:46:17.874348 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 19 11:46:17.874355 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 19 11:46:17.874362 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 19 11:46:17.874370 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 19 11:46:17.874377 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 19 11:46:17.874384 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 19 11:46:17.874391 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 19 11:46:17.874399 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 19 11:46:17.874408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 19 11:46:17.874415 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 19 11:46:17.874423 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 19 11:46:17.874430 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 19 11:46:17.874437 kernel: TSC deadline timer available Mar 19 11:46:17.874444 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 19 11:46:17.874452 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 19 11:46:17.874459 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 19 11:46:17.874466 kernel: kvm-guest: setup PV sched yield Mar 19 11:46:17.874473 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 19 11:46:17.874483 kernel: Booting paravirtualized kernel on KVM Mar 19 11:46:17.874490 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 19 11:46:17.874498 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 19 11:46:17.874505 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 19 11:46:17.874513 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 19 11:46:17.874520 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 19 11:46:17.874527 kernel: kvm-guest: PV spinlocks enabled Mar 19 11:46:17.874534 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 19 11:46:17.874543 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:17.874553 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 19 11:46:17.874560 kernel: random: crng init done Mar 19 11:46:17.874568 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 19 11:46:17.874575 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 19 11:46:17.874582 kernel: Fallback order for Node 0: 0 Mar 19 11:46:17.874590 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Mar 19 11:46:17.874597 kernel: Policy zone: DMA32 Mar 19 11:46:17.874604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 19 11:46:17.874614 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43480K init, 1592K bss, 138948K reserved, 0K cma-reserved) Mar 19 11:46:17.874621 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 19 11:46:17.874629 kernel: ftrace: allocating 37910 entries in 149 pages Mar 19 11:46:17.874636 kernel: ftrace: allocated 149 pages with 4 groups Mar 19 11:46:17.874643 kernel: Dynamic Preempt: voluntary Mar 19 11:46:17.874650 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 19 11:46:17.874658 kernel: rcu: RCU event tracing is enabled. Mar 19 11:46:17.874666 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 19 11:46:17.874673 kernel: Trampoline variant of Tasks RCU enabled. Mar 19 11:46:17.874683 kernel: Rude variant of Tasks RCU enabled. Mar 19 11:46:17.874690 kernel: Tracing variant of Tasks RCU enabled. Mar 19 11:46:17.874698 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 19 11:46:17.874705 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 19 11:46:17.874712 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 19 11:46:17.874719 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 19 11:46:17.874727 kernel: Console: colour VGA+ 80x25 Mar 19 11:46:17.874734 kernel: printk: console [ttyS0] enabled Mar 19 11:46:17.874741 kernel: ACPI: Core revision 20230628 Mar 19 11:46:17.874751 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 19 11:46:17.874758 kernel: APIC: Switch to symmetric I/O mode setup Mar 19 11:46:17.874765 kernel: x2apic enabled Mar 19 11:46:17.874773 kernel: APIC: Switched APIC routing to: physical x2apic Mar 19 11:46:17.874780 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 19 11:46:17.874793 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 19 11:46:17.874801 kernel: kvm-guest: setup PV IPIs Mar 19 11:46:17.874818 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 19 11:46:17.874826 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 19 11:46:17.874833 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Mar 19 11:46:17.874842 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 19 11:46:17.874849 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 19 11:46:17.874859 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 19 11:46:17.874867 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 19 11:46:17.874874 kernel: Spectre V2 : Mitigation: Retpolines Mar 19 11:46:17.874882 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 19 11:46:17.874892 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 19 11:46:17.874900 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 19 11:46:17.874907 kernel: RETBleed: Mitigation: untrained return thunk Mar 19 11:46:17.874915 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 19 11:46:17.874922 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 19 11:46:17.874930 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 19 11:46:17.874938 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 19 11:46:17.874946 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 19 11:46:17.874954 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 19 11:46:17.874964 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 19 11:46:17.874971 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 19 11:46:17.874979 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 19 11:46:17.874987 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Mar 19 11:46:17.874994 kernel: Freeing SMP alternatives memory: 32K Mar 19 11:46:17.875002 kernel: pid_max: default: 32768 minimum: 301 Mar 19 11:46:17.875009 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 19 11:46:17.875017 kernel: landlock: Up and running. Mar 19 11:46:17.875025 kernel: SELinux: Initializing. Mar 19 11:46:17.875034 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:46:17.875042 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 19 11:46:17.875050 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 19 11:46:17.875057 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 19 11:46:17.875065 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 19 11:46:17.875073 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 19 11:46:17.875081 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 19 11:46:17.875088 kernel: ... version: 0 Mar 19 11:46:17.875098 kernel: ... bit width: 48 Mar 19 11:46:17.875106 kernel: ... generic registers: 6 Mar 19 11:46:17.875113 kernel: ... value mask: 0000ffffffffffff Mar 19 11:46:17.875121 kernel: ... max period: 00007fffffffffff Mar 19 11:46:17.875128 kernel: ... fixed-purpose events: 0 Mar 19 11:46:17.875136 kernel: ... 
event mask: 000000000000003f Mar 19 11:46:17.875143 kernel: signal: max sigframe size: 1776 Mar 19 11:46:17.875151 kernel: rcu: Hierarchical SRCU implementation. Mar 19 11:46:17.875159 kernel: rcu: Max phase no-delay instances is 400. Mar 19 11:46:17.875166 kernel: smp: Bringing up secondary CPUs ... Mar 19 11:46:17.875176 kernel: smpboot: x86: Booting SMP configuration: Mar 19 11:46:17.875183 kernel: .... node #0, CPUs: #1 #2 #3 Mar 19 11:46:17.875191 kernel: smp: Brought up 1 node, 4 CPUs Mar 19 11:46:17.875198 kernel: smpboot: Max logical packages: 1 Mar 19 11:46:17.875206 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Mar 19 11:46:17.875214 kernel: devtmpfs: initialized Mar 19 11:46:17.875221 kernel: x86/mm: Memory block size: 128MB Mar 19 11:46:17.875229 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 19 11:46:17.875236 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 19 11:46:17.875247 kernel: pinctrl core: initialized pinctrl subsystem Mar 19 11:46:17.875267 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 19 11:46:17.875277 kernel: audit: initializing netlink subsys (disabled) Mar 19 11:46:17.875287 kernel: audit: type=2000 audit(1742384777.121:1): state=initialized audit_enabled=0 res=1 Mar 19 11:46:17.875295 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 19 11:46:17.875302 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 19 11:46:17.875310 kernel: cpuidle: using governor menu Mar 19 11:46:17.875318 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 19 11:46:17.875325 kernel: dca service started, version 1.12.1 Mar 19 11:46:17.875336 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 19 11:46:17.875344 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 19 11:46:17.875352 kernel: PCI: Using configuration type 1 for base access Mar 19 11:46:17.875359 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 19 11:46:17.875367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 19 11:46:17.875374 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 19 11:46:17.875382 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 19 11:46:17.875390 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 19 11:46:17.875397 kernel: ACPI: Added _OSI(Module Device) Mar 19 11:46:17.875407 kernel: ACPI: Added _OSI(Processor Device) Mar 19 11:46:17.875415 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 19 11:46:17.875422 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 19 11:46:17.875430 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 19 11:46:17.875437 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 19 11:46:17.875445 kernel: ACPI: Interpreter enabled Mar 19 11:46:17.875452 kernel: ACPI: PM: (supports S0 S3 S5) Mar 19 11:46:17.875460 kernel: ACPI: Using IOAPIC for interrupt routing Mar 19 11:46:17.875468 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 19 11:46:17.875478 kernel: PCI: Using E820 reservations for host bridge windows Mar 19 11:46:17.875486 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 19 11:46:17.875493 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 19 11:46:17.875676 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 19 11:46:17.875828 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 19 11:46:17.875957 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 19 11:46:17.875967 kernel: PCI host bridge to bus 0000:00 Mar 19 11:46:17.876103 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 19 11:46:17.876217 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 19 11:46:17.876345 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 19 11:46:17.876459 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 19 11:46:17.876570 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 19 11:46:17.876683 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 19 11:46:17.876804 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 19 11:46:17.876950 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 19 11:46:17.877084 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 19 11:46:17.877207 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 19 11:46:17.877346 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 19 11:46:17.877472 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 19 11:46:17.877593 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 19 11:46:17.877732 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 19 11:46:17.877865 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 19 11:46:17.877988 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 19 11:46:17.878110 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 19 11:46:17.878241 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 19 11:46:17.878484 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 19 11:46:17.878607 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 19 
11:46:17.878752 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 19 11:46:17.878910 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 19 11:46:17.879037 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 19 11:46:17.879160 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 19 11:46:17.879298 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 19 11:46:17.879421 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 19 11:46:17.879551 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 19 11:46:17.879679 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 19 11:46:17.879880 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 19 11:46:17.880048 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 19 11:46:17.880172 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 19 11:46:17.880320 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 19 11:46:17.880444 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 19 11:46:17.880455 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 19 11:46:17.880467 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 19 11:46:17.880474 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 19 11:46:17.880482 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 19 11:46:17.880490 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 19 11:46:17.880497 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 19 11:46:17.880505 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 19 11:46:17.880512 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 19 11:46:17.880520 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 19 11:46:17.880528 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 19 11:46:17.880538 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 19 11:46:17.880545 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 19 11:46:17.880553 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 19 11:46:17.880560 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 19 11:46:17.880568 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 19 11:46:17.880576 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 19 11:46:17.880583 kernel: iommu: Default domain type: Translated Mar 19 11:46:17.880591 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 19 11:46:17.880598 kernel: PCI: Using ACPI for IRQ routing Mar 19 11:46:17.880609 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 19 11:46:17.880616 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 19 11:46:17.880624 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 19 11:46:17.880747 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 19 11:46:17.880878 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 19 11:46:17.881001 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 19 11:46:17.881011 kernel: vgaarb: loaded Mar 19 11:46:17.881019 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 19 11:46:17.881035 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 19 11:46:17.881051 kernel: clocksource: Switched to clocksource kvm-clock Mar 19 11:46:17.881071 kernel: VFS: Disk quotas dquot_6.6.0 Mar 19 
11:46:17.881092 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 19 11:46:17.881109 kernel: pnp: PnP ACPI init Mar 19 11:46:17.881345 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 19 11:46:17.881358 kernel: pnp: PnP ACPI: found 6 devices Mar 19 11:46:17.881366 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 19 11:46:17.881377 kernel: NET: Registered PF_INET protocol family Mar 19 11:46:17.881385 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 19 11:46:17.881393 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 19 11:46:17.881403 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 19 11:46:17.881414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 19 11:46:17.881425 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 19 11:46:17.881435 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 19 11:46:17.881446 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:46:17.881464 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 19 11:46:17.881476 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 19 11:46:17.881491 kernel: NET: Registered PF_XDP protocol family Mar 19 11:46:17.881618 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 19 11:46:17.881732 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 19 11:46:17.881855 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 19 11:46:17.881972 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 19 11:46:17.882112 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 19 11:46:17.882224 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 19 11:46:17.882238 kernel: PCI: CLS 0 bytes, default 64 Mar 19 11:46:17.882246 kernel: Initialise system trusted keyrings Mar 19 11:46:17.882317 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 19 11:46:17.882325 kernel: Key type asymmetric registered Mar 19 11:46:17.882333 kernel: Asymmetric key parser 'x509' registered Mar 19 11:46:17.882340 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 19 11:46:17.882348 kernel: io scheduler mq-deadline registered Mar 19 11:46:17.882356 kernel: io scheduler kyber registered Mar 19 11:46:17.882363 kernel: io scheduler bfq registered Mar 19 11:46:17.882374 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 19 11:46:17.882382 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 19 11:46:17.882390 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 19 11:46:17.882398 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 19 11:46:17.882405 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 19 11:46:17.882413 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 19 11:46:17.882421 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 19 11:46:17.882428 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 19 11:46:17.882436 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 19 11:46:17.882568 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 19 11:46:17.882579 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 19 11:46:17.882749 kernel: 
rtc_cmos 00:04: registered as rtc0 Mar 19 11:46:17.882883 kernel: rtc_cmos 00:04: setting system clock to 2025-03-19T11:46:17 UTC (1742384777) Mar 19 11:46:17.882997 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 19 11:46:17.883007 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 19 11:46:17.883015 kernel: NET: Registered PF_INET6 protocol family Mar 19 11:46:17.883023 kernel: Segment Routing with IPv6 Mar 19 11:46:17.883035 kernel: In-situ OAM (IOAM) with IPv6 Mar 19 11:46:17.883043 kernel: NET: Registered PF_PACKET protocol family Mar 19 11:46:17.883051 kernel: Key type dns_resolver registered Mar 19 11:46:17.883058 kernel: IPI shorthand broadcast: enabled Mar 19 11:46:17.883066 kernel: sched_clock: Marking stable (604003614, 113704735)->(768990582, -51282233) Mar 19 11:46:17.883074 kernel: registered taskstats version 1 Mar 19 11:46:17.883081 kernel: Loading compiled-in X.509 certificates Mar 19 11:46:17.883089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: ea8d6696bd19c98b32173a761210456cdad6b56b' Mar 19 11:46:17.883097 kernel: Key type .fscrypt registered Mar 19 11:46:17.883107 kernel: Key type fscrypt-provisioning registered Mar 19 11:46:17.883114 kernel: ima: No TPM chip found, activating TPM-bypass! Mar 19 11:46:17.883122 kernel: ima: Allocated hash algorithm: sha1 Mar 19 11:46:17.883130 kernel: ima: No architecture policies found Mar 19 11:46:17.883137 kernel: clk: Disabling unused clocks Mar 19 11:46:17.883145 kernel: Freeing unused kernel image (initmem) memory: 43480K Mar 19 11:46:17.883152 kernel: Write protecting the kernel read-only data: 38912k Mar 19 11:46:17.883160 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 19 11:46:17.883167 kernel: Run /init as init process Mar 19 11:46:17.883177 kernel: with arguments: Mar 19 11:46:17.883185 kernel: /init Mar 19 11:46:17.883192 kernel: with environment: Mar 19 11:46:17.883200 kernel: HOME=/ Mar 19 11:46:17.883207 kernel: TERM=linux Mar 19 11:46:17.883215 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 19 11:46:17.883224 systemd[1]: Successfully made /usr/ read-only. Mar 19 11:46:17.883234 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:46:17.883246 systemd[1]: Detected virtualization kvm. Mar 19 11:46:17.883267 systemd[1]: Detected architecture x86-64. Mar 19 11:46:17.883275 systemd[1]: Running in initrd. Mar 19 11:46:17.883283 systemd[1]: No hostname configured, using default hostname. Mar 19 11:46:17.883292 systemd[1]: Hostname set to <localhost>. Mar 19 11:46:17.883300 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:46:17.883308 systemd[1]: Queued start job for default target initrd.target. Mar 19 11:46:17.883316 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:17.883327 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:17.883348 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 19 11:46:17.883359 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Mar 19 11:46:17.883367 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 19 11:46:17.883377 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 19 11:46:17.883389 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 19 11:46:17.883397 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 19 11:46:17.883406 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:17.883414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:46:17.883422 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:46:17.883431 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:46:17.883439 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:46:17.883447 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:46:17.883458 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:46:17.883466 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:46:17.883475 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 19 11:46:17.883483 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 19 11:46:17.883492 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:17.883500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:17.883508 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:46:17.883517 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:46:17.883525 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 19 11:46:17.883537 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:46:17.883545 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 19 11:46:17.883553 systemd[1]: Starting systemd-fsck-usr.service... Mar 19 11:46:17.883561 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:46:17.883570 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:46:17.883594 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:17.883609 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 19 11:46:17.883621 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:17.883632 systemd[1]: Finished systemd-fsck-usr.service. Mar 19 11:46:17.883641 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 19 11:46:17.883677 systemd-journald[194]: Collecting audit messages is disabled. Mar 19 11:46:17.883696 systemd-journald[194]: Journal started Mar 19 11:46:17.883718 systemd-journald[194]: Runtime Journal (/run/log/journal/870144a0b4324827868a6f57a9e41b70) is 6M, max 48.4M, 42.3M free. Mar 19 11:46:17.881506 systemd-modules-load[195]: Inserted module 'overlay' Mar 19 11:46:17.907508 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:46:17.910273 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Mar 19 11:46:17.912165 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 19 11:46:17.913111 kernel: Bridge firewalling registered Mar 19 11:46:17.916736 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:17.918129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:17.921697 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 19 11:46:17.929407 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:17.930123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:46:17.932923 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:46:17.935411 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:46:17.941295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:17.946202 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:17.948045 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:17.960449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:46:17.960706 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:17.964994 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 19 11:46:17.981530 dracut-cmdline[233]: dracut-dracut-053 Mar 19 11:46:17.984522 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=08c32ef14ad6302a92b1d281c48443f5b56d59f0d37d38df628e5b6f012967bc Mar 19 11:46:18.000477 systemd-resolved[229]: Positive Trust Anchors: Mar 19 11:46:18.000490 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:46:18.000520 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:46:18.003019 systemd-resolved[229]: Defaulting to hostname 'linux'. Mar 19 11:46:18.004063 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:46:18.010906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:18.094298 kernel: SCSI subsystem initialized Mar 19 11:46:18.103293 kernel: Loading iSCSI transport class v2.0-870. Mar 19 11:46:18.114279 kernel: iscsi: registered transport (tcp) Mar 19 11:46:18.137304 kernel: iscsi: registered transport (qla4xxx) Mar 19 11:46:18.137382 kernel: QLogic iSCSI HBA Driver Mar 19 11:46:18.192837 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Mar 19 11:46:18.203431 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 19 11:46:18.230881 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 19 11:46:18.230941 kernel: device-mapper: uevent: version 1.0.3 Mar 19 11:46:18.232018 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 19 11:46:18.275314 kernel: raid6: avx2x4 gen() 25910 MB/s Mar 19 11:46:18.292302 kernel: raid6: avx2x2 gen() 27080 MB/s Mar 19 11:46:18.309607 kernel: raid6: avx2x1 gen() 22565 MB/s Mar 19 11:46:18.309675 kernel: raid6: using algorithm avx2x2 gen() 27080 MB/s Mar 19 11:46:18.327605 kernel: raid6: .... xor() 15344 MB/s, rmw enabled Mar 19 11:46:18.327697 kernel: raid6: using avx2x2 recovery algorithm Mar 19 11:46:18.349299 kernel: xor: automatically using best checksumming function avx Mar 19 11:46:18.502298 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 19 11:46:18.516984 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:46:18.526422 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:18.546228 systemd-udevd[415]: Using default interface naming scheme 'v255'. Mar 19 11:46:18.551802 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:18.563506 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 19 11:46:18.580133 dracut-pre-trigger[423]: rd.md=0: removing MD RAID activation Mar 19 11:46:18.617519 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:46:18.634424 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:46:18.698729 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:18.709445 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 19 11:46:18.721519 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 19 11:46:18.724520 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:46:18.727268 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:18.730235 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:46:18.740443 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 19 11:46:18.743348 kernel: cryptd: max_cpu_qlen set to 1000 Mar 19 11:46:18.751276 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 19 11:46:18.776191 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 19 11:46:18.776384 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 19 11:46:18.776398 kernel: GPT:9289727 != 19775487 Mar 19 11:46:18.776409 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 19 11:46:18.776419 kernel: GPT:9289727 != 19775487 Mar 19 11:46:18.776429 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 19 11:46:18.776440 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:46:18.776450 kernel: AVX2 version of gcm_enc/dec engaged. Mar 19 11:46:18.776460 kernel: libata version 3.00 loaded. Mar 19 11:46:18.776475 kernel: AES CTR mode by8 optimization enabled Mar 19 11:46:18.757443 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:46:18.760347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 19 11:46:18.760458 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:18.761891 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:18.763092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:18.786511 kernel: ahci 0000:00:1f.2: version 3.0 Mar 19 11:46:18.813613 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 19 11:46:18.813629 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 19 11:46:18.813797 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 19 11:46:18.813944 kernel: scsi host0: ahci Mar 19 11:46:18.814100 kernel: scsi host1: ahci Mar 19 11:46:18.814277 kernel: scsi host2: ahci Mar 19 11:46:18.814424 kernel: scsi host3: ahci Mar 19 11:46:18.814567 kernel: scsi host4: ahci Mar 19 11:46:18.814709 kernel: BTRFS: device fsid 8d57424d-5abc-4888-810f-658d040a58e4 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (476) Mar 19 11:46:18.814721 kernel: scsi host5: ahci Mar 19 11:46:18.814877 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Mar 19 11:46:18.814888 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Mar 19 11:46:18.814902 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Mar 19 11:46:18.814912 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Mar 19 11:46:18.814922 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Mar 19 11:46:18.814933 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Mar 19 11:46:18.814943 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (480) Mar 19 11:46:18.763221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:18.767075 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:18.783228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:18.827825 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 19 11:46:18.852671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:18.863629 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 19 11:46:18.872266 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 19 11:46:18.873667 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 19 11:46:18.893029 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:46:18.904501 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 19 11:46:18.907745 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 19 11:46:18.915924 disk-uuid[561]: Primary Header is updated. Mar 19 11:46:18.915924 disk-uuid[561]: Secondary Entries is updated. Mar 19 11:46:18.915924 disk-uuid[561]: Secondary Header is updated. Mar 19 11:46:18.920273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:46:18.924273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:46:18.928117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 19 11:46:19.120514 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 19 11:46:19.120590 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 19 11:46:19.120602 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 19 11:46:19.121268 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 19 11:46:19.123203 kernel: ata3.00: applying bridge limits Mar 19 11:46:19.123217 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 19 11:46:19.123283 kernel: ata3.00: configured for UDMA/100 Mar 19 11:46:19.124283 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 19 11:46:19.129278 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 19 11:46:19.129308 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 19 11:46:19.178300 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 19 11:46:19.192167 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 19 11:46:19.192188 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 19 11:46:19.926285 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 19 11:46:19.926818 disk-uuid[566]: The operation has completed successfully. Mar 19 11:46:19.962270 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 19 11:46:19.962408 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 19 11:46:20.010434 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 19 11:46:20.016955 sh[595]: Success Mar 19 11:46:20.058304 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 19 11:46:20.096975 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 19 11:46:20.110132 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 19 11:46:20.115087 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 19 11:46:20.127582 kernel: BTRFS info (device dm-0): first mount of filesystem 8d57424d-5abc-4888-810f-658d040a58e4 Mar 19 11:46:20.127633 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:20.127659 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 19 11:46:20.128676 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 19 11:46:20.129479 kernel: BTRFS info (device dm-0): using free space tree Mar 19 11:46:20.134985 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 19 11:46:20.135922 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 19 11:46:20.152566 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 19 11:46:20.154674 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 19 11:46:20.170014 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:20.170086 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:20.170098 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:46:20.175510 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:46:20.185326 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 19 11:46:20.186926 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:20.196609 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Mar 19 11:46:20.205423 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 19 11:46:20.263320 ignition[698]: Ignition 2.20.0 Mar 19 11:46:20.263332 ignition[698]: Stage: fetch-offline Mar 19 11:46:20.263371 ignition[698]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:20.263382 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:20.263474 ignition[698]: parsed url from cmdline: "" Mar 19 11:46:20.263479 ignition[698]: no config URL provided Mar 19 11:46:20.263486 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Mar 19 11:46:20.263499 ignition[698]: no config at "/usr/lib/ignition/user.ign" Mar 19 11:46:20.263527 ignition[698]: op(1): [started] loading QEMU firmware config module Mar 19 11:46:20.263533 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 19 11:46:20.277070 ignition[698]: op(1): [finished] loading QEMU firmware config module Mar 19 11:46:20.282045 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:46:20.294451 ignition[698]: parsing config with SHA512: 19a4c495591f93ad1de09553f2cb86c04ef14728b0f69eb6e62159bc1d5c123dffff772294b99e11eed13ded584b00263cfd4180b5ff96281265abe5f5c58c22 Mar 19 11:46:20.296676 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:46:20.299341 unknown[698]: fetched base config from "system" Mar 19 11:46:20.299554 unknown[698]: fetched user config from "qemu" Mar 19 11:46:20.299924 ignition[698]: fetch-offline: fetch-offline passed Mar 19 11:46:20.300005 ignition[698]: Ignition finished successfully Mar 19 11:46:20.301826 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:46:20.326205 systemd-networkd[785]: lo: Link UP Mar 19 11:46:20.326215 systemd-networkd[785]: lo: Gained carrier Mar 19 11:46:20.328304 systemd-networkd[785]: Enumeration completed Mar 19 11:46:20.328745 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:20.328751 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:46:20.329539 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:46:20.329815 systemd-networkd[785]: eth0: Link UP Mar 19 11:46:20.329820 systemd-networkd[785]: eth0: Gained carrier Mar 19 11:46:20.329828 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:20.335231 systemd[1]: Reached target network.target - Network. Mar 19 11:46:20.340056 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 19 11:46:20.356528 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Mar 19 11:46:20.357718 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:46:20.377048 ignition[789]: Ignition 2.20.0 Mar 19 11:46:20.377060 ignition[789]: Stage: kargs Mar 19 11:46:20.377226 ignition[789]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:20.377238 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:20.378022 ignition[789]: kargs: kargs passed Mar 19 11:46:20.378066 ignition[789]: Ignition finished successfully Mar 19 11:46:20.384460 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 19 11:46:20.395587 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 19 11:46:20.407161 ignition[798]: Ignition 2.20.0 Mar 19 11:46:20.407173 ignition[798]: Stage: disks Mar 19 11:46:20.407354 ignition[798]: no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:20.407366 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:20.408304 ignition[798]: disks: disks passed Mar 19 11:46:20.408352 ignition[798]: Ignition finished successfully Mar 19 11:46:20.413924 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 19 11:46:20.416090 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 19 11:46:20.416174 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 19 11:46:20.420529 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:46:20.420602 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:46:20.422505 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:46:20.437484 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 19 11:46:20.472779 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 19 11:46:20.547563 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 19 11:46:21.133375 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 19 11:46:21.221283 kernel: EXT4-fs (vda9): mounted filesystem 303a73dd-e104-408b-9302-bf91b04ba1ca r/w with ordered data mode. Quota mode: none. Mar 19 11:46:21.222400 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 19 11:46:21.223118 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 19 11:46:21.232338 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:46:21.234487 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 19 11:46:21.234830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 19 11:46:21.234868 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 19 11:46:21.246145 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817) Mar 19 11:46:21.246174 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:21.246189 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:21.246203 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:46:21.234892 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Mar 19 11:46:21.248338 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:46:21.249750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 19 11:46:21.268534 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 19 11:46:21.269898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 19 11:46:21.306675 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory Mar 19 11:46:21.312139 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory Mar 19 11:46:21.316144 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory Mar 19 11:46:21.319770 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory Mar 19 11:46:21.393385 systemd-networkd[785]: eth0: Gained IPv6LL Mar 19 11:46:21.412399 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 19 11:46:21.420461 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 19 11:46:21.424396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 19 11:46:21.428279 kernel: BTRFS info (device vda6): last unmount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:21.445941 ignition[929]: INFO : Ignition 2.20.0 Mar 19 11:46:21.445941 ignition[929]: INFO : Stage: mount Mar 19 11:46:21.447754 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:21.447754 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:21.447754 ignition[929]: INFO : mount: mount passed Mar 19 11:46:21.447754 ignition[929]: INFO : Ignition finished successfully Mar 19 11:46:21.448979 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 19 11:46:21.457383 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 19 11:46:21.459480 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 19 11:46:22.125765 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 19 11:46:22.139390 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 19 11:46:22.158280 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943) Mar 19 11:46:22.162671 kernel: BTRFS info (device vda6): first mount of filesystem 3c2c2d54-a06e-4f36-8d13-ab30a5d0eab5 Mar 19 11:46:22.162698 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 19 11:46:22.162709 kernel: BTRFS info (device vda6): using free space tree Mar 19 11:46:22.166271 kernel: BTRFS info (device vda6): auto enabling async discard Mar 19 11:46:22.166984 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 19 11:46:22.192944 ignition[960]: INFO : Ignition 2.20.0 Mar 19 11:46:22.192944 ignition[960]: INFO : Stage: files Mar 19 11:46:22.194776 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:22.194776 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:22.194776 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Mar 19 11:46:22.194776 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 19 11:46:22.194776 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 19 11:46:22.200829 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 19 11:46:22.200829 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 19 11:46:22.203793 unknown[960]: wrote ssh authorized keys file for user: core Mar 19 11:46:22.204966 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 19 11:46:22.207291 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 19 11:46:22.209212 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Mar 19 11:46:22.245840 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Mar 19 11:46:22.330907 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:46:22.333308 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Mar 19 11:46:22.686627 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Mar 19 11:46:23.079914 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Mar 19 11:46:23.079914 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Mar 19 11:46:23.084492 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Mar 19 11:46:23.102604 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:46:23.102604 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 19 11:46:23.102604 ignition[960]: INFO : files: files passed Mar 19 11:46:23.102604 ignition[960]: INFO : Ignition finished successfully Mar 19 11:46:23.104503 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 19 11:46:23.124433 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 19 11:46:23.126379 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 19 11:46:23.129812 systemd[1]: ignition-quench.service: Deactivated successfully. 
Mar 19 11:46:23.130868 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 19 11:46:23.136421 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory Mar 19 11:46:23.140599 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:23.140599 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:23.143835 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 19 11:46:23.147697 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:46:23.150693 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 19 11:46:23.166415 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 19 11:46:23.192608 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 19 11:46:23.193739 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 19 11:46:23.196520 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 19 11:46:23.198751 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 19 11:46:23.200952 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Mar 19 11:46:23.212510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 19 11:46:23.227878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:46:23.235577 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 19 11:46:23.245140 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:23.247548 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:23.249958 systemd[1]: Stopped target timers.target - Timer Units. Mar 19 11:46:23.251838 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 19 11:46:23.252865 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 19 11:46:23.255714 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 19 11:46:23.257845 systemd[1]: Stopped target basic.target - Basic System. Mar 19 11:46:23.259741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 19 11:46:23.262151 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 19 11:46:23.264588 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 19 11:46:23.267152 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 19 11:46:23.269339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 19 11:46:23.271869 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 19 11:46:23.273988 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 19 11:46:23.276049 systemd[1]: Stopped target swap.target - Swaps. Mar 19 11:46:23.277735 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 19 11:46:23.278799 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 19 11:46:23.281281 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 19 11:46:23.283788 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:23.286579 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 19 11:46:23.287658 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:23.290597 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 19 11:46:23.291677 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 19 11:46:23.294134 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 19 11:46:23.295288 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 19 11:46:23.297723 systemd[1]: Stopped target paths.target - Path Units. Mar 19 11:46:23.299618 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 19 11:46:23.300760 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:23.303713 systemd[1]: Stopped target slices.target - Slice Units. Mar 19 11:46:23.305604 systemd[1]: Stopped target sockets.target - Socket Units. Mar 19 11:46:23.307621 systemd[1]: iscsid.socket: Deactivated successfully. Mar 19 11:46:23.308629 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 19 11:46:23.310854 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 19 11:46:23.311843 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 19 11:46:23.314062 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 19 11:46:23.315293 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 19 11:46:23.317946 systemd[1]: ignition-files.service: Deactivated successfully. Mar 19 11:46:23.318969 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 19 11:46:23.335474 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 19 11:46:23.337505 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 19 11:46:23.338796 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:23.342930 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 19 11:46:23.345120 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 19 11:46:23.346514 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:23.349309 ignition[1016]: INFO : Ignition 2.20.0 Mar 19 11:46:23.350513 ignition[1016]: INFO : Stage: umount Mar 19 11:46:23.350513 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 19 11:46:23.350513 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 19 11:46:23.349455 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 19 11:46:23.350396 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 19 11:46:23.351846 ignition[1016]: INFO : umount: umount passed Mar 19 11:46:23.351846 ignition[1016]: INFO : Ignition finished successfully Mar 19 11:46:23.360027 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 19 11:46:23.360154 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 19 11:46:23.366404 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 19 11:46:23.367819 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 19 11:46:23.371695 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 19 11:46:23.373905 systemd[1]: Stopped target network.target - Network. Mar 19 11:46:23.377022 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 19 11:46:23.378442 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 19 11:46:23.381476 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 19 11:46:23.381554 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 19 11:46:23.385142 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 19 11:46:23.386283 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 19 11:46:23.388759 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 19 11:46:23.388823 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 19 11:46:23.393087 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 19 11:46:23.395701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 19 11:46:23.404176 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 19 11:46:23.405534 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 19 11:46:23.411086 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 19 11:46:23.412983 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 19 11:46:23.414240 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 19 11:46:23.417989 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 19 11:46:23.420004 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 19 11:46:23.420960 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:23.437425 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 19 11:46:23.439809 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 19 11:46:23.441067 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 19 11:46:23.444225 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 19 11:46:23.444306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:23.448122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 19 11:46:23.449376 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:23.452001 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 19 11:46:23.453263 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:23.456361 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:23.460840 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 19 11:46:23.460920 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:23.468531 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 19 11:46:23.468725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:23.473438 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 19 11:46:23.473521 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:23.473801 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 19 11:46:23.473839 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 19 11:46:23.474095 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 19 11:46:23.474144 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 19 11:46:23.475129 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 19 11:46:23.475178 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 19 11:46:23.475959 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 19 11:46:23.476007 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 19 11:46:23.486623 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 19 11:46:23.486767 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 19 11:46:23.486819 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:23.491160 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 19 11:46:23.491210 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:23.496325 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 19 11:46:23.496391 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 19 11:46:23.500778 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 19 11:46:23.500907 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 19 11:46:23.502113 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 19 11:46:23.502216 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 19 11:46:23.570852 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 19 11:46:23.571033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 19 11:46:23.572498 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 19 11:46:23.575476 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 19 11:46:23.575535 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 19 11:46:23.597518 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 19 11:46:23.607063 systemd[1]: Switching root. Mar 19 11:46:23.637180 systemd-journald[194]: Journal stopped Mar 19 11:46:25.012348 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Mar 19 11:46:25.012418 kernel: SELinux: policy capability network_peer_controls=1 Mar 19 11:46:25.012438 kernel: SELinux: policy capability open_perms=1 Mar 19 11:46:25.012455 kernel: SELinux: policy capability extended_socket_class=1 Mar 19 11:46:25.012472 kernel: SELinux: policy capability always_check_network=0 Mar 19 11:46:25.012484 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 19 11:46:25.012497 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 19 11:46:25.012508 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 19 11:46:25.012526 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 19 11:46:25.012538 kernel: audit: type=1403 audit(1742384784.123:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 19 11:46:25.012556 systemd[1]: Successfully loaded SELinux policy in 42.087ms. Mar 19 11:46:25.012584 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.038ms. 
Mar 19 11:46:25.012597 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 19 11:46:25.012622 systemd[1]: Detected virtualization kvm. Mar 19 11:46:25.012634 systemd[1]: Detected architecture x86-64. Mar 19 11:46:25.012648 systemd[1]: Detected first boot. Mar 19 11:46:25.012674 systemd[1]: Initializing machine ID from VM UUID. Mar 19 11:46:25.012687 zram_generator::config[1064]: No configuration found. Mar 19 11:46:25.012700 kernel: Guest personality initialized and is inactive Mar 19 11:46:25.012712 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Mar 19 11:46:25.012723 kernel: Initialized host personality Mar 19 11:46:25.012737 kernel: NET: Registered PF_VSOCK protocol family Mar 19 11:46:25.012753 systemd[1]: Populated /etc with preset unit settings. Mar 19 11:46:25.012767 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 19 11:46:25.012779 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 19 11:46:25.012791 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 19 11:46:25.012803 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 19 11:46:25.012815 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 19 11:46:25.012827 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 19 11:46:25.012838 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 19 11:46:25.012852 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 19 11:46:25.012864 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 19 11:46:25.012876 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 19 11:46:25.012889 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 19 11:46:25.012901 systemd[1]: Created slice user.slice - User and Session Slice. Mar 19 11:46:25.012913 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 19 11:46:25.012925 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 19 11:46:25.012937 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 19 11:46:25.012950 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 19 11:46:25.012964 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 19 11:46:25.012976 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 19 11:46:25.012988 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 19 11:46:25.013002 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 19 11:46:25.013014 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 19 11:46:25.013025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 19 11:46:25.013037 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
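The zram_generator line above means no /etc/systemd/zram-generator.conf (or drop-in) was provisioned, so no compressed-RAM swap device is set up on this node. If one were wanted, it could be delivered through the same provisioning path as the other files; the snippet below is a hypothetical sketch, with the size expression and compression algorithm chosen only for illustration.

    # Hypothetical Butane sketch supplying a zram-generator configuration.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/systemd/zram-generator.conf
          contents:
            inline: |
              [zram0]
              zram-size = min(ram / 2, 4096)
              compression-algorithm = zstd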
Mar 19 11:46:25.013051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 19 11:46:25.013064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 19 11:46:25.013076 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 19 11:46:25.013088 systemd[1]: Reached target slices.target - Slice Units. Mar 19 11:46:25.013100 systemd[1]: Reached target swap.target - Swaps. Mar 19 11:46:25.013112 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 19 11:46:25.013124 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 19 11:46:25.013136 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 19 11:46:25.013147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 19 11:46:25.013159 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 19 11:46:25.013174 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 19 11:46:25.013197 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 19 11:46:25.013221 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 19 11:46:25.013233 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 19 11:46:25.013245 systemd[1]: Mounting media.mount - External Media Directory... Mar 19 11:46:25.013271 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:25.013283 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 19 11:46:25.013297 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 19 11:46:25.013319 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 19 11:46:25.013341 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 19 11:46:25.013354 systemd[1]: Reached target machines.target - Containers. Mar 19 11:46:25.013366 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 19 11:46:25.013379 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:25.013391 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 19 11:46:25.013403 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 19 11:46:25.013415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:25.013427 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:46:25.013443 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:25.013455 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 19 11:46:25.013467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:25.013479 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 19 11:46:25.013491 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 19 11:46:25.013504 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Mar 19 11:46:25.013516 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 19 11:46:25.013528 systemd[1]: Stopped systemd-fsck-usr.service. Mar 19 11:46:25.013543 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:25.013555 kernel: loop: module loaded Mar 19 11:46:25.013566 kernel: fuse: init (API version 7.39) Mar 19 11:46:25.013580 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 19 11:46:25.013592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 19 11:46:25.013613 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 19 11:46:25.013625 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 19 11:46:25.013637 kernel: ACPI: bus type drm_connector registered Mar 19 11:46:25.013658 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 19 11:46:25.013683 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 19 11:46:25.013698 systemd[1]: verity-setup.service: Deactivated successfully. Mar 19 11:46:25.013709 systemd[1]: Stopped verity-setup.service. Mar 19 11:46:25.013743 systemd-journald[1140]: Collecting audit messages is disabled. Mar 19 11:46:25.013774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:25.013787 systemd-journald[1140]: Journal started Mar 19 11:46:25.013809 systemd-journald[1140]: Runtime Journal (/run/log/journal/870144a0b4324827868a6f57a9e41b70) is 6M, max 48.4M, 42.3M free. Mar 19 11:46:24.747440 systemd[1]: Queued start job for default target multi-user.target. Mar 19 11:46:24.760270 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 19 11:46:24.760770 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 19 11:46:25.029287 systemd[1]: Started systemd-journald.service - Journal Service. Mar 19 11:46:25.033723 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 19 11:46:25.049263 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 19 11:46:25.050564 systemd[1]: Mounted media.mount - External Media Directory. Mar 19 11:46:25.051674 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 19 11:46:25.052902 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 19 11:46:25.054124 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 19 11:46:25.055489 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 19 11:46:25.057083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 19 11:46:25.058658 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 19 11:46:25.058941 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 19 11:46:25.060800 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:25.061044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:25.062555 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:46:25.062780 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Mar 19 11:46:25.064186 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:25.064420 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:25.066084 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 19 11:46:25.066313 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 19 11:46:25.067843 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:25.068053 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:25.069591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 19 11:46:25.071072 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 19 11:46:25.072692 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 19 11:46:25.074407 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 19 11:46:25.088551 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 19 11:46:25.100478 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 19 11:46:25.103387 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 19 11:46:25.104576 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 19 11:46:25.104634 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 19 11:46:25.115580 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 19 11:46:25.118340 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 19 11:46:25.123418 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 19 11:46:25.124784 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:25.126752 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 19 11:46:25.129077 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 19 11:46:25.130586 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:46:25.133374 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 19 11:46:25.135147 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:46:25.139447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 19 11:46:25.142133 systemd-journald[1140]: Time spent on flushing to /var/log/journal/870144a0b4324827868a6f57a9e41b70 is 15.276ms for 963 entries. Mar 19 11:46:25.142133 systemd-journald[1140]: System Journal (/var/log/journal/870144a0b4324827868a6f57a9e41b70) is 8M, max 195.6M, 187.6M free. Mar 19 11:46:25.167851 systemd-journald[1140]: Received client request to flush runtime journal. Mar 19 11:46:25.167890 kernel: loop0: detected capacity change from 0 to 218376 Mar 19 11:46:25.143522 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 19 11:46:25.147911 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Mar 19 11:46:25.158010 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 19 11:46:25.162237 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 19 11:46:25.165578 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 19 11:46:25.167642 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 19 11:46:25.169565 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 19 11:46:25.171312 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 19 11:46:25.173611 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 19 11:46:25.183180 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 19 11:46:25.192418 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 19 11:46:25.196303 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 19 11:46:25.197563 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 19 11:46:25.201970 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 19 11:46:25.215156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 19 11:46:25.221994 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 19 11:46:25.224892 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 19 11:46:25.227279 kernel: loop1: detected capacity change from 0 to 147912 Mar 19 11:46:25.244178 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 19 11:46:25.244201 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Mar 19 11:46:25.252862 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 19 11:46:25.266346 kernel: loop2: detected capacity change from 0 to 138176 Mar 19 11:46:25.307293 kernel: loop3: detected capacity change from 0 to 218376 Mar 19 11:46:25.317273 kernel: loop4: detected capacity change from 0 to 147912 Mar 19 11:46:25.328272 kernel: loop5: detected capacity change from 0 to 138176 Mar 19 11:46:25.338336 (sd-merge)[1208]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 19 11:46:25.338971 (sd-merge)[1208]: Merged extensions into '/usr'. Mar 19 11:46:25.343042 systemd[1]: Reload requested from client PID 1184 ('systemd-sysext') (unit systemd-sysext.service)... Mar 19 11:46:25.343059 systemd[1]: Reloading... Mar 19 11:46:25.412280 zram_generator::config[1239]: No configuration found. Mar 19 11:46:25.488180 ldconfig[1179]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 19 11:46:25.534710 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:46:25.599867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 19 11:46:25.600118 systemd[1]: Reloading finished in 256 ms. Mar 19 11:46:25.624180 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 19 11:46:25.625992 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 19 11:46:25.642008 systemd[1]: Starting ensure-sysext.service... 
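The sd-merge lines above show systemd-sysext overlaying three extension images onto /usr: Flatcar's built-in containerd and Docker sysexts plus the kubernetes image that Ignition linked into /etc/extensions earlier in this boot. Any additional *.raw image placed or symlinked under /etc/extensions is picked up the same way, provided it carries a matching extension-release file. The snippet below sketches how one more bakery-built extension could be added; the wasmtime name, version, and URL are hypothetical examples, not something present on this node.

    # Hypothetical Butane sketch adding a further sysext image.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /opt/extensions/wasmtime/wasmtime-24.0.0-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/wasmtime-24.0.0-x86-64.raw
      links:
        - path: /etc/extensions/wasmtime.raw
          target: /opt/extensions/wasmtime/wasmtime-24.0.0-x86-64.raw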
Mar 19 11:46:25.644383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 19 11:46:25.655946 systemd[1]: Reload requested from client PID 1273 ('systemctl') (unit ensure-sysext.service)... Mar 19 11:46:25.655965 systemd[1]: Reloading... Mar 19 11:46:25.670539 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 19 11:46:25.670833 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 19 11:46:25.671798 systemd-tmpfiles[1274]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 19 11:46:25.672085 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Mar 19 11:46:25.672169 systemd-tmpfiles[1274]: ACLs are not supported, ignoring. Mar 19 11:46:25.706294 zram_generator::config[1302]: No configuration found. Mar 19 11:46:25.709764 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:46:25.709777 systemd-tmpfiles[1274]: Skipping /boot Mar 19 11:46:25.722498 systemd-tmpfiles[1274]: Detected autofs mount point /boot during canonicalization of boot. Mar 19 11:46:25.722512 systemd-tmpfiles[1274]: Skipping /boot Mar 19 11:46:25.819352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:46:25.888017 systemd[1]: Reloading finished in 231 ms. Mar 19 11:46:25.901472 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 19 11:46:25.921138 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 19 11:46:25.930514 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:46:25.933334 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 19 11:46:25.936031 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 19 11:46:25.940831 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 19 11:46:25.945797 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 19 11:46:25.953337 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 19 11:46:25.957468 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:25.957652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:25.958979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:25.961750 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:25.964716 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:25.966162 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:25.966384 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:25.976912 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Mar 19 11:46:25.978313 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:25.980459 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 19 11:46:25.984503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:25.984817 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:25.986795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:25.987294 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:25.988550 systemd-udevd[1347]: Using default interface naming scheme 'v255'. Mar 19 11:46:25.990557 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:25.991036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:26.003915 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 19 11:46:26.010379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:26.010587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:26.016727 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:26.019983 augenrules[1377]: No rules Mar 19 11:46:26.019876 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:26.026417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:26.027922 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:26.028077 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:26.030086 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 19 11:46:26.032330 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:26.034037 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 19 11:46:26.038008 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 19 11:46:26.039814 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:46:26.040096 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:46:26.043232 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:26.043635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:26.045918 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:26.046123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:26.048092 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 19 11:46:26.048349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:26.051042 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 19 11:46:26.057085 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Mar 19 11:46:26.075061 systemd[1]: Finished ensure-sysext.service. Mar 19 11:46:26.078114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:26.085488 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 19 11:46:26.086658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 19 11:46:26.091443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 19 11:46:26.095372 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 19 11:46:26.099480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 19 11:46:26.111776 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 19 11:46:26.115414 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 19 11:46:26.115453 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 19 11:46:26.122009 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 19 11:46:26.128283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1396) Mar 19 11:46:26.131446 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 19 11:46:26.133359 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 19 11:46:26.133392 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 19 11:46:26.134247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 19 11:46:26.134534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 19 11:46:26.137110 systemd-resolved[1345]: Positive Trust Anchors: Mar 19 11:46:26.137478 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 19 11:46:26.137724 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 19 11:46:26.139676 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 19 11:46:26.139933 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 19 11:46:26.139960 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 19 11:46:26.141538 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 19 11:46:26.142589 augenrules[1416]: /sbin/augenrules: No change Mar 19 11:46:26.143354 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Mar 19 11:46:26.143562 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 19 11:46:26.152139 systemd-resolved[1345]: Defaulting to hostname 'linux'. Mar 19 11:46:26.154213 augenrules[1452]: No rules Mar 19 11:46:26.153183 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 19 11:46:26.159759 systemd[1]: audit-rules.service: Deactivated successfully. Mar 19 11:46:26.160277 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 19 11:46:26.164806 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 19 11:46:26.188536 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 19 11:46:26.189903 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 19 11:46:26.200398 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 19 11:46:26.202276 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 19 11:46:26.202558 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 19 11:46:26.202637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 19 11:46:26.211335 kernel: ACPI: button: Power Button [PWRF] Mar 19 11:46:26.220886 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 19 11:46:26.232548 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 19 11:46:26.234622 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 19 11:46:26.235180 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 19 11:46:26.238321 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 19 11:46:26.238865 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 19 11:46:26.240378 systemd[1]: Reached target time-set.target - System Time Set. Mar 19 11:46:26.244659 systemd-networkd[1430]: lo: Link UP Mar 19 11:46:26.244666 systemd-networkd[1430]: lo: Gained carrier Mar 19 11:46:26.246762 systemd-networkd[1430]: Enumeration completed Mar 19 11:46:26.246893 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 19 11:46:26.248446 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:26.248525 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 19 11:46:26.249246 systemd[1]: Reached target network.target - Network. Mar 19 11:46:26.250487 systemd-networkd[1430]: eth0: Link UP Mar 19 11:46:26.251321 systemd-networkd[1430]: eth0: Gained carrier Mar 19 11:46:26.252385 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 19 11:46:26.261413 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 19 11:46:26.266405 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.120/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 19 11:46:26.267104 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection. 
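eth0 above is configured by Flatcar's catch-all /usr/lib/systemd/network/zz-default.network, which runs DHCP and here yields 10.0.0.120/16 with gateway 10.0.0.1; systemd-timesyncd then synchronizes against the same 10.0.0.1. To override the default, a .network file that sorts earlier than zz-default.network would normally be dropped into /etc/systemd/network. The sketch below pins the DHCP-assigned values statically purely as an illustration; the file name and the choice of static addressing are assumptions.

    # Illustrative Butane sketch for a static eth0 configuration.
    variant: flatcar
    version: 1.0.0
    storage:
      files:
        - path: /etc/systemd/network/00-eth0.network
          contents:
            inline: |
              [Match]
              Name=eth0
              [Network]
              Address=10.0.0.120/16
              Gateway=10.0.0.1
              DNS=10.0.0.1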
Mar 19 11:46:26.267657 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 19 11:46:26.267694 systemd-timesyncd[1431]: Initial clock synchronization to Wed 2025-03-19 11:46:26.534986 UTC. Mar 19 11:46:26.274434 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 19 11:46:26.288486 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 19 11:46:26.305702 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 19 11:46:26.317650 kernel: mousedev: PS/2 mouse device common for all mice Mar 19 11:46:26.342706 kernel: kvm_amd: TSC scaling supported Mar 19 11:46:26.342773 kernel: kvm_amd: Nested Virtualization enabled Mar 19 11:46:26.342787 kernel: kvm_amd: Nested Paging enabled Mar 19 11:46:26.344984 kernel: kvm_amd: LBR virtualization supported Mar 19 11:46:26.345013 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 19 11:46:26.345039 kernel: kvm_amd: Virtual GIF supported Mar 19 11:46:26.365273 kernel: EDAC MC: Ver: 3.0.0 Mar 19 11:46:26.406993 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 19 11:46:26.408839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 19 11:46:26.427381 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 19 11:46:26.435417 lvm[1477]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:46:26.465636 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 19 11:46:26.467245 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 19 11:46:26.468412 systemd[1]: Reached target sysinit.target - System Initialization. Mar 19 11:46:26.469607 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 19 11:46:26.470876 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 19 11:46:26.472367 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 19 11:46:26.473606 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 19 11:46:26.474862 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 19 11:46:26.476154 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 19 11:46:26.476186 systemd[1]: Reached target paths.target - Path Units. Mar 19 11:46:26.477119 systemd[1]: Reached target timers.target - Timer Units. Mar 19 11:46:26.478967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 19 11:46:26.482276 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 19 11:46:26.485909 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 19 11:46:26.487333 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 19 11:46:26.488620 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 19 11:46:26.497663 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 19 11:46:26.499076 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Mar 19 11:46:26.501448 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 19 11:46:26.503047 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 19 11:46:26.504202 systemd[1]: Reached target sockets.target - Socket Units. Mar 19 11:46:26.505165 systemd[1]: Reached target basic.target - Basic System. Mar 19 11:46:26.506119 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:46:26.506151 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 19 11:46:26.507136 systemd[1]: Starting containerd.service - containerd container runtime... Mar 19 11:46:26.509308 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 19 11:46:26.513599 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 19 11:46:26.517147 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 19 11:46:26.517451 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 19 11:46:26.518576 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 19 11:46:26.521363 jq[1484]: false Mar 19 11:46:26.520423 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 19 11:46:26.523696 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 19 11:46:26.526290 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 19 11:46:26.534451 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 19 11:46:26.541540 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 19 11:46:26.543240 extend-filesystems[1485]: Found loop3 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found loop4 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found loop5 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found sr0 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda1 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda2 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda3 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found usr Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda4 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda6 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda7 Mar 19 11:46:26.544154 extend-filesystems[1485]: Found vda9 Mar 19 11:46:26.544154 extend-filesystems[1485]: Checking size of /dev/vda9 Mar 19 11:46:26.546909 dbus-daemon[1483]: [system] SELinux support is enabled Mar 19 11:46:26.548313 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 19 11:46:26.552810 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 19 11:46:26.558484 extend-filesystems[1485]: Resized partition /dev/vda9 Mar 19 11:46:26.561459 systemd[1]: Starting update-engine.service - Update Engine... Mar 19 11:46:26.565467 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 19 11:46:26.570435 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Mar 19 11:46:26.574275 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1400) Mar 19 11:46:26.576425 extend-filesystems[1505]: resize2fs 1.47.1 (20-May-2024) Mar 19 11:46:26.583366 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 19 11:46:26.583623 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 19 11:46:26.586809 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 19 11:46:26.587544 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 19 11:46:26.587889 systemd[1]: motdgen.service: Deactivated successfully. Mar 19 11:46:26.588030 jq[1506]: true Mar 19 11:46:26.589356 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 19 11:46:26.595682 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 19 11:46:26.595928 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 19 11:46:26.598719 update_engine[1503]: I20250319 11:46:26.598644 1503 main.cc:92] Flatcar Update Engine starting Mar 19 11:46:26.607351 update_engine[1503]: I20250319 11:46:26.605394 1503 update_check_scheduler.cc:74] Next update check in 7m27s Mar 19 11:46:26.612434 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 19 11:46:26.614310 jq[1510]: true Mar 19 11:46:26.617787 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 19 11:46:26.648872 tar[1509]: linux-amd64/LICENSE Mar 19 11:46:26.633381 systemd[1]: Started update-engine.service - Update Engine. Mar 19 11:46:26.635228 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 19 11:46:26.635267 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 19 11:46:26.650242 extend-filesystems[1505]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 19 11:46:26.650242 extend-filesystems[1505]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 19 11:46:26.650242 extend-filesystems[1505]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 19 11:46:26.653869 tar[1509]: linux-amd64/helm Mar 19 11:46:26.637088 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 19 11:46:26.654044 extend-filesystems[1485]: Resized filesystem in /dev/vda9 Mar 19 11:46:26.637104 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 19 11:46:26.647305 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 19 11:46:26.649306 systemd-logind[1495]: Watching system buttons on /dev/input/event1 (Power Button) Mar 19 11:46:26.649327 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 19 11:46:26.651327 systemd-logind[1495]: New seat seat0. Mar 19 11:46:26.651922 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 19 11:46:26.652277 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 19 11:46:26.668084 systemd[1]: Started systemd-logind.service - User Login Management. 
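For reference, the resize2fs run logged above grows the root filesystem on /dev/vda9 from 553472 to 1864699 blocks of 4 KiB, i.e. roughly 2.1 GiB to 7.1 GiB. A quick back-of-the-envelope conversion (Python, illustrative only, not part of the boot log):

    # Convert the 4 KiB block counts reported by resize2fs into sizes.
    BLOCK_SIZE = 4096  # "(4k) blocks" per the EXT4-fs messages above

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    print(f"before: {gib(553472):.2f} GiB")   # ~2.11 GiB
    print(f"after:  {gib(1864699):.2f} GiB")  # ~7.11 GiB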
Mar 19 11:46:26.681725 sshd_keygen[1501]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 19 11:46:26.689713 locksmithd[1527]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 19 11:46:26.697005 bash[1538]: Updated "/home/core/.ssh/authorized_keys" Mar 19 11:46:26.697910 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 19 11:46:26.701196 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 19 11:46:26.714684 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 19 11:46:26.729670 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 19 11:46:26.738150 systemd[1]: issuegen.service: Deactivated successfully. Mar 19 11:46:26.738461 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 19 11:46:26.746506 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 19 11:46:26.758175 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 19 11:46:26.770656 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 19 11:46:26.773358 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 19 11:46:26.774847 systemd[1]: Reached target getty.target - Login Prompts. Mar 19 11:46:26.843573 containerd[1514]: time="2025-03-19T11:46:26.843463018Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 19 11:46:26.865154 containerd[1514]: time="2025-03-19T11:46:26.864996852Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867097 containerd[1514]: time="2025-03-19T11:46:26.867040655Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867097 containerd[1514]: time="2025-03-19T11:46:26.867090639Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 19 11:46:26.867172 containerd[1514]: time="2025-03-19T11:46:26.867113121Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 19 11:46:26.867377 containerd[1514]: time="2025-03-19T11:46:26.867354073Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 19 11:46:26.867408 containerd[1514]: time="2025-03-19T11:46:26.867377016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867479 containerd[1514]: time="2025-03-19T11:46:26.867456114Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867479 containerd[1514]: time="2025-03-19T11:46:26.867473787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867861 containerd[1514]: time="2025-03-19T11:46:26.867820297Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867861 containerd[1514]: time="2025-03-19T11:46:26.867848510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867909 containerd[1514]: time="2025-03-19T11:46:26.867867385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:26.867909 containerd[1514]: time="2025-03-19T11:46:26.867881061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.868038 containerd[1514]: time="2025-03-19T11:46:26.868005695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.868360 containerd[1514]: time="2025-03-19T11:46:26.868328290Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 19 11:46:26.868543 containerd[1514]: time="2025-03-19T11:46:26.868513016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 19 11:46:26.868543 containerd[1514]: time="2025-03-19T11:46:26.868532613Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 19 11:46:26.868679 containerd[1514]: time="2025-03-19T11:46:26.868650935Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 19 11:46:26.868738 containerd[1514]: time="2025-03-19T11:46:26.868718371Z" level=info msg="metadata content store policy set" policy=shared Mar 19 11:46:26.921910 containerd[1514]: time="2025-03-19T11:46:26.921841676Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 19 11:46:26.921910 containerd[1514]: time="2025-03-19T11:46:26.921919903Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 19 11:46:26.922062 containerd[1514]: time="2025-03-19T11:46:26.921938588Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 19 11:46:26.922062 containerd[1514]: time="2025-03-19T11:46:26.921959457Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 19 11:46:26.922062 containerd[1514]: time="2025-03-19T11:46:26.921975587Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 19 11:46:26.922218 containerd[1514]: time="2025-03-19T11:46:26.922190040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 19 11:46:26.922509 containerd[1514]: time="2025-03-19T11:46:26.922478020Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 19 11:46:26.922643 containerd[1514]: time="2025-03-19T11:46:26.922611470Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Mar 19 11:46:26.922643 containerd[1514]: time="2025-03-19T11:46:26.922630816Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 19 11:46:26.922703 containerd[1514]: time="2025-03-19T11:46:26.922646295Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 19 11:46:26.922703 containerd[1514]: time="2025-03-19T11:46:26.922664670Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922703 containerd[1514]: time="2025-03-19T11:46:26.922680910Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922703 containerd[1514]: time="2025-03-19T11:46:26.922695107Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922710225Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922726786Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922742095Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922756231Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922768665Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922791658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922810 containerd[1514]: time="2025-03-19T11:46:26.922806826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922820221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922835300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922849536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922864444Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922879112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922899079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922917654Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922936750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922952519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.922979 containerd[1514]: time="2025-03-19T11:46:26.922973639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.923332 containerd[1514]: time="2025-03-19T11:46:26.922994128Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.923332 containerd[1514]: time="2025-03-19T11:46:26.923012452Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 19 11:46:26.923332 containerd[1514]: time="2025-03-19T11:46:26.923036878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.923332 containerd[1514]: time="2025-03-19T11:46:26.923054150Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.923332 containerd[1514]: time="2025-03-19T11:46:26.923068427Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 19 11:46:26.923905 containerd[1514]: time="2025-03-19T11:46:26.923865662Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 19 11:46:26.923905 containerd[1514]: time="2025-03-19T11:46:26.923891390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 19 11:46:26.923905 containerd[1514]: time="2025-03-19T11:46:26.923902180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 19 11:46:26.923905 containerd[1514]: time="2025-03-19T11:46:26.923914494Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 19 11:46:26.923905 containerd[1514]: time="2025-03-19T11:46:26.923924703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 19 11:46:26.924134 containerd[1514]: time="2025-03-19T11:46:26.923937577Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 19 11:46:26.924134 containerd[1514]: time="2025-03-19T11:46:26.923950301Z" level=info msg="NRI interface is disabled by configuration." Mar 19 11:46:26.924134 containerd[1514]: time="2025-03-19T11:46:26.923975368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 19 11:46:26.924604 containerd[1514]: time="2025-03-19T11:46:26.924503558Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 19 11:46:26.924604 containerd[1514]: time="2025-03-19T11:46:26.924600430Z" level=info msg="Connect containerd service" Mar 19 11:46:26.924766 containerd[1514]: time="2025-03-19T11:46:26.924636167Z" level=info msg="using legacy CRI server" Mar 19 11:46:26.924766 containerd[1514]: time="2025-03-19T11:46:26.924648260Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 19 11:46:26.925147 containerd[1514]: time="2025-03-19T11:46:26.925105938Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 19 11:46:26.928997 containerd[1514]: time="2025-03-19T11:46:26.928945949Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 19 11:46:26.929200 
containerd[1514]: time="2025-03-19T11:46:26.929170951Z" level=info msg="Start subscribing containerd event" Mar 19 11:46:26.929244 containerd[1514]: time="2025-03-19T11:46:26.929215835Z" level=info msg="Start recovering state" Mar 19 11:46:26.929613 containerd[1514]: time="2025-03-19T11:46:26.929307067Z" level=info msg="Start event monitor" Mar 19 11:46:26.929613 containerd[1514]: time="2025-03-19T11:46:26.929330711Z" level=info msg="Start snapshots syncer" Mar 19 11:46:26.929613 containerd[1514]: time="2025-03-19T11:46:26.929340469Z" level=info msg="Start cni network conf syncer for default" Mar 19 11:46:26.929613 containerd[1514]: time="2025-03-19T11:46:26.929348013Z" level=info msg="Start streaming server" Mar 19 11:46:26.929693 containerd[1514]: time="2025-03-19T11:46:26.929651072Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 19 11:46:26.929742 containerd[1514]: time="2025-03-19T11:46:26.929726613Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 19 11:46:26.930011 containerd[1514]: time="2025-03-19T11:46:26.929805581Z" level=info msg="containerd successfully booted in 0.087516s" Mar 19 11:46:26.929898 systemd[1]: Started containerd.service - containerd container runtime. Mar 19 11:46:27.061473 tar[1509]: linux-amd64/README.md Mar 19 11:46:27.079635 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 19 11:46:27.795371 systemd-networkd[1430]: eth0: Gained IPv6LL Mar 19 11:46:27.799011 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 19 11:46:27.807383 systemd[1]: Reached target network-online.target - Network is Online. Mar 19 11:46:27.826568 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 19 11:46:27.867548 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:27.870670 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 19 11:46:27.891804 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 19 11:46:27.892165 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 19 11:46:27.894080 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 19 11:46:27.901657 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 19 11:46:29.185620 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:29.187671 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 19 11:46:29.189063 systemd[1]: Startup finished in 737ms (kernel) + 6.426s (initrd) + 5.106s (userspace) = 12.270s. Mar 19 11:46:29.218792 (kubelet)[1596]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:46:30.655900 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 19 11:46:30.658010 systemd[1]: Started sshd@0-10.0.0.120:22-10.0.0.1:45444.service - OpenSSH per-connection server daemon (10.0.0.1:45444). Mar 19 11:46:30.738633 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 45444 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:46:30.740973 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:46:30.753904 systemd-logind[1495]: New session 1 of user core. Mar 19 11:46:30.755533 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
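Note on the CNI error above: "no network config found in /etc/cni/net.d" is expected at this stage, since nothing has installed a pod network yet; containerd picks the config up once one appears. A rough, illustrative approximation of that check (Python sketch; the helper name and suffix filter are assumptions, the real loading is done by containerd's CNI library):

    from pathlib import Path

    def cni_configs(conf_dir: str = "/etc/cni/net.d"):
        # List candidate CNI network configs the way an admin might eyeball them.
        p = Path(conf_dir)
        return sorted(f for f in p.iterdir() if f.suffix in {".conf", ".conflist"}) if p.is_dir() else []

    if not cni_configs():
        print("no CNI network config yet; pod networking stays unavailable until one is installed")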
Mar 19 11:46:30.765644 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 19 11:46:30.780145 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 19 11:46:30.783594 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 19 11:46:30.800952 (systemd)[1612]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 19 11:46:30.803708 systemd-logind[1495]: New session c1 of user core. Mar 19 11:46:30.934137 kubelet[1596]: E0319 11:46:30.933994 1596 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:46:30.967776 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:46:30.968052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:46:30.968620 systemd[1]: kubelet.service: Consumed 2.827s CPU time, 254.8M memory peak. Mar 19 11:46:30.974916 systemd[1612]: Queued start job for default target default.target. Mar 19 11:46:30.981716 systemd[1612]: Created slice app.slice - User Application Slice. Mar 19 11:46:30.981744 systemd[1612]: Reached target paths.target - Paths. Mar 19 11:46:30.981797 systemd[1612]: Reached target timers.target - Timers. Mar 19 11:46:30.983564 systemd[1612]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 19 11:46:30.996388 systemd[1612]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 19 11:46:30.996540 systemd[1612]: Reached target sockets.target - Sockets. Mar 19 11:46:30.996594 systemd[1612]: Reached target basic.target - Basic System. Mar 19 11:46:30.996643 systemd[1612]: Reached target default.target - Main User Target. Mar 19 11:46:30.996677 systemd[1612]: Startup finished in 185ms. Mar 19 11:46:30.997135 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 19 11:46:30.998869 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 19 11:46:31.069670 systemd[1]: Started sshd@1-10.0.0.120:22-10.0.0.1:45446.service - OpenSSH per-connection server daemon (10.0.0.1:45446). Mar 19 11:46:31.105421 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 45446 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:46:31.107139 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:46:31.112090 systemd-logind[1495]: New session 2 of user core. Mar 19 11:46:31.122610 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 19 11:46:31.177050 sshd[1626]: Connection closed by 10.0.0.1 port 45446 Mar 19 11:46:31.177556 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Mar 19 11:46:31.196688 systemd[1]: sshd@1-10.0.0.120:22-10.0.0.1:45446.service: Deactivated successfully. Mar 19 11:46:31.198953 systemd[1]: session-2.scope: Deactivated successfully. Mar 19 11:46:31.200659 systemd-logind[1495]: Session 2 logged out. Waiting for processes to exit. Mar 19 11:46:31.209756 systemd[1]: Started sshd@2-10.0.0.120:22-10.0.0.1:45448.service - OpenSSH per-connection server daemon (10.0.0.1:45448). Mar 19 11:46:31.210756 systemd-logind[1495]: Removed session 2. 
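The kubelet exit above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the usual pre-bootstrap crash loop: systemd keeps restarting the unit until kubeadm init/join writes that file. A minimal illustrative check (Python sketch; the YAML header shown is the standard KubeletConfiguration preamble, included only as an example of what kubeadm generates):

    from pathlib import Path

    KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")
    EXPECTED_HEADER = "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"

    if not KUBELET_CONFIG.exists():
        print(f"{KUBELET_CONFIG} is missing; kubeadm generates it, beginning with:")
        print(EXPECTED_HEADER)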
Mar 19 11:46:31.241304 sshd[1631]: Accepted publickey for core from 10.0.0.1 port 45448 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:46:31.242794 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:46:31.247220 systemd-logind[1495]: New session 3 of user core. Mar 19 11:46:31.256419 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 19 11:46:31.306378 sshd[1634]: Connection closed by 10.0.0.1 port 45448 Mar 19 11:46:31.306732 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Mar 19 11:46:31.328261 systemd[1]: sshd@2-10.0.0.120:22-10.0.0.1:45448.service: Deactivated successfully. Mar 19 11:46:31.330212 systemd[1]: session-3.scope: Deactivated successfully. Mar 19 11:46:31.331569 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. Mar 19 11:46:31.332810 systemd[1]: Started sshd@3-10.0.0.120:22-10.0.0.1:45460.service - OpenSSH per-connection server daemon (10.0.0.1:45460). Mar 19 11:46:31.333681 systemd-logind[1495]: Removed session 3. Mar 19 11:46:31.368997 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 45460 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:46:31.371102 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:46:31.375930 systemd-logind[1495]: New session 4 of user core. Mar 19 11:46:31.385493 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 19 11:46:31.442689 sshd[1642]: Connection closed by 10.0.0.1 port 45460 Mar 19 11:46:31.443121 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Mar 19 11:46:31.456882 systemd[1]: sshd@3-10.0.0.120:22-10.0.0.1:45460.service: Deactivated successfully. Mar 19 11:46:31.459087 systemd[1]: session-4.scope: Deactivated successfully. Mar 19 11:46:31.460835 systemd-logind[1495]: Session 4 logged out. Waiting for processes to exit. Mar 19 11:46:31.468841 systemd[1]: Started sshd@4-10.0.0.120:22-10.0.0.1:45466.service - OpenSSH per-connection server daemon (10.0.0.1:45466). Mar 19 11:46:31.470068 systemd-logind[1495]: Removed session 4. Mar 19 11:46:31.500925 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45466 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:46:31.502691 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:46:31.507636 systemd-logind[1495]: New session 5 of user core. Mar 19 11:46:31.517480 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 19 11:46:31.578359 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 19 11:46:31.578807 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 19 11:46:32.830539 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 19 11:46:32.830687 (dockerd)[1670]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 19 11:46:33.531531 dockerd[1670]: time="2025-03-19T11:46:33.531443786Z" level=info msg="Starting up" Mar 19 11:46:34.204143 dockerd[1670]: time="2025-03-19T11:46:34.204056950Z" level=info msg="Loading containers: start." 
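Once dockerd below reports "API listen on /run/docker.sock", the daemon can be probed over its unix socket. A minimal liveness check (Python sketch, stdlib only; GET /_ping is the Docker Engine health endpoint, everything else here is illustrative):

    import socket

    def docker_ping(path: str = "/run/docker.sock") -> bool:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
            return b"200 OK" in s.recv(4096)
        except OSError:
            return False
        finally:
            s.close()

    print("docker responding:", docker_ping())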
Mar 19 11:46:34.448293 kernel: Initializing XFRM netlink socket Mar 19 11:46:34.547498 systemd-networkd[1430]: docker0: Link UP Mar 19 11:46:34.603394 dockerd[1670]: time="2025-03-19T11:46:34.603323782Z" level=info msg="Loading containers: done." Mar 19 11:46:34.622695 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck339611731-merged.mount: Deactivated successfully. Mar 19 11:46:34.625661 dockerd[1670]: time="2025-03-19T11:46:34.625593859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 19 11:46:34.625800 dockerd[1670]: time="2025-03-19T11:46:34.625771763Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 19 11:46:34.625983 dockerd[1670]: time="2025-03-19T11:46:34.625956609Z" level=info msg="Daemon has completed initialization" Mar 19 11:46:34.702484 dockerd[1670]: time="2025-03-19T11:46:34.702382312Z" level=info msg="API listen on /run/docker.sock" Mar 19 11:46:34.702667 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 19 11:46:35.849246 containerd[1514]: time="2025-03-19T11:46:35.849205221Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 19 11:46:37.368700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697595428.mount: Deactivated successfully. Mar 19 11:46:38.573356 containerd[1514]: time="2025-03-19T11:46:38.573285989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:38.574576 containerd[1514]: time="2025-03-19T11:46:38.574539725Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 19 11:46:38.576173 containerd[1514]: time="2025-03-19T11:46:38.576142576Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:38.579890 containerd[1514]: time="2025-03-19T11:46:38.579859352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:38.580969 containerd[1514]: time="2025-03-19T11:46:38.580941720Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 2.731696696s" Mar 19 11:46:38.581023 containerd[1514]: time="2025-03-19T11:46:38.580971564Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 19 11:46:38.581734 containerd[1514]: time="2025-03-19T11:46:38.581698729Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 19 11:46:39.968290 containerd[1514]: time="2025-03-19T11:46:39.968222109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:39.969018 
containerd[1514]: time="2025-03-19T11:46:39.968983136Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 19 11:46:39.970082 containerd[1514]: time="2025-03-19T11:46:39.970052580Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:39.973691 containerd[1514]: time="2025-03-19T11:46:39.973635555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:39.975117 containerd[1514]: time="2025-03-19T11:46:39.975075859Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.393343361s" Mar 19 11:46:39.975117 containerd[1514]: time="2025-03-19T11:46:39.975113420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 19 11:46:39.975778 containerd[1514]: time="2025-03-19T11:46:39.975750851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 19 11:46:41.007290 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 19 11:46:41.015500 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:41.177749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
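The pull messages above report both the image size and the elapsed time, so the effective pull rate is easy to derive (Python; sizes and durations copied from the containerd lines above, illustration only):

    pulls = {
        "kube-apiserver:v1.32.3": (28679230, 2.731696696),           # bytes, seconds
        "kube-controller-manager:v1.32.3": (26267292, 1.393343361),
    }
    for image, (size, secs) in pulls.items():
        print(f"{image}: {size / secs / 2**20:.1f} MiB/s")   # ~10.0 and ~18.0 MiB/s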
Mar 19 11:46:41.181735 (kubelet)[1937]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:46:42.129605 containerd[1514]: time="2025-03-19T11:46:42.129530177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:42.130678 containerd[1514]: time="2025-03-19T11:46:42.130574113Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419" Mar 19 11:46:42.131952 containerd[1514]: time="2025-03-19T11:46:42.131901018Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:42.134989 containerd[1514]: time="2025-03-19T11:46:42.134942351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:42.136021 containerd[1514]: time="2025-03-19T11:46:42.135977743Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 2.160187705s" Mar 19 11:46:42.136074 containerd[1514]: time="2025-03-19T11:46:42.136025194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\"" Mar 19 11:46:42.136581 containerd[1514]: time="2025-03-19T11:46:42.136543670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\"" Mar 19 11:46:42.160807 kubelet[1937]: E0319 11:46:42.160733 1937 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:46:42.167529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:46:42.167750 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:46:42.168151 systemd[1]: kubelet.service: Consumed 312ms CPU time, 105M memory peak. Mar 19 11:46:43.130165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2374779900.mount: Deactivated successfully. 
Mar 19 11:46:44.943132 containerd[1514]: time="2025-03-19T11:46:44.943022940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:44.996390 containerd[1514]: time="2025-03-19T11:46:44.996309840Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185" Mar 19 11:46:45.042734 containerd[1514]: time="2025-03-19T11:46:45.042653831Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:45.076367 containerd[1514]: time="2025-03-19T11:46:45.076278919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:45.077177 containerd[1514]: time="2025-03-19T11:46:45.077137330Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 2.940556319s" Mar 19 11:46:45.077177 containerd[1514]: time="2025-03-19T11:46:45.077173958Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\"" Mar 19 11:46:45.077707 containerd[1514]: time="2025-03-19T11:46:45.077674934Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Mar 19 11:46:46.846949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount177572605.mount: Deactivated successfully. 
Mar 19 11:46:48.651273 containerd[1514]: time="2025-03-19T11:46:48.651191751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:48.652224 containerd[1514]: time="2025-03-19T11:46:48.652173782Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Mar 19 11:46:48.653548 containerd[1514]: time="2025-03-19T11:46:48.653518766Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:48.656634 containerd[1514]: time="2025-03-19T11:46:48.656588443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:48.657810 containerd[1514]: time="2025-03-19T11:46:48.657776948Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 3.580069725s" Mar 19 11:46:48.657810 containerd[1514]: time="2025-03-19T11:46:48.657810458Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Mar 19 11:46:48.658729 containerd[1514]: time="2025-03-19T11:46:48.658705611Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 19 11:46:49.144422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2782951433.mount: Deactivated successfully. 
Mar 19 11:46:49.151851 containerd[1514]: time="2025-03-19T11:46:49.151799440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:49.152520 containerd[1514]: time="2025-03-19T11:46:49.152477408Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 19 11:46:49.153631 containerd[1514]: time="2025-03-19T11:46:49.153603869Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:49.156024 containerd[1514]: time="2025-03-19T11:46:49.155995123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:49.156814 containerd[1514]: time="2025-03-19T11:46:49.156788641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 498.053517ms" Mar 19 11:46:49.156889 containerd[1514]: time="2025-03-19T11:46:49.156817546Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 19 11:46:49.157294 containerd[1514]: time="2025-03-19T11:46:49.157268839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Mar 19 11:46:49.749284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3187127514.mount: Deactivated successfully. Mar 19 11:46:52.257179 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 19 11:46:52.268522 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:52.981196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 19 11:46:52.985226 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 19 11:46:53.387121 containerd[1514]: time="2025-03-19T11:46:53.387075694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:53.388693 containerd[1514]: time="2025-03-19T11:46:53.388032820Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Mar 19 11:46:53.389351 containerd[1514]: time="2025-03-19T11:46:53.389305875Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:53.394876 containerd[1514]: time="2025-03-19T11:46:53.392931919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:46:53.394876 containerd[1514]: time="2025-03-19T11:46:53.394400985Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.237072954s" Mar 19 11:46:53.394876 containerd[1514]: time="2025-03-19T11:46:53.394422848Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Mar 19 11:46:53.417456 kubelet[2075]: E0319 11:46:53.417412 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 19 11:46:53.422208 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 19 11:46:53.422431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 19 11:46:53.423243 systemd[1]: kubelet.service: Consumed 617ms CPU time, 106.2M memory peak. Mar 19 11:46:55.920176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:55.920451 systemd[1]: kubelet.service: Consumed 617ms CPU time, 106.2M memory peak. Mar 19 11:46:55.932517 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:55.959519 systemd[1]: Reload requested from client PID 2113 ('systemctl') (unit session-5.scope)... Mar 19 11:46:55.959538 systemd[1]: Reloading... Mar 19 11:46:56.053284 zram_generator::config[2163]: No configuration found. Mar 19 11:46:56.269309 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:46:56.390710 systemd[1]: Reloading finished in 430 ms. Mar 19 11:46:56.437990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:56.441799 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:56.442738 systemd[1]: kubelet.service: Deactivated successfully. 
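The journal above now records the kubelet crash loop (restart counter 1, then 2) interleaved with the image pulls. A small illustrative helper for pulling those counters out of a saved copy of this log (Python sketch; the function and file names are assumptions):

    import re

    def restart_counters(journal_text: str, unit: str = "kubelet.service"):
        # Extract the "restart counter is at N" values for one systemd unit.
        pat = re.compile(rf"{re.escape(unit)}: Scheduled restart job, restart counter is at (\d+)")
        return [int(m.group(1)) for m in pat.finditer(journal_text)]

    # e.g. restart_counters(open("boot.log").read()) -> [1, 2]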
Mar 19 11:46:56.443003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:56.443041 systemd[1]: kubelet.service: Consumed 144ms CPU time, 91.9M memory peak. Mar 19 11:46:56.444747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:46:56.602788 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:46:56.607668 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:46:56.646480 kubelet[2207]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:46:56.646480 kubelet[2207]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:46:56.646480 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:46:56.646872 kubelet[2207]: I0319 11:46:56.646560 2207 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:46:57.057938 kubelet[2207]: I0319 11:46:57.057879 2207 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:46:57.057938 kubelet[2207]: I0319 11:46:57.057921 2207 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:46:57.058214 kubelet[2207]: I0319 11:46:57.058191 2207 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:46:57.084059 kubelet[2207]: I0319 11:46:57.084003 2207 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:46:57.085084 kubelet[2207]: E0319 11:46:57.085055 2207 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.120:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:57.091712 kubelet[2207]: E0319 11:46:57.091671 2207 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:46:57.091712 kubelet[2207]: I0319 11:46:57.091704 2207 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 19 11:46:57.098354 kubelet[2207]: I0319 11:46:57.098299 2207 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 19 11:46:57.100153 kubelet[2207]: I0319 11:46:57.100091 2207 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:46:57.100445 kubelet[2207]: I0319 11:46:57.100147 2207 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:46:57.100537 kubelet[2207]: I0319 11:46:57.100456 2207 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:46:57.100537 kubelet[2207]: I0319 11:46:57.100470 2207 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:46:57.100692 kubelet[2207]: I0319 11:46:57.100668 2207 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:46:57.103456 kubelet[2207]: I0319 11:46:57.103433 2207 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:46:57.103456 kubelet[2207]: I0319 11:46:57.103451 2207 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:46:57.103518 kubelet[2207]: I0319 11:46:57.103477 2207 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:46:57.103518 kubelet[2207]: I0319 11:46:57.103491 2207 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:46:57.106876 kubelet[2207]: I0319 11:46:57.106842 2207 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:46:57.107554 kubelet[2207]: I0319 11:46:57.107212 2207 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:46:57.109004 kubelet[2207]: W0319 11:46:57.108963 2207 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
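The NodeConfig dump above includes the kubelet's hard-eviction thresholds. Restated as the equivalent evictionHard map for readability (Python; values copied from the logged HardEvictionThresholds, illustration only):

    eviction_hard = {
        "memory.available": "100Mi",
        "nodefs.available": "10%",
        "nodefs.inodesFree": "5%",
        "imagefs.available": "15%",
        "imagefs.inodesFree": "5%",
    }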
Mar 19 11:46:57.109764 kubelet[2207]: W0319 11:46:57.109710 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:57.109839 kubelet[2207]: W0319 11:46:57.109751 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:57.109839 kubelet[2207]: E0319 11:46:57.109778 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:57.109839 kubelet[2207]: E0319 11:46:57.109804 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:57.111445 kubelet[2207]: I0319 11:46:57.111418 2207 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:46:57.111517 kubelet[2207]: I0319 11:46:57.111455 2207 server.go:1287] "Started kubelet" Mar 19 11:46:57.112484 kubelet[2207]: I0319 11:46:57.112294 2207 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:46:57.112645 kubelet[2207]: I0319 11:46:57.112609 2207 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:46:57.112871 kubelet[2207]: I0319 11:46:57.112845 2207 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:46:57.112954 kubelet[2207]: I0319 11:46:57.112939 2207 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:46:57.113519 kubelet[2207]: I0319 11:46:57.113105 2207 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:46:57.114344 kubelet[2207]: I0319 11:46:57.113745 2207 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:46:57.115235 kubelet[2207]: E0319 11:46:57.114878 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:46:57.115235 kubelet[2207]: I0319 11:46:57.114909 2207 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:46:57.115235 kubelet[2207]: I0319 11:46:57.115053 2207 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:46:57.115235 kubelet[2207]: I0319 11:46:57.115092 2207 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:46:57.115716 kubelet[2207]: W0319 11:46:57.115441 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:57.115716 kubelet[2207]: E0319 11:46:57.115480 2207 reflector.go:166] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:57.115716 kubelet[2207]: E0319 11:46:57.115482 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="200ms" Mar 19 11:46:57.116208 kubelet[2207]: E0319 11:46:57.116184 2207 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:46:57.116970 kubelet[2207]: I0319 11:46:57.116950 2207 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:46:57.116970 kubelet[2207]: I0319 11:46:57.116965 2207 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:46:57.117069 kubelet[2207]: I0319 11:46:57.117045 2207 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:46:57.118099 kubelet[2207]: E0319 11:46:57.116891 2207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.120:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.120:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182e31c275f6029a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-19 11:46:57.111433882 +0000 UTC m=+0.499924890,LastTimestamp:2025-03-19 11:46:57.111433882 +0000 UTC m=+0.499924890,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 19 11:46:57.130598 kubelet[2207]: I0319 11:46:57.130451 2207 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:46:57.130598 kubelet[2207]: I0319 11:46:57.130525 2207 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:46:57.130598 kubelet[2207]: I0319 11:46:57.130546 2207 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:46:57.130970 kubelet[2207]: I0319 11:46:57.130951 2207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:46:57.133327 kubelet[2207]: I0319 11:46:57.132491 2207 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:46:57.133327 kubelet[2207]: I0319 11:46:57.132531 2207 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:46:57.133327 kubelet[2207]: I0319 11:46:57.132550 2207 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:46:57.133327 kubelet[2207]: I0319 11:46:57.132557 2207 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:46:57.133327 kubelet[2207]: E0319 11:46:57.132606 2207 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:46:57.215714 kubelet[2207]: E0319 11:46:57.215661 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:46:57.232896 kubelet[2207]: E0319 11:46:57.232829 2207 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 19 11:46:57.316360 kubelet[2207]: E0319 11:46:57.316214 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:46:57.316800 kubelet[2207]: E0319 11:46:57.316770 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="400ms" Mar 19 11:46:57.352831 kubelet[2207]: W0319 11:46:57.352754 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:57.352926 kubelet[2207]: E0319 11:46:57.352831 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:57.353488 kubelet[2207]: I0319 11:46:57.353458 2207 policy_none.go:49] "None policy: Start" Mar 19 11:46:57.353518 kubelet[2207]: I0319 11:46:57.353488 2207 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:46:57.353518 kubelet[2207]: I0319 11:46:57.353502 2207 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:46:57.360530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 19 11:46:57.375181 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 19 11:46:57.389121 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 19 11:46:57.390316 kubelet[2207]: I0319 11:46:57.390291 2207 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:46:57.390587 kubelet[2207]: I0319 11:46:57.390535 2207 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:46:57.390587 kubelet[2207]: I0319 11:46:57.390552 2207 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:46:57.391273 kubelet[2207]: I0319 11:46:57.390752 2207 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:46:57.391558 kubelet[2207]: E0319 11:46:57.391435 2207 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 19 11:46:57.391558 kubelet[2207]: E0319 11:46:57.391485 2207 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 19 11:46:57.441183 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice. Mar 19 11:46:57.451025 kubelet[2207]: E0319 11:46:57.450995 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:57.453980 systemd[1]: Created slice kubepods-burstable-pod51cf42a883f05cc547af55183cab3ae7.slice - libcontainer container kubepods-burstable-pod51cf42a883f05cc547af55183cab3ae7.slice. Mar 19 11:46:57.470594 kubelet[2207]: E0319 11:46:57.470550 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:57.472006 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice. Mar 19 11:46:57.473748 kubelet[2207]: E0319 11:46:57.473728 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:57.491772 kubelet[2207]: I0319 11:46:57.491747 2207 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:46:57.492127 kubelet[2207]: E0319 11:46:57.492091 2207 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Mar 19 11:46:57.517578 kubelet[2207]: I0319 11:46:57.517529 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:46:57.517578 kubelet[2207]: I0319 11:46:57.517566 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:46:57.517696 kubelet[2207]: I0319 11:46:57.517593 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:46:57.517696 kubelet[2207]: I0319 11:46:57.517613 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:46:57.517745 kubelet[2207]: I0319 11:46:57.517678 2207 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:46:57.517774 kubelet[2207]: I0319 11:46:57.517738 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:46:57.517797 kubelet[2207]: I0319 11:46:57.517772 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:46:57.517797 kubelet[2207]: I0319 11:46:57.517793 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:46:57.517847 kubelet[2207]: I0319 11:46:57.517814 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:46:57.694379 kubelet[2207]: I0319 11:46:57.694190 2207 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:46:57.694791 kubelet[2207]: E0319 11:46:57.694694 2207 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Mar 19 11:46:57.717579 kubelet[2207]: E0319 11:46:57.717523 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="800ms" Mar 19 11:46:57.751922 kubelet[2207]: E0319 11:46:57.751898 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:57.752558 containerd[1514]: time="2025-03-19T11:46:57.752522249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}" Mar 19 11:46:57.771745 kubelet[2207]: E0319 11:46:57.771707 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:57.772035 containerd[1514]: time="2025-03-19T11:46:57.772002297Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51cf42a883f05cc547af55183cab3ae7,Namespace:kube-system,Attempt:0,}" Mar 19 11:46:57.774494 kubelet[2207]: E0319 11:46:57.774465 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:57.774929 containerd[1514]: time="2025-03-19T11:46:57.774866988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}" Mar 19 11:46:58.096744 kubelet[2207]: I0319 11:46:58.096708 2207 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:46:58.097039 kubelet[2207]: E0319 11:46:58.097003 2207 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.120:6443/api/v1/nodes\": dial tcp 10.0.0.120:6443: connect: connection refused" node="localhost" Mar 19 11:46:58.300878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount270894978.mount: Deactivated successfully. Mar 19 11:46:58.308170 containerd[1514]: time="2025-03-19T11:46:58.308120257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:46:58.311032 containerd[1514]: time="2025-03-19T11:46:58.310950629Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 19 11:46:58.311939 containerd[1514]: time="2025-03-19T11:46:58.311906298Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:46:58.313710 containerd[1514]: time="2025-03-19T11:46:58.313668519Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:46:58.314587 containerd[1514]: time="2025-03-19T11:46:58.314563172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:46:58.315446 containerd[1514]: time="2025-03-19T11:46:58.315406953Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:46:58.316326 containerd[1514]: time="2025-03-19T11:46:58.316294670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 19 11:46:58.317226 containerd[1514]: time="2025-03-19T11:46:58.317199578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 19 11:46:58.319553 containerd[1514]: time="2025-03-19T11:46:58.319524873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 547.466157ms" Mar 19 11:46:58.320314 
containerd[1514]: time="2025-03-19T11:46:58.320288672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 567.652781ms" Mar 19 11:46:58.320978 kubelet[2207]: W0319 11:46:58.320641 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:58.321020 kubelet[2207]: E0319 11:46:58.320987 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.120:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:58.325105 containerd[1514]: time="2025-03-19T11:46:58.325066590Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.137217ms" Mar 19 11:46:58.368957 kubelet[2207]: W0319 11:46:58.368810 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:58.368957 kubelet[2207]: E0319 11:46:58.368883 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.120:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.488799404Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.488861774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.488942579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.489002483Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.489017439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.489100829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.489557 containerd[1514]: time="2025-03-19T11:46:58.489467371Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.489877 containerd[1514]: time="2025-03-19T11:46:58.489566059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.490411 containerd[1514]: time="2025-03-19T11:46:58.488770274Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:46:58.490411 containerd[1514]: time="2025-03-19T11:46:58.490368510Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:46:58.490411 containerd[1514]: time="2025-03-19T11:46:58.490379517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.490626 containerd[1514]: time="2025-03-19T11:46:58.490454848Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:46:58.515402 systemd[1]: Started cri-containerd-075d1d5d30bd31d473ebb3970244f4e10d658126fbbb2f07b0001285d844c70a.scope - libcontainer container 075d1d5d30bd31d473ebb3970244f4e10d658126fbbb2f07b0001285d844c70a. Mar 19 11:46:58.516905 systemd[1]: Started cri-containerd-ba116da94249e400562e14f84b2cd49e6672e4993e31b1fc305a8d6ad1a4cbd6.scope - libcontainer container ba116da94249e400562e14f84b2cd49e6672e4993e31b1fc305a8d6ad1a4cbd6. Mar 19 11:46:58.518329 kubelet[2207]: E0319 11:46:58.518209 2207 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.120:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.120:6443: connect: connection refused" interval="1.6s" Mar 19 11:46:58.518781 systemd[1]: Started cri-containerd-eebab44337d5ecd857f6e81811096283799d43f4f0826d13bfeffc24b6875351.scope - libcontainer container eebab44337d5ecd857f6e81811096283799d43f4f0826d13bfeffc24b6875351. 
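[Editor's note] The controller.go:145 "Failed to ensure lease exists, will retry" entries report interval="200ms", then "400ms", "800ms" and now "1.6s": the retry interval doubles after each consecutive failure while the apiserver is unreachable. A tiny Go sketch of that double-with-cap pattern; only the doubling and the 200ms starting point come from the log, the 7s cap is an assumed value for illustration.

package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval on each consecutive failure, up to
// a cap, matching the 200ms -> 400ms -> 800ms -> 1.6s progression above.
func nextInterval(cur, limit time.Duration) time.Duration {
	next := 2 * cur
	if next > limit {
		return limit
	}
	return next
}

func main() {
	interval := 200 * time.Millisecond
	for i := 0; i < 6; i++ {
		fmt.Printf("retry %d after %v\n", i+1, interval)
		interval = nextInterval(interval, 7*time.Second)
	}
}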
Mar 19 11:46:58.558894 containerd[1514]: time="2025-03-19T11:46:58.558818055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:51cf42a883f05cc547af55183cab3ae7,Namespace:kube-system,Attempt:0,} returns sandbox id \"075d1d5d30bd31d473ebb3970244f4e10d658126fbbb2f07b0001285d844c70a\"" Mar 19 11:46:58.560024 kubelet[2207]: E0319 11:46:58.560003 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:58.562717 containerd[1514]: time="2025-03-19T11:46:58.562572060Z" level=info msg="CreateContainer within sandbox \"075d1d5d30bd31d473ebb3970244f4e10d658126fbbb2f07b0001285d844c70a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 19 11:46:58.564860 containerd[1514]: time="2025-03-19T11:46:58.564649901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"eebab44337d5ecd857f6e81811096283799d43f4f0826d13bfeffc24b6875351\"" Mar 19 11:46:58.564948 containerd[1514]: time="2025-03-19T11:46:58.564834749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba116da94249e400562e14f84b2cd49e6672e4993e31b1fc305a8d6ad1a4cbd6\"" Mar 19 11:46:58.565901 kubelet[2207]: E0319 11:46:58.565711 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:58.565901 kubelet[2207]: E0319 11:46:58.565749 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:58.567314 containerd[1514]: time="2025-03-19T11:46:58.567146901Z" level=info msg="CreateContainer within sandbox \"eebab44337d5ecd857f6e81811096283799d43f4f0826d13bfeffc24b6875351\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 19 11:46:58.567640 containerd[1514]: time="2025-03-19T11:46:58.567616524Z" level=info msg="CreateContainer within sandbox \"ba116da94249e400562e14f84b2cd49e6672e4993e31b1fc305a8d6ad1a4cbd6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 19 11:46:58.669517 kubelet[2207]: W0319 11:46:58.669401 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:58.669517 kubelet[2207]: E0319 11:46:58.669443 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.120:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:58.683119 kubelet[2207]: W0319 11:46:58.683073 2207 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.120:6443: connect: connection refused Mar 19 11:46:58.683168 kubelet[2207]: E0319 
11:46:58.683122 2207 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.120:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.120:6443: connect: connection refused" logger="UnhandledError" Mar 19 11:46:58.744611 containerd[1514]: time="2025-03-19T11:46:58.744547193Z" level=info msg="CreateContainer within sandbox \"075d1d5d30bd31d473ebb3970244f4e10d658126fbbb2f07b0001285d844c70a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"000bb9a7c932477f0bbdec1706a9701b8c034c7bfdd6b875c498f702f65a7a17\"" Mar 19 11:46:58.745374 containerd[1514]: time="2025-03-19T11:46:58.745331477Z" level=info msg="StartContainer for \"000bb9a7c932477f0bbdec1706a9701b8c034c7bfdd6b875c498f702f65a7a17\"" Mar 19 11:46:58.750186 containerd[1514]: time="2025-03-19T11:46:58.750143877Z" level=info msg="CreateContainer within sandbox \"eebab44337d5ecd857f6e81811096283799d43f4f0826d13bfeffc24b6875351\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2346b74fa01c992c905a0f7c190e2038147617151680de83d2cf21dcb877f275\"" Mar 19 11:46:58.750904 containerd[1514]: time="2025-03-19T11:46:58.750866969Z" level=info msg="CreateContainer within sandbox \"ba116da94249e400562e14f84b2cd49e6672e4993e31b1fc305a8d6ad1a4cbd6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7c388020f7ae877403d5abc2e43795828ebb25a998bb197ab3318ec8e375719\"" Mar 19 11:46:58.751509 containerd[1514]: time="2025-03-19T11:46:58.751158842Z" level=info msg="StartContainer for \"2346b74fa01c992c905a0f7c190e2038147617151680de83d2cf21dcb877f275\"" Mar 19 11:46:58.751509 containerd[1514]: time="2025-03-19T11:46:58.751319480Z" level=info msg="StartContainer for \"e7c388020f7ae877403d5abc2e43795828ebb25a998bb197ab3318ec8e375719\"" Mar 19 11:46:58.779400 systemd[1]: Started cri-containerd-000bb9a7c932477f0bbdec1706a9701b8c034c7bfdd6b875c498f702f65a7a17.scope - libcontainer container 000bb9a7c932477f0bbdec1706a9701b8c034c7bfdd6b875c498f702f65a7a17. Mar 19 11:46:58.784190 systemd[1]: Started cri-containerd-2346b74fa01c992c905a0f7c190e2038147617151680de83d2cf21dcb877f275.scope - libcontainer container 2346b74fa01c992c905a0f7c190e2038147617151680de83d2cf21dcb877f275. Mar 19 11:46:58.785948 systemd[1]: Started cri-containerd-e7c388020f7ae877403d5abc2e43795828ebb25a998bb197ab3318ec8e375719.scope - libcontainer container e7c388020f7ae877403d5abc2e43795828ebb25a998bb197ab3318ec8e375719. 
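[Editor's note] The containerd entries above trace the order the kubelet drives for each static pod: RunPodSandbox returns a sandbox id, CreateContainer within that sandbox returns a container id, and StartContainer brings it up (the "returns successfully" lines follow below). The sketch here mirrors that sequencing against a hypothetical, stripped-down runtime interface; it is not the real CRI gRPC client, only an illustration of the call order.

package main

import "fmt"

// runtime is a hypothetical, minimal stand-in for a CRI runtime service.
type runtime interface {
	RunPodSandbox(podName string) (sandboxID string, err error)
	CreateContainer(sandboxID, containerName string) (containerID string, err error)
	StartContainer(containerID string) error
}

type fakeRuntime struct{ n int }

func (f *fakeRuntime) RunPodSandbox(pod string) (string, error) {
	f.n++
	return fmt.Sprintf("sandbox-%d(%s)", f.n, pod), nil
}

func (f *fakeRuntime) CreateContainer(sb, name string) (string, error) {
	f.n++
	return fmt.Sprintf("container-%d(%s in %s)", f.n, name, sb), nil
}

func (f *fakeRuntime) StartContainer(id string) error {
	fmt.Println("started", id)
	return nil
}

// startStaticPod follows the same sandbox -> container -> start order seen
// in the containerd entries above.
func startStaticPod(r runtime, pod, container string) error {
	sb, err := r.RunPodSandbox(pod)
	if err != nil {
		return err
	}
	id, err := r.CreateContainer(sb, container)
	if err != nil {
		return err
	}
	return r.StartContainer(id)
}

func main() {
	r := &fakeRuntime{}
	for _, p := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		if err := startStaticPod(r, p+"-localhost", p); err != nil {
			fmt.Println("error:", err)
		}
	}
}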
Mar 19 11:46:58.821985 containerd[1514]: time="2025-03-19T11:46:58.821950351Z" level=info msg="StartContainer for \"000bb9a7c932477f0bbdec1706a9701b8c034c7bfdd6b875c498f702f65a7a17\" returns successfully" Mar 19 11:46:58.839492 containerd[1514]: time="2025-03-19T11:46:58.839443164Z" level=info msg="StartContainer for \"2346b74fa01c992c905a0f7c190e2038147617151680de83d2cf21dcb877f275\" returns successfully" Mar 19 11:46:58.839492 containerd[1514]: time="2025-03-19T11:46:58.839512996Z" level=info msg="StartContainer for \"e7c388020f7ae877403d5abc2e43795828ebb25a998bb197ab3318ec8e375719\" returns successfully" Mar 19 11:46:58.900548 kubelet[2207]: I0319 11:46:58.900489 2207 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:46:59.140363 kubelet[2207]: E0319 11:46:59.140320 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:59.140516 kubelet[2207]: E0319 11:46:59.140431 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:59.142581 kubelet[2207]: E0319 11:46:59.142556 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:59.142666 kubelet[2207]: E0319 11:46:59.142645 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:59.149481 kubelet[2207]: E0319 11:46:59.149451 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:46:59.149592 kubelet[2207]: E0319 11:46:59.149567 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:46:59.947246 kubelet[2207]: I0319 11:46:59.947130 2207 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 19 11:46:59.947246 kubelet[2207]: E0319 11:46:59.947172 2207 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 19 11:46:59.950518 kubelet[2207]: E0319 11:46:59.950487 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.051346 kubelet[2207]: E0319 11:47:00.051294 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.147153 kubelet[2207]: E0319 11:47:00.147122 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:47:00.147342 kubelet[2207]: E0319 11:47:00.147271 2207 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 19 11:47:00.147342 kubelet[2207]: E0319 11:47:00.147278 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:00.147431 kubelet[2207]: E0319 11:47:00.147410 2207 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:00.152286 kubelet[2207]: E0319 11:47:00.152231 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.253334 kubelet[2207]: E0319 11:47:00.253179 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.353860 kubelet[2207]: E0319 11:47:00.353792 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.454742 kubelet[2207]: E0319 11:47:00.454686 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.555369 kubelet[2207]: E0319 11:47:00.555319 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.655988 kubelet[2207]: E0319 11:47:00.655894 2207 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:00.716031 kubelet[2207]: I0319 11:47:00.715964 2207 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:00.721574 kubelet[2207]: E0319 11:47:00.721547 2207 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:00.721574 kubelet[2207]: I0319 11:47:00.721571 2207 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:00.723960 kubelet[2207]: E0319 11:47:00.723923 2207 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:00.723960 kubelet[2207]: I0319 11:47:00.723944 2207 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 19 11:47:00.725703 kubelet[2207]: E0319 11:47:00.725660 2207 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 19 11:47:01.106307 kubelet[2207]: I0319 11:47:01.106229 2207 apiserver.go:52] "Watching apiserver" Mar 19 11:47:01.115149 kubelet[2207]: I0319 11:47:01.115121 2207 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:47:01.479514 kubelet[2207]: I0319 11:47:01.479383 2207 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:01.486129 kubelet[2207]: E0319 11:47:01.486091 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:02.052925 systemd[1]: Reload requested from client PID 2489 ('systemctl') (unit session-5.scope)... Mar 19 11:47:02.052942 systemd[1]: Reloading... Mar 19 11:47:02.134306 zram_generator::config[2536]: No configuration found. 
Mar 19 11:47:02.149294 kubelet[2207]: E0319 11:47:02.149243 2207 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:02.244624 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 19 11:47:02.364268 systemd[1]: Reloading finished in 310 ms. Mar 19 11:47:02.389420 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:02.401813 systemd[1]: kubelet.service: Deactivated successfully. Mar 19 11:47:02.402165 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:02.402223 systemd[1]: kubelet.service: Consumed 966ms CPU time, 126.3M memory peak. Mar 19 11:47:02.412494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 19 11:47:02.584590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 19 11:47:02.589924 (kubelet)[2578]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 19 11:47:02.630118 kubelet[2578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:47:02.630118 kubelet[2578]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 19 11:47:02.630118 kubelet[2578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 19 11:47:02.630118 kubelet[2578]: I0319 11:47:02.629657 2578 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 19 11:47:02.636714 kubelet[2578]: I0319 11:47:02.636687 2578 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Mar 19 11:47:02.636714 kubelet[2578]: I0319 11:47:02.636707 2578 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 19 11:47:02.636920 kubelet[2578]: I0319 11:47:02.636904 2578 server.go:954] "Client rotation is on, will bootstrap in background" Mar 19 11:47:02.637928 kubelet[2578]: I0319 11:47:02.637908 2578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 19 11:47:02.639969 kubelet[2578]: I0319 11:47:02.639942 2578 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 19 11:47:02.646976 kubelet[2578]: E0319 11:47:02.646902 2578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 19 11:47:02.646976 kubelet[2578]: I0319 11:47:02.646960 2578 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Mar 19 11:47:02.651860 kubelet[2578]: I0319 11:47:02.651835 2578 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 19 11:47:02.652066 kubelet[2578]: I0319 11:47:02.652029 2578 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 19 11:47:02.652217 kubelet[2578]: I0319 11:47:02.652056 2578 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 19 11:47:02.652324 kubelet[2578]: I0319 11:47:02.652216 2578 topology_manager.go:138] "Creating topology manager with none policy" Mar 19 11:47:02.652324 kubelet[2578]: I0319 11:47:02.652225 2578 container_manager_linux.go:304] "Creating device plugin manager" Mar 19 11:47:02.652324 kubelet[2578]: I0319 11:47:02.652277 2578 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:47:02.652456 kubelet[2578]: I0319 11:47:02.652426 2578 kubelet.go:446] "Attempting to sync node with API server" Mar 19 11:47:02.652456 kubelet[2578]: I0319 11:47:02.652443 2578 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 19 11:47:02.652456 kubelet[2578]: I0319 11:47:02.652460 2578 kubelet.go:352] "Adding apiserver pod source" Mar 19 11:47:02.652581 kubelet[2578]: I0319 11:47:02.652471 2578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 19 11:47:02.653417 kubelet[2578]: I0319 11:47:02.653388 2578 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 19 11:47:02.653949 kubelet[2578]: I0319 11:47:02.653925 2578 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.654472 2578 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.654504 2578 server.go:1287] "Started kubelet" Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.654609 2578 
server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.654826 2578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.655535 2578 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 19 11:47:02.655756 kubelet[2578]: I0319 11:47:02.655611 2578 server.go:490] "Adding debug handlers to kubelet server" Mar 19 11:47:02.657512 kubelet[2578]: I0319 11:47:02.657492 2578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 19 11:47:02.657629 kubelet[2578]: E0319 11:47:02.657585 2578 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 19 11:47:02.657676 kubelet[2578]: I0319 11:47:02.657639 2578 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 19 11:47:02.657676 kubelet[2578]: I0319 11:47:02.657647 2578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 19 11:47:02.661354 kubelet[2578]: I0319 11:47:02.661322 2578 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 19 11:47:02.661516 kubelet[2578]: I0319 11:47:02.661498 2578 reconciler.go:26] "Reconciler: start to sync state" Mar 19 11:47:02.669194 kubelet[2578]: I0319 11:47:02.668573 2578 factory.go:221] Registration of the systemd container factory successfully Mar 19 11:47:02.669194 kubelet[2578]: I0319 11:47:02.668951 2578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 19 11:47:02.672521 kubelet[2578]: I0319 11:47:02.672484 2578 factory.go:221] Registration of the containerd container factory successfully Mar 19 11:47:02.672796 kubelet[2578]: E0319 11:47:02.672588 2578 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 19 11:47:02.678031 kubelet[2578]: I0319 11:47:02.677983 2578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 19 11:47:02.679406 kubelet[2578]: I0319 11:47:02.679380 2578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 19 11:47:02.679459 kubelet[2578]: I0319 11:47:02.679411 2578 status_manager.go:227] "Starting to sync pod status with apiserver" Mar 19 11:47:02.679459 kubelet[2578]: I0319 11:47:02.679431 2578 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 19 11:47:02.679459 kubelet[2578]: I0319 11:47:02.679439 2578 kubelet.go:2388] "Starting kubelet main sync loop" Mar 19 11:47:02.679534 kubelet[2578]: E0319 11:47:02.679483 2578 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 19 11:47:02.707464 kubelet[2578]: I0319 11:47:02.707430 2578 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 19 11:47:02.707464 kubelet[2578]: I0319 11:47:02.707446 2578 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 19 11:47:02.707464 kubelet[2578]: I0319 11:47:02.707463 2578 state_mem.go:36] "Initialized new in-memory state store" Mar 19 11:47:02.707634 kubelet[2578]: I0319 11:47:02.707591 2578 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 19 11:47:02.707634 kubelet[2578]: I0319 11:47:02.707601 2578 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 19 11:47:02.707634 kubelet[2578]: I0319 11:47:02.707616 2578 policy_none.go:49] "None policy: Start" Mar 19 11:47:02.707634 kubelet[2578]: I0319 11:47:02.707625 2578 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 19 11:47:02.707634 kubelet[2578]: I0319 11:47:02.707634 2578 state_mem.go:35] "Initializing new in-memory state store" Mar 19 11:47:02.707744 kubelet[2578]: I0319 11:47:02.707721 2578 state_mem.go:75] "Updated machine memory state" Mar 19 11:47:02.712320 kubelet[2578]: I0319 11:47:02.712159 2578 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 19 11:47:02.712583 kubelet[2578]: I0319 11:47:02.712468 2578 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 19 11:47:02.712583 kubelet[2578]: I0319 11:47:02.712490 2578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 19 11:47:02.712804 kubelet[2578]: I0319 11:47:02.712786 2578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 19 11:47:02.714147 kubelet[2578]: E0319 11:47:02.714108 2578 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 19 11:47:02.780949 kubelet[2578]: I0319 11:47:02.780900 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 19 11:47:02.781638 kubelet[2578]: I0319 11:47:02.780922 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.781638 kubelet[2578]: I0319 11:47:02.780964 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:02.788103 kubelet[2578]: E0319 11:47:02.788064 2578 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:02.817105 kubelet[2578]: I0319 11:47:02.817074 2578 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Mar 19 11:47:02.825487 kubelet[2578]: I0319 11:47:02.825442 2578 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Mar 19 11:47:02.825678 kubelet[2578]: I0319 11:47:02.825550 2578 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Mar 19 11:47:02.962789 kubelet[2578]: I0319 11:47:02.962628 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:02.962789 kubelet[2578]: I0319 11:47:02.962668 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:02.962789 kubelet[2578]: I0319 11:47:02.962688 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/51cf42a883f05cc547af55183cab3ae7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"51cf42a883f05cc547af55183cab3ae7\") " pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:02.962789 kubelet[2578]: I0319 11:47:02.962709 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.962789 kubelet[2578]: I0319 11:47:02.962724 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.963070 kubelet[2578]: I0319 11:47:02.962738 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.963070 kubelet[2578]: I0319 11:47:02.962752 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.963070 kubelet[2578]: I0319 11:47:02.962768 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost" Mar 19 11:47:02.963070 kubelet[2578]: I0319 11:47:02.962787 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost" Mar 19 11:47:03.087113 kubelet[2578]: E0319 11:47:03.087074 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.088068 kubelet[2578]: E0319 11:47:03.088037 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.089312 kubelet[2578]: E0319 11:47:03.089284 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.653510 kubelet[2578]: I0319 11:47:03.653451 2578 apiserver.go:52] "Watching apiserver" Mar 19 11:47:03.662376 kubelet[2578]: I0319 11:47:03.662337 2578 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 19 11:47:03.694040 kubelet[2578]: I0319 11:47:03.693205 2578 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:03.694040 kubelet[2578]: E0319 11:47:03.693656 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.694040 kubelet[2578]: E0319 11:47:03.693786 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.699963 kubelet[2578]: E0319 11:47:03.699890 2578 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 19 11:47:03.700209 kubelet[2578]: E0319 11:47:03.700049 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:03.723040 kubelet[2578]: I0319 11:47:03.722307 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.722288046 
podStartE2EDuration="1.722288046s" podCreationTimestamp="2025-03-19 11:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:03.713784123 +0000 UTC m=+1.119901804" watchObservedRunningTime="2025-03-19 11:47:03.722288046 +0000 UTC m=+1.128405727" Mar 19 11:47:03.723040 kubelet[2578]: I0319 11:47:03.722430 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.722424923 podStartE2EDuration="2.722424923s" podCreationTimestamp="2025-03-19 11:47:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:03.72155103 +0000 UTC m=+1.127668721" watchObservedRunningTime="2025-03-19 11:47:03.722424923 +0000 UTC m=+1.128542604" Mar 19 11:47:03.728317 kubelet[2578]: I0319 11:47:03.728231 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7282106270000002 podStartE2EDuration="1.728210627s" podCreationTimestamp="2025-03-19 11:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:03.728030475 +0000 UTC m=+1.134148166" watchObservedRunningTime="2025-03-19 11:47:03.728210627 +0000 UTC m=+1.134328308" Mar 19 11:47:04.296224 sudo[1651]: pam_unix(sudo:session): session closed for user root Mar 19 11:47:04.297592 sshd[1650]: Connection closed by 10.0.0.1 port 45466 Mar 19 11:47:04.297925 sshd-session[1647]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:04.302391 systemd[1]: sshd@4-10.0.0.120:22-10.0.0.1:45466.service: Deactivated successfully. Mar 19 11:47:04.304652 systemd[1]: session-5.scope: Deactivated successfully. Mar 19 11:47:04.304885 systemd[1]: session-5.scope: Consumed 4.599s CPU time, 223.6M memory peak. Mar 19 11:47:04.306452 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit. Mar 19 11:47:04.307527 systemd-logind[1495]: Removed session 5. Mar 19 11:47:04.694951 kubelet[2578]: E0319 11:47:04.694826 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:04.695421 kubelet[2578]: E0319 11:47:04.694987 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:05.696279 kubelet[2578]: E0319 11:47:05.696221 2578 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 19 11:47:07.647831 kubelet[2578]: I0319 11:47:07.647793 2578 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 19 11:47:07.648328 kubelet[2578]: I0319 11:47:07.648273 2578 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 19 11:47:07.648374 containerd[1514]: time="2025-03-19T11:47:07.648084728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
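[Editor's note] The kuberuntime_manager.go:1702 and kubelet_network.go:61 entries above record the node being handed PodCIDR 192.168.0.0/24, which containerd will apply once a CNI config is dropped in (the flannel pod set up just below is what provides it). A stdlib-only Go sketch that parses that CIDR and counts its addresses; the 2^(32-prefix) arithmetic is shown only to make the /24 concrete.

package main

import (
	"fmt"
	"net"
)

func main() {
	// PodCIDR value taken from the kubelet_network.go:61 entry above.
	_, cidr, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := cidr.Mask.Size()
	total := 1 << uint(bits-ones) // 2^(32-24) = 256 addresses in the block
	fmt.Printf("pod CIDR %s: %d addresses (network %s)\n", cidr, total, cidr.IP)
	// In practice the network address and any CNI reservations are not
	// usable for pods, so the usable count is somewhat lower.
}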
Mar 19 11:47:08.383863 systemd[1]: Created slice kubepods-besteffort-poda1345662_0368_4fce_a971_82eba7cb6ed6.slice - libcontainer container kubepods-besteffort-poda1345662_0368_4fce_a971_82eba7cb6ed6.slice. Mar 19 11:47:08.396072 kubelet[2578]: I0319 11:47:08.395959 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdmxv\" (UniqueName: \"kubernetes.io/projected/a1345662-0368-4fce-a971-82eba7cb6ed6-kube-api-access-gdmxv\") pod \"kube-proxy-lwg7w\" (UID: \"a1345662-0368-4fce-a971-82eba7cb6ed6\") " pod="kube-system/kube-proxy-lwg7w" Mar 19 11:47:08.396072 kubelet[2578]: I0319 11:47:08.395999 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/8c747c51-14ac-408d-8134-8bf9c76641b6-cni-plugin\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.396072 kubelet[2578]: I0319 11:47:08.396021 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/8c747c51-14ac-408d-8134-8bf9c76641b6-flannel-cfg\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.396072 kubelet[2578]: I0319 11:47:08.396040 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a1345662-0368-4fce-a971-82eba7cb6ed6-kube-proxy\") pod \"kube-proxy-lwg7w\" (UID: \"a1345662-0368-4fce-a971-82eba7cb6ed6\") " pod="kube-system/kube-proxy-lwg7w" Mar 19 11:47:08.396072 kubelet[2578]: I0319 11:47:08.396060 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8c747c51-14ac-408d-8134-8bf9c76641b6-run\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.396341 kubelet[2578]: I0319 11:47:08.396080 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mf64\" (UniqueName: \"kubernetes.io/projected/8c747c51-14ac-408d-8134-8bf9c76641b6-kube-api-access-5mf64\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.396341 kubelet[2578]: I0319 11:47:08.396103 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/8c747c51-14ac-408d-8134-8bf9c76641b6-cni\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.396341 kubelet[2578]: I0319 11:47:08.396124 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a1345662-0368-4fce-a971-82eba7cb6ed6-xtables-lock\") pod \"kube-proxy-lwg7w\" (UID: \"a1345662-0368-4fce-a971-82eba7cb6ed6\") " pod="kube-system/kube-proxy-lwg7w" Mar 19 11:47:08.396341 kubelet[2578]: I0319 11:47:08.396143 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a1345662-0368-4fce-a971-82eba7cb6ed6-lib-modules\") pod 
\"kube-proxy-lwg7w\" (UID: \"a1345662-0368-4fce-a971-82eba7cb6ed6\") " pod="kube-system/kube-proxy-lwg7w" Mar 19 11:47:08.396341 kubelet[2578]: I0319 11:47:08.396163 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c747c51-14ac-408d-8134-8bf9c76641b6-xtables-lock\") pod \"kube-flannel-ds-h7v49\" (UID: \"8c747c51-14ac-408d-8134-8bf9c76641b6\") " pod="kube-flannel/kube-flannel-ds-h7v49" Mar 19 11:47:08.400913 systemd[1]: Created slice kubepods-burstable-pod8c747c51_14ac_408d_8134_8bf9c76641b6.slice - libcontainer container kubepods-burstable-pod8c747c51_14ac_408d_8134_8bf9c76641b6.slice. Mar 19 11:47:08.698523 containerd[1514]: time="2025-03-19T11:47:08.698381798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwg7w,Uid:a1345662-0368-4fce-a971-82eba7cb6ed6,Namespace:kube-system,Attempt:0,}" Mar 19 11:47:08.704315 containerd[1514]: time="2025-03-19T11:47:08.704271274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-h7v49,Uid:8c747c51-14ac-408d-8134-8bf9c76641b6,Namespace:kube-flannel,Attempt:0,}" Mar 19 11:47:08.738487 containerd[1514]: time="2025-03-19T11:47:08.738375103Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:47:08.738487 containerd[1514]: time="2025-03-19T11:47:08.738444660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:47:08.738487 containerd[1514]: time="2025-03-19T11:47:08.738459334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:08.738653 containerd[1514]: time="2025-03-19T11:47:08.738567269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:08.750202 containerd[1514]: time="2025-03-19T11:47:08.749890687Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:47:08.750202 containerd[1514]: time="2025-03-19T11:47:08.749966389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:47:08.750202 containerd[1514]: time="2025-03-19T11:47:08.749984780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:08.750202 containerd[1514]: time="2025-03-19T11:47:08.750079285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:08.763439 systemd[1]: Started cri-containerd-f241801f8e67fa4b96eb1ff311bff1d309564c03f04831d16ce9225eee0ff931.scope - libcontainer container f241801f8e67fa4b96eb1ff311bff1d309564c03f04831d16ce9225eee0ff931. Mar 19 11:47:08.770381 systemd[1]: Started cri-containerd-0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd.scope - libcontainer container 0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd. 
Mar 19 11:47:08.793116 containerd[1514]: time="2025-03-19T11:47:08.793038525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lwg7w,Uid:a1345662-0368-4fce-a971-82eba7cb6ed6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f241801f8e67fa4b96eb1ff311bff1d309564c03f04831d16ce9225eee0ff931\"" Mar 19 11:47:08.797064 containerd[1514]: time="2025-03-19T11:47:08.797015569Z" level=info msg="CreateContainer within sandbox \"f241801f8e67fa4b96eb1ff311bff1d309564c03f04831d16ce9225eee0ff931\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 19 11:47:08.812693 containerd[1514]: time="2025-03-19T11:47:08.812637169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-h7v49,Uid:8c747c51-14ac-408d-8134-8bf9c76641b6,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\"" Mar 19 11:47:08.818287 containerd[1514]: time="2025-03-19T11:47:08.818208042Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Mar 19 11:47:08.828165 containerd[1514]: time="2025-03-19T11:47:08.828105521Z" level=info msg="CreateContainer within sandbox \"f241801f8e67fa4b96eb1ff311bff1d309564c03f04831d16ce9225eee0ff931\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"387f5a10d913a063d5e534a9fae053bf51e537f2f3b919aac6db89fdce07f84b\"" Mar 19 11:47:08.828692 containerd[1514]: time="2025-03-19T11:47:08.828659630Z" level=info msg="StartContainer for \"387f5a10d913a063d5e534a9fae053bf51e537f2f3b919aac6db89fdce07f84b\"" Mar 19 11:47:08.860436 systemd[1]: Started cri-containerd-387f5a10d913a063d5e534a9fae053bf51e537f2f3b919aac6db89fdce07f84b.scope - libcontainer container 387f5a10d913a063d5e534a9fae053bf51e537f2f3b919aac6db89fdce07f84b. Mar 19 11:47:08.895673 containerd[1514]: time="2025-03-19T11:47:08.895620073Z" level=info msg="StartContainer for \"387f5a10d913a063d5e534a9fae053bf51e537f2f3b919aac6db89fdce07f84b\" returns successfully" Mar 19 11:47:09.825703 kubelet[2578]: I0319 11:47:09.825629 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwg7w" podStartSLOduration=1.825608963 podStartE2EDuration="1.825608963s" podCreationTimestamp="2025-03-19 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:09.825436906 +0000 UTC m=+7.231554587" watchObservedRunningTime="2025-03-19 11:47:09.825608963 +0000 UTC m=+7.231726644" Mar 19 11:47:10.440664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975976207.mount: Deactivated successfully. 
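The pod_startup_latency_tracker entry for kube-proxy-lwg7w above reports podStartSLOduration=1.825608963s, which is just the watch-observed running time minus the pod creation timestamp; nothing is subtracted for image pulls because none happened (the pull timestamps are the zero value). A sketch of that arithmetic with the timestamps copied from the log; illustrative only, not kubelet code:

```go
// Sketch only: reproduces the kube-proxy podStartSLOduration from the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Layout matching Go's default time.Time formatting used in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2025-03-19 11:47:08 +0000 UTC")
	if err != nil {
		panic(err)
	}
	observed, err := time.Parse(layout, "2025-03-19 11:47:09.825608963 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 1.825608963s, matching podStartSLOduration above.
	fmt.Println("podStartSLOduration:", observed.Sub(created))
}
```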
Mar 19 11:47:10.477544 containerd[1514]: time="2025-03-19T11:47:10.477489613Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:10.478299 containerd[1514]: time="2025-03-19T11:47:10.478224824Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Mar 19 11:47:10.479612 containerd[1514]: time="2025-03-19T11:47:10.479575168Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:10.481950 containerd[1514]: time="2025-03-19T11:47:10.481911342Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:10.482679 containerd[1514]: time="2025-03-19T11:47:10.482638425Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 1.663901173s" Mar 19 11:47:10.482679 containerd[1514]: time="2025-03-19T11:47:10.482668853Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Mar 19 11:47:10.484890 containerd[1514]: time="2025-03-19T11:47:10.484860164Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Mar 19 11:47:10.497707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4083490660.mount: Deactivated successfully. Mar 19 11:47:10.498645 containerd[1514]: time="2025-03-19T11:47:10.498610556Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59\"" Mar 19 11:47:10.499679 containerd[1514]: time="2025-03-19T11:47:10.499056842Z" level=info msg="StartContainer for \"af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59\"" Mar 19 11:47:10.532421 systemd[1]: Started cri-containerd-af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59.scope - libcontainer container af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59. Mar 19 11:47:10.562075 systemd[1]: cri-containerd-af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59.scope: Deactivated successfully. Mar 19 11:47:10.562273 containerd[1514]: time="2025-03-19T11:47:10.562157107Z" level=info msg="StartContainer for \"af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59\" returns successfully" Mar 19 11:47:10.582348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59-rootfs.mount: Deactivated successfully. 
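The pull messages above refer to the flannel CNI plugin image in three forms: a tagged reference (docker.io/flannel/flannel-cni-plugin:v1.1.2), a repo digest (…@sha256:bf4b62b1…), and the local image ID (sha256:7a2dcab9…). A purely illustrative Go sketch that splits the tag and digest forms apart; containerd has its own reference parser, and this is not it:

```go
// Sketch only: naive splitting of the image references quoted in the log.
package main

import (
	"fmt"
	"strings"
)

// splitTag separates "repo:tag"; it assumes the reference carries a tag and
// no port in the registry host, which holds for the references in this log.
func splitTag(ref string) (repo, tag string) {
	i := strings.LastIndex(ref, ":")
	return ref[:i], ref[i+1:]
}

func main() {
	repo, tag := splitTag("docker.io/flannel/flannel-cni-plugin:v1.1.2")
	fmt.Println("repo:", repo, "tag:", tag)

	digestRef := "docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443"
	_, digest, _ := strings.Cut(digestRef, "@")
	fmt.Println("digest:", digest)
}
```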
Mar 19 11:47:10.623157 containerd[1514]: time="2025-03-19T11:47:10.623080071Z" level=info msg="shim disconnected" id=af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59 namespace=k8s.io Mar 19 11:47:10.623157 containerd[1514]: time="2025-03-19T11:47:10.623148443Z" level=warning msg="cleaning up after shim disconnected" id=af05991299b87abcf9b38f49246e8191f34efe1ed1d27d65010af42466a10d59 namespace=k8s.io Mar 19 11:47:10.623157 containerd[1514]: time="2025-03-19T11:47:10.623158004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:47:10.708247 containerd[1514]: time="2025-03-19T11:47:10.708097385Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Mar 19 11:47:11.919326 update_engine[1503]: I20250319 11:47:11.919198 1503 update_attempter.cc:509] Updating boot flags... Mar 19 11:47:11.953072 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2969) Mar 19 11:47:11.989584 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2973) Mar 19 11:47:12.025355 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2973) Mar 19 11:47:13.045430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2060575486.mount: Deactivated successfully. Mar 19 11:47:14.975385 containerd[1514]: time="2025-03-19T11:47:14.975319762Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:14.976161 containerd[1514]: time="2025-03-19T11:47:14.976116680Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Mar 19 11:47:14.977329 containerd[1514]: time="2025-03-19T11:47:14.977296476Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:14.980087 containerd[1514]: time="2025-03-19T11:47:14.980054324Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 19 11:47:14.981145 containerd[1514]: time="2025-03-19T11:47:14.981115994Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.272975284s" Mar 19 11:47:14.981206 containerd[1514]: time="2025-03-19T11:47:14.981145168Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Mar 19 11:47:14.983264 containerd[1514]: time="2025-03-19T11:47:14.983215495Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 19 11:47:14.995026 containerd[1514]: time="2025-03-19T11:47:14.994971565Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e\"" Mar 19 
11:47:14.995477 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439411787.mount: Deactivated successfully. Mar 19 11:47:14.996010 containerd[1514]: time="2025-03-19T11:47:14.995543925Z" level=info msg="StartContainer for \"9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e\"" Mar 19 11:47:15.030448 systemd[1]: Started cri-containerd-9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e.scope - libcontainer container 9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e. Mar 19 11:47:15.059325 systemd[1]: cri-containerd-9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e.scope: Deactivated successfully. Mar 19 11:47:15.124447 kubelet[2578]: I0319 11:47:15.124401 2578 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 19 11:47:15.280032 containerd[1514]: time="2025-03-19T11:47:15.279952300Z" level=info msg="StartContainer for \"9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e\" returns successfully" Mar 19 11:47:15.300356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e-rootfs.mount: Deactivated successfully. Mar 19 11:47:15.334790 systemd[1]: Created slice kubepods-burstable-pod453d7428_cd2e_4a1b_9701_b4c6e96c5fca.slice - libcontainer container kubepods-burstable-pod453d7428_cd2e_4a1b_9701_b4c6e96c5fca.slice. Mar 19 11:47:15.339510 systemd[1]: Created slice kubepods-burstable-pod418c0612_fdd2_4d2f_81c7_2ff9ed73bafe.slice - libcontainer container kubepods-burstable-pod418c0612_fdd2_4d2f_81c7_2ff9ed73bafe.slice. Mar 19 11:47:15.344222 kubelet[2578]: I0319 11:47:15.344192 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/418c0612-fdd2-4d2f-81c7-2ff9ed73bafe-config-volume\") pod \"coredns-668d6bf9bc-ftqwz\" (UID: \"418c0612-fdd2-4d2f-81c7-2ff9ed73bafe\") " pod="kube-system/coredns-668d6bf9bc-ftqwz" Mar 19 11:47:15.344222 kubelet[2578]: I0319 11:47:15.344222 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/453d7428-cd2e-4a1b-9701-b4c6e96c5fca-config-volume\") pod \"coredns-668d6bf9bc-gqsp9\" (UID: \"453d7428-cd2e-4a1b-9701-b4c6e96c5fca\") " pod="kube-system/coredns-668d6bf9bc-gqsp9" Mar 19 11:47:15.344406 kubelet[2578]: I0319 11:47:15.344241 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctwt\" (UniqueName: \"kubernetes.io/projected/418c0612-fdd2-4d2f-81c7-2ff9ed73bafe-kube-api-access-kctwt\") pod \"coredns-668d6bf9bc-ftqwz\" (UID: \"418c0612-fdd2-4d2f-81c7-2ff9ed73bafe\") " pod="kube-system/coredns-668d6bf9bc-ftqwz" Mar 19 11:47:15.344406 kubelet[2578]: I0319 11:47:15.344290 2578 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r29lx\" (UniqueName: \"kubernetes.io/projected/453d7428-cd2e-4a1b-9701-b4c6e96c5fca-kube-api-access-r29lx\") pod \"coredns-668d6bf9bc-gqsp9\" (UID: \"453d7428-cd2e-4a1b-9701-b4c6e96c5fca\") " pod="kube-system/coredns-668d6bf9bc-gqsp9" Mar 19 11:47:15.384137 containerd[1514]: time="2025-03-19T11:47:15.383997146Z" level=info msg="shim disconnected" id=9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e namespace=k8s.io Mar 19 11:47:15.384137 containerd[1514]: time="2025-03-19T11:47:15.384081799Z" level=warning msg="cleaning up 
after shim disconnected" id=9bd33cc956d55953a97aefbab1ce6be05c2895a903f962fc495192743817465e namespace=k8s.io Mar 19 11:47:15.384137 containerd[1514]: time="2025-03-19T11:47:15.384101891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 19 11:47:15.638460 containerd[1514]: time="2025-03-19T11:47:15.638344221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqsp9,Uid:453d7428-cd2e-4a1b-9701-b4c6e96c5fca,Namespace:kube-system,Attempt:0,}" Mar 19 11:47:15.642062 containerd[1514]: time="2025-03-19T11:47:15.642016620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftqwz,Uid:418c0612-fdd2-4d2f-81c7-2ff9ed73bafe,Namespace:kube-system,Attempt:0,}" Mar 19 11:47:15.678829 containerd[1514]: time="2025-03-19T11:47:15.678766957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftqwz,Uid:418c0612-fdd2-4d2f-81c7-2ff9ed73bafe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ec057536792fe3d85e1900827341391fdcfe818c0b32fd7184bf79f760577d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:47:15.679084 kubelet[2578]: E0319 11:47:15.679012 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec057536792fe3d85e1900827341391fdcfe818c0b32fd7184bf79f760577d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:47:15.679165 kubelet[2578]: E0319 11:47:15.679099 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec057536792fe3d85e1900827341391fdcfe818c0b32fd7184bf79f760577d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-ftqwz" Mar 19 11:47:15.679165 kubelet[2578]: E0319 11:47:15.679125 2578 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ec057536792fe3d85e1900827341391fdcfe818c0b32fd7184bf79f760577d6\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-ftqwz" Mar 19 11:47:15.679233 kubelet[2578]: E0319 11:47:15.679181 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-ftqwz_kube-system(418c0612-fdd2-4d2f-81c7-2ff9ed73bafe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-ftqwz_kube-system(418c0612-fdd2-4d2f-81c7-2ff9ed73bafe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ec057536792fe3d85e1900827341391fdcfe818c0b32fd7184bf79f760577d6\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-ftqwz" podUID="418c0612-fdd2-4d2f-81c7-2ff9ed73bafe" Mar 19 11:47:15.679698 containerd[1514]: time="2025-03-19T11:47:15.679649617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqsp9,Uid:453d7428-cd2e-4a1b-9701-b4c6e96c5fca,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"2a470e8184fcfc117584ba1a1201f4ba1b818e6ed2ed962e88a7c09caa87eafe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:47:15.679817 kubelet[2578]: E0319 11:47:15.679791 2578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a470e8184fcfc117584ba1a1201f4ba1b818e6ed2ed962e88a7c09caa87eafe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Mar 19 11:47:15.679883 kubelet[2578]: E0319 11:47:15.679828 2578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a470e8184fcfc117584ba1a1201f4ba1b818e6ed2ed962e88a7c09caa87eafe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-gqsp9" Mar 19 11:47:15.679883 kubelet[2578]: E0319 11:47:15.679849 2578 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a470e8184fcfc117584ba1a1201f4ba1b818e6ed2ed962e88a7c09caa87eafe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-gqsp9" Mar 19 11:47:15.679957 kubelet[2578]: E0319 11:47:15.679894 2578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gqsp9_kube-system(453d7428-cd2e-4a1b-9701-b4c6e96c5fca)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gqsp9_kube-system(453d7428-cd2e-4a1b-9701-b4c6e96c5fca)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a470e8184fcfc117584ba1a1201f4ba1b818e6ed2ed962e88a7c09caa87eafe\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-gqsp9" podUID="453d7428-cd2e-4a1b-9701-b4c6e96c5fca" Mar 19 11:47:15.718209 containerd[1514]: time="2025-03-19T11:47:15.718150766Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Mar 19 11:47:15.731578 containerd[1514]: time="2025-03-19T11:47:15.731524298Z" level=info msg="CreateContainer within sandbox \"0399d3b6cfc2adcdb8f716bec629395f62dc24dcd09c4bd2b8742c9dc8a04fdd\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"9ea1752d871bf18e1da9e8a6c4669a3d5d5d55babaa9c38188094b0bd98ac7b3\"" Mar 19 11:47:15.732213 containerd[1514]: time="2025-03-19T11:47:15.732188188Z" level=info msg="StartContainer for \"9ea1752d871bf18e1da9e8a6c4669a3d5d5d55babaa9c38188094b0bd98ac7b3\"" Mar 19 11:47:15.758456 systemd[1]: Started cri-containerd-9ea1752d871bf18e1da9e8a6c4669a3d5d5d55babaa9c38188094b0bd98ac7b3.scope - libcontainer container 9ea1752d871bf18e1da9e8a6c4669a3d5d5d55babaa9c38188094b0bd98ac7b3. 
Mar 19 11:47:15.788340 containerd[1514]: time="2025-03-19T11:47:15.788224170Z" level=info msg="StartContainer for \"9ea1752d871bf18e1da9e8a6c4669a3d5d5d55babaa9c38188094b0bd98ac7b3\" returns successfully" Mar 19 11:47:16.730638 kubelet[2578]: I0319 11:47:16.730549 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-h7v49" podStartSLOduration=2.565872261 podStartE2EDuration="8.730528598s" podCreationTimestamp="2025-03-19 11:47:08 +0000 UTC" firstStartedPulling="2025-03-19 11:47:08.817389613 +0000 UTC m=+6.223507294" lastFinishedPulling="2025-03-19 11:47:14.98204595 +0000 UTC m=+12.388163631" observedRunningTime="2025-03-19 11:47:16.730346758 +0000 UTC m=+14.136464439" watchObservedRunningTime="2025-03-19 11:47:16.730528598 +0000 UTC m=+14.136646279" Mar 19 11:47:16.831246 systemd-networkd[1430]: flannel.1: Link UP Mar 19 11:47:16.831273 systemd-networkd[1430]: flannel.1: Gained carrier Mar 19 11:47:17.969440 systemd-networkd[1430]: flannel.1: Gained IPv6LL Mar 19 11:47:25.082484 systemd[1]: Started sshd@5-10.0.0.120:22-10.0.0.1:37980.service - OpenSSH per-connection server daemon (10.0.0.1:37980). Mar 19 11:47:25.122568 sshd[3270]: Accepted publickey for core from 10.0.0.1 port 37980 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:25.124712 sshd-session[3270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:25.129624 systemd-logind[1495]: New session 6 of user core. Mar 19 11:47:25.141664 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 19 11:47:25.269958 sshd[3272]: Connection closed by 10.0.0.1 port 37980 Mar 19 11:47:25.270391 sshd-session[3270]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:25.275243 systemd[1]: sshd@5-10.0.0.120:22-10.0.0.1:37980.service: Deactivated successfully. Mar 19 11:47:25.277683 systemd[1]: session-6.scope: Deactivated successfully. Mar 19 11:47:25.278568 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit. Mar 19 11:47:25.279561 systemd-logind[1495]: Removed session 6. 
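For kube-flannel-ds-h7v49 the tracker above reports two numbers: podStartE2EDuration=8.730528598s (creation to observed running) and podStartSLOduration=2.565872261s, which is the same interval with the image pull window (firstStartedPulling to lastFinishedPulling) subtracted. A sketch of that arithmetic with the timestamps copied from the log entry; illustrative only:

```go
// Sketch only: reproduces the kube-flannel E2E and SLO startup durations.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-03-19 11:47:08 +0000 UTC")
	firstPull := parse("2025-03-19 11:47:08.817389613 +0000 UTC")
	lastPull := parse("2025-03-19 11:47:14.98204595 +0000 UTC")
	observed := parse("2025-03-19 11:47:16.730528598 +0000 UTC")

	e2e := observed.Sub(created)         // 8.730528598s, the logged E2E duration
	slo := e2e - lastPull.Sub(firstPull) // 2.565872261s, the pull window excluded
	fmt.Println("podStartE2EDuration:", e2e, "podStartSLOduration:", slo)
}
```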
Mar 19 11:47:28.683228 containerd[1514]: time="2025-03-19T11:47:28.683172478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqsp9,Uid:453d7428-cd2e-4a1b-9701-b4c6e96c5fca,Namespace:kube-system,Attempt:0,}" Mar 19 11:47:28.985342 systemd-networkd[1430]: cni0: Link UP Mar 19 11:47:28.985353 systemd-networkd[1430]: cni0: Gained carrier Mar 19 11:47:28.990089 systemd-networkd[1430]: cni0: Lost carrier Mar 19 11:47:28.996468 systemd-networkd[1430]: vethb0a21b4c: Link UP Mar 19 11:47:28.998975 kernel: cni0: port 1(vethb0a21b4c) entered blocking state Mar 19 11:47:28.999028 kernel: cni0: port 1(vethb0a21b4c) entered disabled state Mar 19 11:47:28.999048 kernel: vethb0a21b4c: entered allmulticast mode Mar 19 11:47:28.999075 kernel: vethb0a21b4c: entered promiscuous mode Mar 19 11:47:29.000463 kernel: cni0: port 1(vethb0a21b4c) entered blocking state Mar 19 11:47:29.000501 kernel: cni0: port 1(vethb0a21b4c) entered forwarding state Mar 19 11:47:29.004279 kernel: cni0: port 1(vethb0a21b4c) entered disabled state Mar 19 11:47:29.011750 kernel: cni0: port 1(vethb0a21b4c) entered blocking state Mar 19 11:47:29.011827 kernel: cni0: port 1(vethb0a21b4c) entered forwarding state Mar 19 11:47:29.011488 systemd-networkd[1430]: vethb0a21b4c: Gained carrier Mar 19 11:47:29.011709 systemd-networkd[1430]: cni0: Gained carrier Mar 19 11:47:29.013841 containerd[1514]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0001268e8), "name":"cbr0", "type":"bridge"} Mar 19 11:47:29.013841 containerd[1514]: delegateAdd: netconf sent to delegate plugin: Mar 19 11:47:29.046127 containerd[1514]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-19T11:47:29.046016924Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:47:29.046127 containerd[1514]: time="2025-03-19T11:47:29.046095734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:47:29.046127 containerd[1514]: time="2025-03-19T11:47:29.046118200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:29.046375 containerd[1514]: time="2025-03-19T11:47:29.046184334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:29.061050 systemd[1]: run-containerd-runc-k8s.io-fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837-runc.LqyPt3.mount: Deactivated successfully. Mar 19 11:47:29.070399 systemd[1]: Started cri-containerd-fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837.scope - libcontainer container fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837. 
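The delegateAdd lines above show what flannel hands to the bridge CNI plugin for cbr0: host-local IPAM over the node's 192.168.0.0/24 pod CIDR, a route to the wider 192.168.0.0/17 flannel network, and MTU 1450 (consistent with VXLAN overhead on flannel.1). A Go sketch that reconstructs that JSON with illustrative struct names, not flannel's own types:

```go
// Sketch only: rebuilds the delegate netconf JSON printed in the log.
package main

import (
	"encoding/json"
	"fmt"
)

type ipamConfig struct {
	Ranges [][]map[string]string `json:"ranges"`
	Routes []map[string]string   `json:"routes"`
	Type   string                `json:"type"`
}

type bridgeNetConf struct {
	CNIVersion       string     `json:"cniVersion"`
	HairpinMode      bool       `json:"hairpinMode"`
	IPMasq           bool       `json:"ipMasq"`
	IPAM             ipamConfig `json:"ipam"`
	IsDefaultGateway bool       `json:"isDefaultGateway"`
	IsGateway        bool       `json:"isGateway"`
	MTU              int        `json:"mtu"`
	Name             string     `json:"name"`
	Type             string     `json:"type"`
}

func main() {
	conf := bridgeNetConf{
		CNIVersion:  "0.3.1",
		HairpinMode: true,
		IPMasq:      false,
		IPAM: ipamConfig{
			Ranges: [][]map[string]string{{{"subnet": "192.168.0.0/24"}}},
			Routes: []map[string]string{{"dst": "192.168.0.0/17"}},
			Type:   "host-local",
		},
		IsDefaultGateway: true,
		IsGateway:        true,
		MTU:              1450,
		Name:             "cbr0",
		Type:             "bridge",
	}
	out, err := json.Marshal(conf)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // matches the netconf sent to the delegate plugin
}
```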
Mar 19 11:47:29.082218 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:47:29.106476 containerd[1514]: time="2025-03-19T11:47:29.106421334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gqsp9,Uid:453d7428-cd2e-4a1b-9701-b4c6e96c5fca,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837\"" Mar 19 11:47:29.109565 containerd[1514]: time="2025-03-19T11:47:29.109534667Z" level=info msg="CreateContainer within sandbox \"fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:47:29.125229 containerd[1514]: time="2025-03-19T11:47:29.125187039Z" level=info msg="CreateContainer within sandbox \"fe4eb954a672bd33a471b98b469c21b905a2d800545346ef91b721f39ced5837\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"05ea7e722d535b82a340c2af4a057f566e6824b37865b838e173c7e63ff8c40f\"" Mar 19 11:47:29.125921 containerd[1514]: time="2025-03-19T11:47:29.125647385Z" level=info msg="StartContainer for \"05ea7e722d535b82a340c2af4a057f566e6824b37865b838e173c7e63ff8c40f\"" Mar 19 11:47:29.157388 systemd[1]: Started cri-containerd-05ea7e722d535b82a340c2af4a057f566e6824b37865b838e173c7e63ff8c40f.scope - libcontainer container 05ea7e722d535b82a340c2af4a057f566e6824b37865b838e173c7e63ff8c40f. Mar 19 11:47:29.185283 containerd[1514]: time="2025-03-19T11:47:29.184061963Z" level=info msg="StartContainer for \"05ea7e722d535b82a340c2af4a057f566e6824b37865b838e173c7e63ff8c40f\" returns successfully" Mar 19 11:47:29.680817 containerd[1514]: time="2025-03-19T11:47:29.680773923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftqwz,Uid:418c0612-fdd2-4d2f-81c7-2ff9ed73bafe,Namespace:kube-system,Attempt:0,}" Mar 19 11:47:29.701021 systemd-networkd[1430]: vethbe14bfdd: Link UP Mar 19 11:47:29.702605 kernel: cni0: port 2(vethbe14bfdd) entered blocking state Mar 19 11:47:29.702649 kernel: cni0: port 2(vethbe14bfdd) entered disabled state Mar 19 11:47:29.702669 kernel: vethbe14bfdd: entered allmulticast mode Mar 19 11:47:29.704920 kernel: vethbe14bfdd: entered promiscuous mode Mar 19 11:47:29.704974 kernel: cni0: port 2(vethbe14bfdd) entered blocking state Mar 19 11:47:29.705006 kernel: cni0: port 2(vethbe14bfdd) entered forwarding state Mar 19 11:47:29.712458 systemd-networkd[1430]: vethbe14bfdd: Gained carrier Mar 19 11:47:29.714295 containerd[1514]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022938), "name":"cbr0", "type":"bridge"} Mar 19 11:47:29.714295 containerd[1514]: delegateAdd: netconf sent to delegate plugin: Mar 19 11:47:29.737864 containerd[1514]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-03-19T11:47:29.737768496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 19 11:47:29.738035 containerd[1514]: time="2025-03-19T11:47:29.737831705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 19 11:47:29.738035 containerd[1514]: time="2025-03-19T11:47:29.737849410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:29.738035 containerd[1514]: time="2025-03-19T11:47:29.737945265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 19 11:47:29.759439 systemd[1]: Started cri-containerd-98ad08a9450deb26a7e7557f7fbffd35412416f10306eafbdd2eb659b6bd98e3.scope - libcontainer container 98ad08a9450deb26a7e7557f7fbffd35412416f10306eafbdd2eb659b6bd98e3. Mar 19 11:47:29.772175 kubelet[2578]: I0319 11:47:29.772104 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gqsp9" podStartSLOduration=21.772087563 podStartE2EDuration="21.772087563s" podCreationTimestamp="2025-03-19 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:29.762688552 +0000 UTC m=+27.168806233" watchObservedRunningTime="2025-03-19 11:47:29.772087563 +0000 UTC m=+27.178205244" Mar 19 11:47:29.774178 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 19 11:47:29.801526 containerd[1514]: time="2025-03-19T11:47:29.801470954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ftqwz,Uid:418c0612-fdd2-4d2f-81c7-2ff9ed73bafe,Namespace:kube-system,Attempt:0,} returns sandbox id \"98ad08a9450deb26a7e7557f7fbffd35412416f10306eafbdd2eb659b6bd98e3\"" Mar 19 11:47:29.804326 containerd[1514]: time="2025-03-19T11:47:29.804221681Z" level=info msg="CreateContainer within sandbox \"98ad08a9450deb26a7e7557f7fbffd35412416f10306eafbdd2eb659b6bd98e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 19 11:47:29.824267 containerd[1514]: time="2025-03-19T11:47:29.824215592Z" level=info msg="CreateContainer within sandbox \"98ad08a9450deb26a7e7557f7fbffd35412416f10306eafbdd2eb659b6bd98e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f67e3bdfa3ed08d25643fb9aaa6d28e5dd3c4a1aee4908f544657564e8e3c8cd\"" Mar 19 11:47:29.825474 containerd[1514]: time="2025-03-19T11:47:29.824733033Z" level=info msg="StartContainer for \"f67e3bdfa3ed08d25643fb9aaa6d28e5dd3c4a1aee4908f544657564e8e3c8cd\"" Mar 19 11:47:29.851423 systemd[1]: Started cri-containerd-f67e3bdfa3ed08d25643fb9aaa6d28e5dd3c4a1aee4908f544657564e8e3c8cd.scope - libcontainer container f67e3bdfa3ed08d25643fb9aaa6d28e5dd3c4a1aee4908f544657564e8e3c8cd. Mar 19 11:47:29.879910 containerd[1514]: time="2025-03-19T11:47:29.879840223Z" level=info msg="StartContainer for \"f67e3bdfa3ed08d25643fb9aaa6d28e5dd3c4a1aee4908f544657564e8e3c8cd\" returns successfully" Mar 19 11:47:30.285466 systemd[1]: Started sshd@6-10.0.0.120:22-10.0.0.1:37994.service - OpenSSH per-connection server daemon (10.0.0.1:37994). 
Mar 19 11:47:30.327060 sshd[3554]: Accepted publickey for core from 10.0.0.1 port 37994 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:30.328594 sshd-session[3554]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:30.332557 systemd-logind[1495]: New session 7 of user core. Mar 19 11:47:30.342396 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 19 11:47:30.464896 sshd[3556]: Connection closed by 10.0.0.1 port 37994 Mar 19 11:47:30.465332 sshd-session[3554]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:30.469743 systemd[1]: sshd@6-10.0.0.120:22-10.0.0.1:37994.service: Deactivated successfully. Mar 19 11:47:30.471964 systemd[1]: session-7.scope: Deactivated successfully. Mar 19 11:47:30.472792 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. Mar 19 11:47:30.473781 systemd-logind[1495]: Removed session 7. Mar 19 11:47:30.577460 systemd-networkd[1430]: vethb0a21b4c: Gained IPv6LL Mar 19 11:47:30.705423 systemd-networkd[1430]: cni0: Gained IPv6LL Mar 19 11:47:30.897494 systemd-networkd[1430]: vethbe14bfdd: Gained IPv6LL Mar 19 11:47:31.659064 kubelet[2578]: I0319 11:47:31.658612 2578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ftqwz" podStartSLOduration=23.658589313 podStartE2EDuration="23.658589313s" podCreationTimestamp="2025-03-19 11:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-19 11:47:30.997945874 +0000 UTC m=+28.404063555" watchObservedRunningTime="2025-03-19 11:47:31.658589313 +0000 UTC m=+29.064706984" Mar 19 11:47:35.482460 systemd[1]: Started sshd@7-10.0.0.120:22-10.0.0.1:36788.service - OpenSSH per-connection server daemon (10.0.0.1:36788). Mar 19 11:47:35.520063 sshd[3599]: Accepted publickey for core from 10.0.0.1 port 36788 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:35.521485 sshd-session[3599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:35.525685 systemd-logind[1495]: New session 8 of user core. Mar 19 11:47:35.532394 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 19 11:47:35.637456 sshd[3601]: Connection closed by 10.0.0.1 port 36788 Mar 19 11:47:35.637841 sshd-session[3599]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:35.642078 systemd[1]: sshd@7-10.0.0.120:22-10.0.0.1:36788.service: Deactivated successfully. Mar 19 11:47:35.644265 systemd[1]: session-8.scope: Deactivated successfully. Mar 19 11:47:35.644992 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. Mar 19 11:47:35.645850 systemd-logind[1495]: Removed session 8. Mar 19 11:47:40.653124 systemd[1]: Started sshd@8-10.0.0.120:22-10.0.0.1:36790.service - OpenSSH per-connection server daemon (10.0.0.1:36790). Mar 19 11:47:40.692171 sshd[3639]: Accepted publickey for core from 10.0.0.1 port 36790 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:40.693672 sshd-session[3639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:40.697967 systemd-logind[1495]: New session 9 of user core. Mar 19 11:47:40.709503 systemd[1]: Started session-9.scope - Session 9 of User core. 
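From here to the end of the log every SSH connection follows the same shape: a per-connection service named from the listen and peer endpoints (e.g. sshd@7-10.0.0.120:22-10.0.0.1:36788.service), a session-N.scope while the user core is logged in, and deactivation of both when the connection closes. A small sketch of that unit-name pattern, inferred only from the names visible in this log:

```go
// Sketch only: rebuilds the per-connection sshd unit names seen above.
package main

import "fmt"

// sshdUnitName follows the sshd@<n>-<listen>:<port>-<peer>:<port>.service
// pattern from the log; the helper itself is illustrative.
func sshdUnitName(n int, listenAddr string, listenPort int, peerAddr string, peerPort int) string {
	return fmt.Sprintf("sshd@%d-%s:%d-%s:%d.service", n, listenAddr, listenPort, peerAddr, peerPort)
}

func main() {
	fmt.Println(sshdUnitName(7, "10.0.0.120", 22, "10.0.0.1", 36788))
}
```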
Mar 19 11:47:40.830698 sshd[3641]: Connection closed by 10.0.0.1 port 36790 Mar 19 11:47:40.831107 sshd-session[3639]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:40.845243 systemd[1]: sshd@8-10.0.0.120:22-10.0.0.1:36790.service: Deactivated successfully. Mar 19 11:47:40.847129 systemd[1]: session-9.scope: Deactivated successfully. Mar 19 11:47:40.849315 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. Mar 19 11:47:40.854579 systemd[1]: Started sshd@9-10.0.0.120:22-10.0.0.1:36792.service - OpenSSH per-connection server daemon (10.0.0.1:36792). Mar 19 11:47:40.855689 systemd-logind[1495]: Removed session 9. Mar 19 11:47:40.896550 sshd[3654]: Accepted publickey for core from 10.0.0.1 port 36792 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:40.898747 sshd-session[3654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:40.904754 systemd-logind[1495]: New session 10 of user core. Mar 19 11:47:40.915445 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 19 11:47:41.067869 sshd[3657]: Connection closed by 10.0.0.1 port 36792 Mar 19 11:47:41.068509 sshd-session[3654]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:41.080594 systemd[1]: sshd@9-10.0.0.120:22-10.0.0.1:36792.service: Deactivated successfully. Mar 19 11:47:41.083105 systemd[1]: session-10.scope: Deactivated successfully. Mar 19 11:47:41.085897 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. Mar 19 11:47:41.095716 systemd[1]: Started sshd@10-10.0.0.120:22-10.0.0.1:36794.service - OpenSSH per-connection server daemon (10.0.0.1:36794). Mar 19 11:47:41.096607 systemd-logind[1495]: Removed session 10. Mar 19 11:47:41.128548 sshd[3668]: Accepted publickey for core from 10.0.0.1 port 36794 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:41.130817 sshd-session[3668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:41.136076 systemd-logind[1495]: New session 11 of user core. Mar 19 11:47:41.150453 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 19 11:47:41.267542 sshd[3671]: Connection closed by 10.0.0.1 port 36794 Mar 19 11:47:41.268192 sshd-session[3668]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:41.273672 systemd[1]: sshd@10-10.0.0.120:22-10.0.0.1:36794.service: Deactivated successfully. Mar 19 11:47:41.275762 systemd[1]: session-11.scope: Deactivated successfully. Mar 19 11:47:41.276535 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Mar 19 11:47:41.277457 systemd-logind[1495]: Removed session 11. Mar 19 11:47:46.281715 systemd[1]: Started sshd@11-10.0.0.120:22-10.0.0.1:54370.service - OpenSSH per-connection server daemon (10.0.0.1:54370). Mar 19 11:47:46.321423 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 54370 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:46.323429 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:46.328805 systemd-logind[1495]: New session 12 of user core. Mar 19 11:47:46.341550 systemd[1]: Started session-12.scope - Session 12 of User core. 
Mar 19 11:47:46.455180 sshd[3708]: Connection closed by 10.0.0.1 port 54370 Mar 19 11:47:46.455639 sshd-session[3706]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:46.459941 systemd[1]: sshd@11-10.0.0.120:22-10.0.0.1:54370.service: Deactivated successfully. Mar 19 11:47:46.462510 systemd[1]: session-12.scope: Deactivated successfully. Mar 19 11:47:46.463645 systemd-logind[1495]: Session 12 logged out. Waiting for processes to exit. Mar 19 11:47:46.464939 systemd-logind[1495]: Removed session 12. Mar 19 11:47:51.468800 systemd[1]: Started sshd@12-10.0.0.120:22-10.0.0.1:54380.service - OpenSSH per-connection server daemon (10.0.0.1:54380). Mar 19 11:47:51.506621 sshd[3742]: Accepted publickey for core from 10.0.0.1 port 54380 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:51.508287 sshd-session[3742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:51.512801 systemd-logind[1495]: New session 13 of user core. Mar 19 11:47:51.521496 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 19 11:47:51.678852 sshd[3744]: Connection closed by 10.0.0.1 port 54380 Mar 19 11:47:51.679339 sshd-session[3742]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:51.689297 systemd[1]: sshd@12-10.0.0.120:22-10.0.0.1:54380.service: Deactivated successfully. Mar 19 11:47:51.691961 systemd[1]: session-13.scope: Deactivated successfully. Mar 19 11:47:51.694384 systemd-logind[1495]: Session 13 logged out. Waiting for processes to exit. Mar 19 11:47:51.703829 systemd[1]: Started sshd@13-10.0.0.120:22-10.0.0.1:54384.service - OpenSSH per-connection server daemon (10.0.0.1:54384). Mar 19 11:47:51.706971 systemd-logind[1495]: Removed session 13. Mar 19 11:47:51.742590 sshd[3756]: Accepted publickey for core from 10.0.0.1 port 54384 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:51.744446 sshd-session[3756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:51.749290 systemd-logind[1495]: New session 14 of user core. Mar 19 11:47:51.762599 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 19 11:47:52.228755 sshd[3759]: Connection closed by 10.0.0.1 port 54384 Mar 19 11:47:52.229648 sshd-session[3756]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:52.243564 systemd[1]: sshd@13-10.0.0.120:22-10.0.0.1:54384.service: Deactivated successfully. Mar 19 11:47:52.245690 systemd[1]: session-14.scope: Deactivated successfully. Mar 19 11:47:52.247391 systemd-logind[1495]: Session 14 logged out. Waiting for processes to exit. Mar 19 11:47:52.262797 systemd[1]: Started sshd@14-10.0.0.120:22-10.0.0.1:54396.service - OpenSSH per-connection server daemon (10.0.0.1:54396). Mar 19 11:47:52.263897 systemd-logind[1495]: Removed session 14. Mar 19 11:47:52.298935 sshd[3791]: Accepted publickey for core from 10.0.0.1 port 54396 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:52.300774 sshd-session[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:52.305467 systemd-logind[1495]: New session 15 of user core. Mar 19 11:47:52.312400 systemd[1]: Started session-15.scope - Session 15 of User core. 
Mar 19 11:47:53.345720 sshd[3794]: Connection closed by 10.0.0.1 port 54396 Mar 19 11:47:53.344885 sshd-session[3791]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:53.359125 systemd[1]: sshd@14-10.0.0.120:22-10.0.0.1:54396.service: Deactivated successfully. Mar 19 11:47:53.362983 systemd[1]: session-15.scope: Deactivated successfully. Mar 19 11:47:53.364822 systemd-logind[1495]: Session 15 logged out. Waiting for processes to exit. Mar 19 11:47:53.373866 systemd[1]: Started sshd@15-10.0.0.120:22-10.0.0.1:54410.service - OpenSSH per-connection server daemon (10.0.0.1:54410). Mar 19 11:47:53.375331 systemd-logind[1495]: Removed session 15. Mar 19 11:47:53.410119 sshd[3824]: Accepted publickey for core from 10.0.0.1 port 54410 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:53.412359 sshd-session[3824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:53.418378 systemd-logind[1495]: New session 16 of user core. Mar 19 11:47:53.433641 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 19 11:47:53.681105 sshd[3827]: Connection closed by 10.0.0.1 port 54410 Mar 19 11:47:53.681340 sshd-session[3824]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:53.691378 systemd[1]: sshd@15-10.0.0.120:22-10.0.0.1:54410.service: Deactivated successfully. Mar 19 11:47:53.694169 systemd[1]: session-16.scope: Deactivated successfully. Mar 19 11:47:53.696075 systemd-logind[1495]: Session 16 logged out. Waiting for processes to exit. Mar 19 11:47:53.708876 systemd[1]: Started sshd@16-10.0.0.120:22-10.0.0.1:54422.service - OpenSSH per-connection server daemon (10.0.0.1:54422). Mar 19 11:47:53.710579 systemd-logind[1495]: Removed session 16. Mar 19 11:47:53.743106 sshd[3837]: Accepted publickey for core from 10.0.0.1 port 54422 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:53.744736 sshd-session[3837]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:53.750229 systemd-logind[1495]: New session 17 of user core. Mar 19 11:47:53.760655 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 19 11:47:53.884201 sshd[3840]: Connection closed by 10.0.0.1 port 54422 Mar 19 11:47:53.884688 sshd-session[3837]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:53.889761 systemd[1]: sshd@16-10.0.0.120:22-10.0.0.1:54422.service: Deactivated successfully. Mar 19 11:47:53.892792 systemd[1]: session-17.scope: Deactivated successfully. Mar 19 11:47:53.893719 systemd-logind[1495]: Session 17 logged out. Waiting for processes to exit. Mar 19 11:47:53.894740 systemd-logind[1495]: Removed session 17. Mar 19 11:47:58.904660 systemd[1]: Started sshd@17-10.0.0.120:22-10.0.0.1:51940.service - OpenSSH per-connection server daemon (10.0.0.1:51940). Mar 19 11:47:58.939587 sshd[3874]: Accepted publickey for core from 10.0.0.1 port 51940 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:47:58.941375 sshd-session[3874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:47:58.946639 systemd-logind[1495]: New session 18 of user core. Mar 19 11:47:58.957471 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 19 11:47:59.083836 sshd[3876]: Connection closed by 10.0.0.1 port 51940 Mar 19 11:47:59.084340 sshd-session[3874]: pam_unix(sshd:session): session closed for user core Mar 19 11:47:59.089407 systemd[1]: sshd@17-10.0.0.120:22-10.0.0.1:51940.service: Deactivated successfully. Mar 19 11:47:59.091391 systemd[1]: session-18.scope: Deactivated successfully. Mar 19 11:47:59.092035 systemd-logind[1495]: Session 18 logged out. Waiting for processes to exit. Mar 19 11:47:59.093284 systemd-logind[1495]: Removed session 18. Mar 19 11:48:04.097633 systemd[1]: Started sshd@18-10.0.0.120:22-10.0.0.1:53372.service - OpenSSH per-connection server daemon (10.0.0.1:53372). Mar 19 11:48:04.134552 sshd[3914]: Accepted publickey for core from 10.0.0.1 port 53372 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:48:04.136324 sshd-session[3914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:04.140656 systemd-logind[1495]: New session 19 of user core. Mar 19 11:48:04.150395 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 19 11:48:04.261990 sshd[3916]: Connection closed by 10.0.0.1 port 53372 Mar 19 11:48:04.262493 sshd-session[3914]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:04.266908 systemd[1]: sshd@18-10.0.0.120:22-10.0.0.1:53372.service: Deactivated successfully. Mar 19 11:48:04.269307 systemd[1]: session-19.scope: Deactivated successfully. Mar 19 11:48:04.270150 systemd-logind[1495]: Session 19 logged out. Waiting for processes to exit. Mar 19 11:48:04.271333 systemd-logind[1495]: Removed session 19. Mar 19 11:48:09.274235 systemd[1]: Started sshd@19-10.0.0.120:22-10.0.0.1:53376.service - OpenSSH per-connection server daemon (10.0.0.1:53376). Mar 19 11:48:09.311732 sshd[3952]: Accepted publickey for core from 10.0.0.1 port 53376 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:48:09.313281 sshd-session[3952]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:09.317206 systemd-logind[1495]: New session 20 of user core. Mar 19 11:48:09.324460 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 19 11:48:09.435873 sshd[3954]: Connection closed by 10.0.0.1 port 53376 Mar 19 11:48:09.436366 sshd-session[3952]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:09.441439 systemd[1]: sshd@19-10.0.0.120:22-10.0.0.1:53376.service: Deactivated successfully. Mar 19 11:48:09.443814 systemd[1]: session-20.scope: Deactivated successfully. Mar 19 11:48:09.444668 systemd-logind[1495]: Session 20 logged out. Waiting for processes to exit. Mar 19 11:48:09.445777 systemd-logind[1495]: Removed session 20. Mar 19 11:48:14.450951 systemd[1]: Started sshd@20-10.0.0.120:22-10.0.0.1:39056.service - OpenSSH per-connection server daemon (10.0.0.1:39056). Mar 19 11:48:14.493062 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 39056 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:48:14.495006 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:14.500720 systemd-logind[1495]: New session 21 of user core. Mar 19 11:48:14.510529 systemd[1]: Started session-21.scope - Session 21 of User core. 
Mar 19 11:48:14.638543 sshd[3990]: Connection closed by 10.0.0.1 port 39056 Mar 19 11:48:14.638553 sshd-session[3988]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:14.643155 systemd[1]: sshd@20-10.0.0.120:22-10.0.0.1:39056.service: Deactivated successfully. Mar 19 11:48:14.645837 systemd[1]: session-21.scope: Deactivated successfully. Mar 19 11:48:14.647073 systemd-logind[1495]: Session 21 logged out. Waiting for processes to exit. Mar 19 11:48:14.648469 systemd-logind[1495]: Removed session 21. Mar 19 11:48:19.650578 systemd[1]: Started sshd@21-10.0.0.120:22-10.0.0.1:39062.service - OpenSSH per-connection server daemon (10.0.0.1:39062). Mar 19 11:48:19.687780 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 39062 ssh2: RSA SHA256:6/OODnHq2m2WHfivZ2gm3AjcQP8Dsv+GDPSeYlIBidA Mar 19 11:48:19.689118 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 19 11:48:19.693384 systemd-logind[1495]: New session 22 of user core. Mar 19 11:48:19.705391 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 19 11:48:19.811157 sshd[4026]: Connection closed by 10.0.0.1 port 39062 Mar 19 11:48:19.811549 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Mar 19 11:48:19.815162 systemd[1]: sshd@21-10.0.0.120:22-10.0.0.1:39062.service: Deactivated successfully. Mar 19 11:48:19.817369 systemd[1]: session-22.scope: Deactivated successfully. Mar 19 11:48:19.818055 systemd-logind[1495]: Session 22 logged out. Waiting for processes to exit. Mar 19 11:48:19.818944 systemd-logind[1495]: Removed session 22.