Feb 13 19:40:14.925130 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:40:14.925163 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:40:14.925175 kernel: BIOS-provided physical RAM map:
Feb 13 19:40:14.925182 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:40:14.925189 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:40:14.925195 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:40:14.925202 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:40:14.925209 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:40:14.925216 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:40:14.925224 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:40:14.925231 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:40:14.925237 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:40:14.925244 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:40:14.925250 kernel: NX (Execute Disable) protection: active
Feb 13 19:40:14.925258 kernel: APIC: Static calls initialized
Feb 13 19:40:14.925268 kernel: SMBIOS 2.8 present.
Feb 13 19:40:14.925275 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:40:14.925282 kernel: Hypervisor detected: KVM
Feb 13 19:40:14.925289 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:40:14.925296 kernel: kvm-clock: using sched offset of 2273285673 cycles
Feb 13 19:40:14.925304 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:40:14.925312 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:40:14.925334 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:40:14.925349 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:40:14.925364 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:40:14.925381 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:40:14.925389 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:40:14.925396 kernel: Using GB pages for direct mapping
Feb 13 19:40:14.925403 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:40:14.925410 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:40:14.925418 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925425 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925432 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925439 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:40:14.925453 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925460 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925467 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925474 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:40:14.925481 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:40:14.925489 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:40:14.925499 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:40:14.925509 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:40:14.925516 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:40:14.925524 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:40:14.925531 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:40:14.925539 kernel: No NUMA configuration found
Feb 13 19:40:14.925546 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:40:14.925553 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:40:14.925564 kernel: Zone ranges:
Feb 13 19:40:14.925571 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:40:14.925579 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:40:14.925586 kernel: Normal empty
Feb 13 19:40:14.925594 kernel: Movable zone start for each node
Feb 13 19:40:14.925601 kernel: Early memory node ranges
Feb 13 19:40:14.925610 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:40:14.925618 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:40:14.925626 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:40:14.925638 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:40:14.925645 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:40:14.925652 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:40:14.925660 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:40:14.925667 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:40:14.925675 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:40:14.925682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:40:14.925689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:40:14.925697 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:40:14.925704 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:40:14.925714 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:40:14.925722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:40:14.925729 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:40:14.925736 kernel: TSC deadline timer available
Feb 13 19:40:14.925744 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:40:14.925751 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:40:14.925758 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:40:14.925766 kernel: kvm-guest: setup PV sched yield
Feb 13 19:40:14.925773 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:40:14.925783 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:40:14.925790 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:40:14.925798 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:40:14.925806 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:40:14.925813 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:40:14.925820 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:40:14.925827 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:40:14.925835 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:40:14.925843 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:40:14.925854 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:40:14.925861 kernel: random: crng init done
Feb 13 19:40:14.925868 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:40:14.925876 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:40:14.925883 kernel: Fallback order for Node 0: 0
Feb 13 19:40:14.925891 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:40:14.925898 kernel: Policy zone: DMA32
Feb 13 19:40:14.925905 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:40:14.925916 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved)
Feb 13 19:40:14.925923 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:40:14.925931 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:40:14.925938 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:40:14.925945 kernel: Dynamic Preempt: voluntary
Feb 13 19:40:14.925953 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:40:14.925961 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:40:14.925969 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:40:14.925976 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:40:14.925986 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:40:14.925993 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:40:14.926001 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:40:14.926008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:40:14.926016 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:40:14.926023 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:40:14.926031 kernel: Console: colour VGA+ 80x25
Feb 13 19:40:14.926038 kernel: printk: console [ttyS0] enabled
Feb 13 19:40:14.926045 kernel: ACPI: Core revision 20230628
Feb 13 19:40:14.926055 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:40:14.926063 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:40:14.926070 kernel: x2apic enabled
Feb 13 19:40:14.926077 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:40:14.926085 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:40:14.926092 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:40:14.926100 kernel: kvm-guest: setup PV IPIs
Feb 13 19:40:14.926124 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:40:14.926132 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:40:14.926140 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:40:14.926148 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:40:14.926200 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:40:14.926211 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:40:14.926219 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:40:14.926226 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:40:14.926234 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:40:14.926250 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:40:14.926275 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:40:14.926283 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:40:14.926291 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:40:14.926299 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:40:14.926307 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:40:14.926315 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:40:14.926323 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:40:14.926331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:40:14.926345 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:40:14.926353 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:40:14.926360 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:40:14.926368 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:40:14.926376 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:40:14.926384 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:40:14.926391 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:40:14.926399 kernel: landlock: Up and running.
Feb 13 19:40:14.926407 kernel: SELinux: Initializing.
Feb 13 19:40:14.926417 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:40:14.926424 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:40:14.926432 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:40:14.926440 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:40:14.926448 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:40:14.926456 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:40:14.926464 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:40:14.926471 kernel: ... version: 0
Feb 13 19:40:14.926479 kernel: ... bit width: 48
Feb 13 19:40:14.926489 kernel: ... generic registers: 6
Feb 13 19:40:14.926497 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:40:14.926505 kernel: ... max period: 00007fffffffffff
Feb 13 19:40:14.926512 kernel: ... fixed-purpose events: 0
Feb 13 19:40:14.926520 kernel: ... event mask: 000000000000003f
Feb 13 19:40:14.926528 kernel: signal: max sigframe size: 1776
Feb 13 19:40:14.926535 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:40:14.926543 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:40:14.926551 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:40:14.926561 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:40:14.926568 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:40:14.926576 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:40:14.926584 kernel: smpboot: Max logical packages: 1
Feb 13 19:40:14.926591 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:40:14.926599 kernel: devtmpfs: initialized
Feb 13 19:40:14.926606 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:40:14.926614 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:40:14.926622 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:40:14.926632 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:40:14.926640 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:40:14.926647 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:40:14.926655 kernel: audit: type=2000 audit(1739475613.695:1): state=initialized audit_enabled=0 res=1
Feb 13 19:40:14.926663 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:40:14.926671 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:40:14.926678 kernel: cpuidle: using governor menu
Feb 13 19:40:14.926686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:40:14.926694 kernel: dca service started, version 1.12.1
Feb 13 19:40:14.926704 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:40:14.926712 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:40:14.926720 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:40:14.926727 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:40:14.926735 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:40:14.926743 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:40:14.926750 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:40:14.926758 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:40:14.926766 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:40:14.926776 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:40:14.926783 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:40:14.926791 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:40:14.926799 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:40:14.926806 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:40:14.926814 kernel: ACPI: Interpreter enabled
Feb 13 19:40:14.926821 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:40:14.926829 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:40:14.926837 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:40:14.926847 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:40:14.926855 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:40:14.926862 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:40:14.927051 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:40:14.927212 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:40:14.927335 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:40:14.927346 kernel: PCI host bridge to bus 0000:00
Feb 13 19:40:14.927475 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:40:14.927586 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:40:14.927697 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:40:14.927807 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:40:14.927917 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:40:14.928027 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:40:14.928148 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:40:14.928312 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:40:14.928443 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:40:14.928566 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:40:14.928687 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:40:14.928805 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:40:14.928925 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:40:14.929059 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:40:14.929229 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:40:14.929355 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:40:14.929477 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:40:14.929608 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:40:14.929736 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:40:14.929857 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:40:14.929982 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:40:14.930127 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:40:14.930272 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:40:14.930394 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:40:14.930515 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:40:14.930636 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:40:14.930768 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:40:14.930896 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:40:14.931025 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:40:14.931167 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:40:14.931291 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:40:14.931418 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:40:14.931538 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:40:14.931548 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:40:14.931561 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:40:14.931569 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:40:14.931577 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:40:14.931585 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:40:14.931593 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:40:14.931601 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:40:14.931608 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:40:14.931616 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:40:14.931624 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:40:14.931635 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:40:14.931643 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:40:14.931651 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:40:14.931658 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:40:14.931666 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:40:14.931674 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:40:14.931682 kernel: iommu: Default domain type: Translated
Feb 13 19:40:14.931690 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:40:14.931698 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:40:14.931708 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:40:14.931716 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:40:14.931724 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:40:14.931844 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:40:14.931963 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:40:14.932081 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:40:14.932092 kernel: vgaarb: loaded
Feb 13 19:40:14.932101 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:40:14.932120 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:40:14.932128 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:40:14.932136 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:40:14.932145 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:40:14.932191 kernel: pnp: PnP ACPI init
Feb 13 19:40:14.932323 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:40:14.932335 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:40:14.932344 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:40:14.932356 kernel: NET: Registered PF_INET protocol family
Feb 13 19:40:14.932365 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:40:14.932373 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:40:14.932381 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:40:14.932388 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:40:14.932396 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:40:14.932404 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:40:14.932412 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:40:14.932420 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:40:14.932431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:40:14.932439 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:40:14.932550 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:40:14.932659 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:40:14.932767 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:40:14.932875 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:40:14.932983 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:40:14.933092 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:40:14.933115 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:40:14.933124 kernel: Initialise system trusted keyrings
Feb 13 19:40:14.933132 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:40:14.933141 kernel: Key type asymmetric registered
Feb 13 19:40:14.933148 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:40:14.933168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:40:14.933179 kernel: io scheduler mq-deadline registered
Feb 13 19:40:14.933190 kernel: io scheduler kyber registered
Feb 13 19:40:14.933199 kernel: io scheduler bfq registered
Feb 13 19:40:14.933210 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:40:14.933219 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:40:14.933227 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:40:14.933235 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:40:14.933243 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:40:14.933251 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:40:14.933259 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:40:14.933267 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:40:14.933274 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:40:14.933282 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:40:14.933414 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:40:14.933528 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:40:14.933642 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:40:14 UTC (1739475614)
Feb 13 19:40:14.933754 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:40:14.933764 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:40:14.933772 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:40:14.933780 kernel: Segment Routing with IPv6
Feb 13 19:40:14.933791 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:40:14.933799 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:40:14.933807 kernel: Key type dns_resolver registered
Feb 13 19:40:14.933815 kernel: IPI shorthand broadcast: enabled
Feb 13 19:40:14.933823 kernel: sched_clock: Marking stable (626002615, 133163592)->(852297933, -93131726)
Feb 13 19:40:14.933831 kernel: registered taskstats version 1
Feb 13 19:40:14.933839 kernel: Loading compiled-in X.509 certificates
Feb 13 19:40:14.933847 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:40:14.933855 kernel: Key type .fscrypt registered
Feb 13 19:40:14.933866 kernel: Key type fscrypt-provisioning registered
Feb 13 19:40:14.933874 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:40:14.933882 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:40:14.933890 kernel: ima: No architecture policies found
Feb 13 19:40:14.933898 kernel: clk: Disabling unused clocks
Feb 13 19:40:14.933906 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:40:14.933913 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:40:14.933921 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:40:14.933929 kernel: Run /init as init process
Feb 13 19:40:14.933939 kernel: with arguments:
Feb 13 19:40:14.933947 kernel: /init
Feb 13 19:40:14.933955 kernel: with environment:
Feb 13 19:40:14.933963 kernel: HOME=/
Feb 13 19:40:14.933970 kernel: TERM=linux
Feb 13 19:40:14.933978 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:40:14.933988 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:40:14.933998 systemd[1]: Detected virtualization kvm.
Feb 13 19:40:14.934010 systemd[1]: Detected architecture x86-64.
Feb 13 19:40:14.934018 systemd[1]: Running in initrd.
Feb 13 19:40:14.934027 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:40:14.934035 systemd[1]: Hostname set to .
Feb 13 19:40:14.934043 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:40:14.934052 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:40:14.934061 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:40:14.934070 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:40:14.934082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:40:14.934103 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:40:14.934122 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:40:14.934131 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:40:14.934142 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:40:14.934164 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:40:14.934173 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:40:14.934182 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:40:14.934190 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:40:14.934199 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:40:14.934208 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:40:14.934216 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:40:14.934225 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:40:14.934236 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:40:14.934245 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:40:14.934254 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:40:14.934262 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:40:14.934271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:40:14.934280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:40:14.934289 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:40:14.934297 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:40:14.934306 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:40:14.934317 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:40:14.934326 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:40:14.934334 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:40:14.934343 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:40:14.934352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:14.934360 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:40:14.934371 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:40:14.934380 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:40:14.934392 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:40:14.934422 systemd-journald[194]: Collecting audit messages is disabled.
Feb 13 19:40:14.934446 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:40:14.934455 systemd-journald[194]: Journal started
Feb 13 19:40:14.934480 systemd-journald[194]: Runtime Journal (/run/log/journal/a1f61748a0cd449b940200adbc54f305) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:40:14.924190 systemd-modules-load[195]: Inserted module 'overlay'
Feb 13 19:40:14.962080 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:40:14.962116 kernel: Bridge firewalling registered
Feb 13 19:40:14.950973 systemd-modules-load[195]: Inserted module 'br_netfilter'
Feb 13 19:40:14.966353 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:40:14.966774 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:40:14.969062 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:14.988452 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:40:14.991721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:40:14.994317 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:40:14.999324 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:40:15.018740 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:15.021668 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:40:15.024305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:40:15.026448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:40:15.030153 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:40:15.034130 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:40:15.041002 dracut-cmdline[227]: dracut-dracut-053
Feb 13 19:40:15.044482 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:40:15.067495 systemd-resolved[233]: Positive Trust Anchors:
Feb 13 19:40:15.067509 systemd-resolved[233]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:40:15.067538 systemd-resolved[233]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:40:15.070213 systemd-resolved[233]: Defaulting to hostname 'linux'.
Feb 13 19:40:15.076074 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:40:15.078853 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:40:15.130195 kernel: SCSI subsystem initialized
Feb 13 19:40:15.139179 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:40:15.150186 kernel: iscsi: registered transport (tcp)
Feb 13 19:40:15.175180 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:40:15.175201 kernel: QLogic iSCSI HBA Driver
Feb 13 19:40:15.224849 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:40:15.251385 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:40:15.274175 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:40:15.274207 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:40:15.275711 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:40:15.336195 kernel: raid6: avx2x4 gen() 30643 MB/s
Feb 13 19:40:15.353189 kernel: raid6: avx2x2 gen() 24082 MB/s
Feb 13 19:40:15.370420 kernel: raid6: avx2x1 gen() 22851 MB/s
Feb 13 19:40:15.370442 kernel: raid6: using algorithm avx2x4 gen() 30643 MB/s
Feb 13 19:40:15.388289 kernel: raid6: .... xor() 6762 MB/s, rmw enabled
Feb 13 19:40:15.388348 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:40:15.409198 kernel: xor: automatically using best checksumming function avx
Feb 13 19:40:15.556211 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:40:15.570372 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:40:15.580333 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:40:15.591718 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Feb 13 19:40:15.596064 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:40:15.604427 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:40:15.618675 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Feb 13 19:40:15.657522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:40:15.677533 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:40:15.741912 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:15.751333 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:40:15.761981 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:40:15.765640 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:40:15.768228 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:15.770791 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:40:15.779375 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:40:15.785178 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Feb 13 19:40:15.812504 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:40:15.812523 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Feb 13 19:40:15.812667 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:40:15.812687 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:40:15.812698 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 19:40:15.812710 kernel: GPT:9289727 != 19775487
Feb 13 19:40:15.812731 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 19:40:15.812743 kernel: GPT:9289727 != 19775487
Feb 13 19:40:15.812753 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 19:40:15.812763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:15.802659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:40:15.807600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:40:15.807701 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:15.810440 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:40:15.811655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:40:15.811713 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:15.814146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:15.826299 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:15.831979 kernel: libata version 3.00 loaded.
Feb 13 19:40:15.838180 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (461)
Feb 13 19:40:15.841931 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 19:40:15.854418 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:40:15.891391 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 19:40:15.891727 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 19:40:15.891745 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 19:40:15.891915 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 19:40:15.892083 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (463)
Feb 13 19:40:15.892111 kernel: scsi host0: ahci
Feb 13 19:40:15.892352 kernel: scsi host1: ahci
Feb 13 19:40:15.892529 kernel: scsi host2: ahci
Feb 13 19:40:15.892701 kernel: scsi host3: ahci
Feb 13 19:40:15.892874 kernel: scsi host4: ahci
Feb 13 19:40:15.893048 kernel: scsi host5: ahci
Feb 13 19:40:15.893275 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Feb 13 19:40:15.893297 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Feb 13 19:40:15.893310 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Feb 13 19:40:15.893324 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Feb 13 19:40:15.893338 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Feb 13 19:40:15.893351 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Feb 13 19:40:15.890713 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:15.897641 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 19:40:15.909437 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 19:40:15.909718 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 19:40:15.919370 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:40:15.920680 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:40:15.934380 disk-uuid[569]: Primary Header is updated.
Feb 13 19:40:15.934380 disk-uuid[569]: Secondary Entries is updated.
Feb 13 19:40:15.934380 disk-uuid[569]: Secondary Header is updated.
Feb 13 19:40:15.939124 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:15.942619 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:15.946351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:16.172312 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 19:40:16.172398 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 19:40:16.172409 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 19:40:16.173195 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Feb 13 19:40:16.174183 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 19:40:16.175194 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 19:40:16.176186 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Feb 13 19:40:16.177482 kernel: ata3.00: applying bridge limits
Feb 13 19:40:16.177502 kernel: ata3.00: configured for UDMA/100
Feb 13 19:40:16.178196 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Feb 13 19:40:16.227192 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Feb 13 19:40:16.240953 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:40:16.240973 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:40:16.950970 disk-uuid[577]: The operation has completed successfully.
Feb 13 19:40:16.952525 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 19:40:16.980467 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:40:16.980631 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:40:17.011327 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:40:17.014448 sh[593]: Success
Feb 13 19:40:17.026189 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Feb 13 19:40:17.059223 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:40:17.073499 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:40:17.075726 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:40:17.087279 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9
Feb 13 19:40:17.087309 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:40:17.087321 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:40:17.088283 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:40:17.089612 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:40:17.093517 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:40:17.095013 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 19:40:17.104285 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:40:17.105852 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:40:17.114189 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:40:17.114218 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:40:17.114229 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:17.117181 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:17.124916 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:40:17.127176 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:40:17.137374 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:40:17.145320 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:40:17.196802 ignition[692]: Ignition 2.20.0
Feb 13 19:40:17.196813 ignition[692]: Stage: fetch-offline
Feb 13 19:40:17.196865 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:17.196874 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:17.196961 ignition[692]: parsed url from cmdline: ""
Feb 13 19:40:17.196965 ignition[692]: no config URL provided
Feb 13 19:40:17.196970 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:40:17.196978 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:40:17.197005 ignition[692]: op(1): [started] loading QEMU firmware config module
Feb 13 19:40:17.197010 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Feb 13 19:40:17.205831 ignition[692]: op(1): [finished] loading QEMU firmware config module
Feb 13 19:40:17.210665 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:40:17.222137 ignition[692]: parsing config with SHA512: 6726bc2ff4bc0132d2757e334e1157a6a4ef3fe4e334599659352c841634848469b6ec1b6cf8fef2d477418394321fff1ae46283d928b2a9f60fe342a3974ed4
Feb 13 19:40:17.223332 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:40:17.225690 unknown[692]: fetched base config from "system"
Feb 13 19:40:17.225698 unknown[692]: fetched user config from "qemu"
Feb 13 19:40:17.226100 ignition[692]: fetch-offline: fetch-offline passed
Feb 13 19:40:17.228855 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:40:17.226178 ignition[692]: Ignition finished successfully
Feb 13 19:40:17.244940 systemd-networkd[782]: lo: Link UP
Feb 13 19:40:17.244950 systemd-networkd[782]: lo: Gained carrier
Feb 13 19:40:17.246594 systemd-networkd[782]: Enumeration completed
Feb 13 19:40:17.246701 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:40:17.247073 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:17.247078 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:40:17.248141 systemd-networkd[782]: eth0: Link UP
Feb 13 19:40:17.248145 systemd-networkd[782]: eth0: Gained carrier
Feb 13 19:40:17.248152 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:17.249126 systemd[1]: Reached target network.target - Network.
Feb 13 19:40:17.251226 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:40:17.260291 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:40:17.267234 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:40:17.273983 ignition[785]: Ignition 2.20.0
Feb 13 19:40:17.274000 ignition[785]: Stage: kargs
Feb 13 19:40:17.274232 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:17.274245 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:17.275257 ignition[785]: kargs: kargs passed
Feb 13 19:40:17.275305 ignition[785]: Ignition finished successfully
Feb 13 19:40:17.278885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:40:17.294301 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:40:17.306597 ignition[794]: Ignition 2.20.0
Feb 13 19:40:17.306609 ignition[794]: Stage: disks
Feb 13 19:40:17.306811 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:17.306825 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:17.310342 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:40:17.307899 ignition[794]: disks: disks passed
Feb 13 19:40:17.312148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:40:17.307956 ignition[794]: Ignition finished successfully
Feb 13 19:40:17.314250 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:40:17.315620 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:40:17.317323 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:40:17.317748 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:40:17.329317 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:40:17.341765 systemd-resolved[233]: Detected conflict on linux IN A 10.0.0.96
Feb 13 19:40:17.341781 systemd-resolved[233]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Feb 13 19:40:17.346767 systemd-fsck[804]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 19:40:17.354224 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:40:17.365284 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:40:17.447190 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none.
Feb 13 19:40:17.447856 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:40:17.449511 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:40:17.462245 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:40:17.464210 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:40:17.465643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:40:17.471210 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (812)
Feb 13 19:40:17.471239 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:40:17.465702 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:40:17.478010 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:40:17.478033 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:17.478056 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:17.465732 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:40:17.472883 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:40:17.479454 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:40:17.482583 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:40:17.516262 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:40:17.521265 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:40:17.524779 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:40:17.529169 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:40:17.605873 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:40:17.618255 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:40:17.619934 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:40:17.626170 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:40:17.646411 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:40:17.648345 ignition[925]: INFO : Ignition 2.20.0
Feb 13 19:40:17.648345 ignition[925]: INFO : Stage: mount
Feb 13 19:40:17.649932 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:17.649932 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:17.649932 ignition[925]: INFO : mount: mount passed
Feb 13 19:40:17.649932 ignition[925]: INFO : Ignition finished successfully
Feb 13 19:40:17.655503 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:40:17.662281 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:40:18.086888 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:40:18.104419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:40:18.112878 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (938)
Feb 13 19:40:18.112938 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f
Feb 13 19:40:18.112954 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:40:18.114455 kernel: BTRFS info (device vda6): using free space tree
Feb 13 19:40:18.117184 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 19:40:18.118930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:40:18.144099 ignition[955]: INFO : Ignition 2.20.0
Feb 13 19:40:18.144099 ignition[955]: INFO : Stage: files
Feb 13 19:40:18.146308 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:18.146308 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:18.146308 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:40:18.146308 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:40:18.146308 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:40:18.153093 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:40:18.153093 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:40:18.153093 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:40:18.153093 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:40:18.153093 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Feb 13 19:40:18.148782 unknown[955]: wrote ssh authorized keys file for user: core
Feb 13 19:40:18.191415 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 13 19:40:18.345696 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Feb 13 19:40:18.345696 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:40:18.349756 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 19:40:18.445284 systemd-networkd[782]: eth0: Gained IPv6LL
Feb 13 19:40:18.881671 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 13 19:40:19.549577 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 19:40:19.549577 ignition[955]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Feb 13 19:40:19.554187 ignition[955]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:40:19.578429 ignition[955]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:40:19.585765 ignition[955]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:40:19.587632 ignition[955]: INFO : files: files passed
Feb 13 19:40:19.587632 ignition[955]: INFO : Ignition finished successfully
Feb 13 19:40:19.600734 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:40:19.610443 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:40:19.612843 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:40:19.617249 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:40:19.617385 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:40:19.623299 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Feb 13 19:40:19.628094 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:19.628094 initrd-setup-root-after-ignition[985]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:19.631401 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:40:19.635806 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:40:19.638785 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:40:19.648420 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:40:19.677282 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:40:19.677467 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:40:19.678627 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:40:19.681101 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:40:19.681620 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:40:19.682628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:40:19.702971 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:40:19.705048 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:40:19.719773 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:40:19.720587 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:19.721077 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:40:19.721581 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:40:19.721715 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:40:19.728043 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:40:19.728706 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:40:19.729033 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:40:19.729529 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:40:19.729860 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:40:19.730209 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:40:19.730695 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:40:19.731042 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:40:19.731563 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:40:19.731911 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:40:19.732466 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:40:19.732607 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:40:19.750180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:40:19.750723 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:40:19.751078 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:40:19.755633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:40:19.756195 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:40:19.756335 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:40:19.760093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:40:19.760225 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:40:19.760834 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:40:19.761078 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:40:19.768418 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:40:19.771384 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:40:19.773496 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:40:19.775708 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:40:19.776779 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:40:19.779062 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:40:19.779988 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:40:19.782117 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:40:19.783340 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:40:19.785905 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:40:19.786913 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:40:19.800493 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:40:19.803507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:40:19.805468 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:40:19.806775 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:19.809272 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:40:19.810258 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:40:19.813708 ignition[1009]: INFO : Ignition 2.20.0
Feb 13 19:40:19.813708 ignition[1009]: INFO : Stage: umount
Feb 13 19:40:19.815386 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:40:19.815386 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:40:19.815386 ignition[1009]: INFO : umount: umount passed
Feb 13 19:40:19.815386 ignition[1009]: INFO : Ignition finished successfully
Feb 13 19:40:19.814545 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:40:19.814664 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:40:19.817304 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:40:19.817461 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:40:19.821668 systemd[1]: Stopped target network.target - Network.
Feb 13 19:40:19.823086 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:40:19.823150 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:40:19.825439 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:40:19.825489 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:40:19.825770 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:40:19.825814 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:40:19.826187 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:40:19.826231 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:40:19.826763 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:40:19.827053 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:40:19.832688 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:40:19.837329 systemd-networkd[782]: eth0: DHCPv6 lease lost
Feb 13 19:40:19.837539 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:40:19.837703 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:40:19.840983 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:40:19.841196 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:40:19.843539 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:40:19.843608 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:40:19.852311 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:40:19.853828 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:40:19.853909 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:40:19.856297 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:40:19.856349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:40:19.858237 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:40:19.858286 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:40:19.860444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:40:19.860493 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:40:19.861876 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:40:19.874503 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:40:19.874639 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:40:19.879955 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:40:19.880179 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:40:19.882301 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:40:19.882351 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:40:19.884018 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:40:19.884058 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:40:19.886270 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:40:19.886321 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:40:19.888530 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:40:19.888585 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:40:19.890529 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:40:19.890581 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:40:19.904373 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:40:19.904907 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:40:19.904987 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:40:19.905459 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:40:19.905509 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:40:19.905756 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:40:19.905803 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:40:19.906092 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:40:19.906139 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:19.913067 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:40:19.913231 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:40:20.200384 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:40:20.200533 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:40:20.203511 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:40:20.209615 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:40:20.209684 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:40:20.227282 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:40:20.239923 systemd[1]: Switching root.
Feb 13 19:40:20.284966 systemd-journald[194]: Journal stopped
Feb 13 19:40:21.523653 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:40:21.523733 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:40:21.523753 kernel: SELinux: policy capability open_perms=1
Feb 13 19:40:21.523765 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:40:21.523782 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:40:21.523793 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:40:21.523805 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:40:21.523820 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:40:21.523831 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:40:21.523842 kernel: audit: type=1403 audit(1739475620.706:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:40:21.523855 systemd[1]: Successfully loaded SELinux policy in 48.836ms.
Feb 13 19:40:21.523876 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.955ms.
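The audit record and the journal's wallclock prefix can be cross-checked: type=1403 is the MAC policy load event, its stamp uses the kernel's epoch clock, and 1739475620 is exactly Feb 13 2025 19:40:20 UTC, consistent with the policy landing right after the root switch. A one-liner to confirm the conversion (GNU date; the epoch value is taken from the audit line above):

  # 1739475620.706 is the audit(...) timestamp in the type=1403 record.
  date -ud @1739475620 +'%b %d %T'   # prints: Feb 13 19:40:20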
Feb 13 19:40:21.523889 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:40:21.523902 systemd[1]: Detected virtualization kvm.
Feb 13 19:40:21.523914 systemd[1]: Detected architecture x86-64.
Feb 13 19:40:21.523926 systemd[1]: Detected first boot.
Feb 13 19:40:21.523941 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:40:21.523953 zram_generator::config[1054]: No configuration found.
Feb 13 19:40:21.523974 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:40:21.523987 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:40:21.524000 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:40:21.524013 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:40:21.524025 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:40:21.524039 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:40:21.524056 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:40:21.524070 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:40:21.524083 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:40:21.524096 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:40:21.524109 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:40:21.524121 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:40:21.524133 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:40:21.524146 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:40:21.524183 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:40:21.524201 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:40:21.524214 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:40:21.524230 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:40:21.524243 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:40:21.524261 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:40:21.524273 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:40:21.524285 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:40:21.524297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:40:21.524312 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:40:21.524324 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:40:21.524336 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:40:21.524349 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:40:21.524361 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:40:21.524373 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:40:21.524388 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:40:21.524400 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:40:21.524412 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:40:21.524427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:40:21.524439 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:40:21.524452 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:40:21.524464 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:40:21.524476 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:40:21.524489 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:21.524501 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:40:21.524514 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:40:21.524526 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:40:21.524541 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:40:21.524554 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:40:21.524566 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:40:21.524578 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:21.524590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:40:21.524603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:40:21.524615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:21.524627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:40:21.524641 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:21.524655 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:40:21.524671 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:21.524683 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:40:21.524696 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:40:21.524708 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:40:21.524720 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:40:21.524733 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:40:21.524747 kernel: loop: module loaded
Feb 13 19:40:21.524759 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:40:21.524771 kernel: fuse: init (API version 7.39)
Feb 13 19:40:21.524782 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:40:21.524795 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:40:21.524807 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:40:21.524819 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:40:21.524831 kernel: ACPI: bus type drm_connector registered
Feb 13 19:40:21.524843 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:40:21.524855 systemd[1]: Stopped verity-setup.service.
Feb 13 19:40:21.524869 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:21.524899 systemd-journald[1124]: Collecting audit messages is disabled.
Feb 13 19:40:21.524925 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:40:21.524938 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:40:21.524952 systemd-journald[1124]: Journal started
Feb 13 19:40:21.524984 systemd-journald[1124]: Runtime Journal (/run/log/journal/a1f61748a0cd449b940200adbc54f305) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:40:21.278249 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:40:21.295051 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:40:21.295523 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:40:21.528714 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:40:21.529716 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:40:21.531028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:40:21.532421 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:40:21.533765 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:40:21.535084 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:40:21.536586 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:40:21.538222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:40:21.538398 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:40:21.539954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:21.540148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:21.541831 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:40:21.542026 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:40:21.543475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:21.543668 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:21.545389 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:40:21.545559 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:40:21.547058 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:21.547276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:21.548945 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:40:21.550555 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:40:21.552252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:40:21.570278 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:40:21.592740 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:40:21.595775 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:40:21.596986 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:40:21.597030 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:40:21.599087 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:40:21.601479 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:40:21.607262 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:40:21.608645 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:21.610538 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:40:21.613116 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:40:21.614455 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:40:21.617601 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:40:21.618770 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:40:21.620234 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:40:21.626224 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:40:21.629938 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:40:21.638092 systemd-journald[1124]: Time spent on flushing to /var/log/journal/a1f61748a0cd449b940200adbc54f305 is 13.395ms for 954 entries.
Feb 13 19:40:21.638092 systemd-journald[1124]: System Journal (/var/log/journal/a1f61748a0cd449b940200adbc54f305) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:40:22.073942 systemd-journald[1124]: Received client request to flush runtime journal.
Feb 13 19:40:22.074126 kernel: loop0: detected capacity change from 0 to 218376
Feb 13 19:40:22.074185 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:40:22.074209 kernel: loop1: detected capacity change from 0 to 138184
Feb 13 19:40:22.074228 kernel: loop2: detected capacity change from 0 to 141000
Feb 13 19:40:22.074246 kernel: loop3: detected capacity change from 0 to 218376
Feb 13 19:40:22.074262 kernel: loop4: detected capacity change from 0 to 138184
Feb 13 19:40:21.633240 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:40:21.634615 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:40:21.636168 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:40:21.677698 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:40:21.681015 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:40:21.696184 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 19:40:21.704839 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:40:21.717949 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Feb 13 19:40:21.717973 systemd-tmpfiles[1169]: ACLs are not supported, ignoring.
Feb 13 19:40:21.724024 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:40:21.799800 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:40:21.830599 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:40:21.839364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:40:21.860526 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:40:21.862636 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:40:21.865380 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:40:21.914907 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 19:40:21.914921 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Feb 13 19:40:21.920032 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:40:22.075859 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:40:22.083182 kernel: loop5: detected capacity change from 0 to 141000
Feb 13 19:40:22.096005 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:40:22.097401 (sd-merge)[1191]: Merged extensions into '/usr'.
Feb 13 19:40:22.102629 systemd[1]: Reloading requested from client PID 1168 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:40:22.102647 systemd[1]: Reloading...
Feb 13 19:40:22.184848 zram_generator::config[1222]: No configuration found.
Feb 13 19:40:22.302721 ldconfig[1163]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:40:22.330421 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:40:22.379827 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:40:22.380438 systemd[1]: Reloading finished in 277 ms.
Feb 13 19:40:22.416534 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:40:22.418369 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:40:22.420333 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
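The (sd-merge) messages are systemd-sysext overlaying three extension images onto /usr: containerd-flatcar and docker-flatcar appear to be the OS-shipped images, while kubernetes is the one Ignition downloaded and linked under /etc/extensions earlier in this boot. The merged state can be inspected with the standard systemd-sysext verbs (a sketch; these are not commands taken from this log):

  # List the extension images systemd-sysext can see,
  # e.g. the kubernetes.raw symlink in /etc/extensions.
  systemd-sysext list

  # Show whether /usr and /opt currently carry merged overlays.
  systemd-sysext status

  # Re-evaluate after adding or removing an image (unmerge + merge).
  systemd-sysext refresh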
Feb 13 19:40:22.438461 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:40:22.440891 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:40:22.449764 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:40:22.449781 systemd[1]: Reloading...
Feb 13 19:40:22.471058 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:40:22.471364 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:40:22.472361 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:40:22.472649 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Feb 13 19:40:22.472727 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Feb 13 19:40:22.476928 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:40:22.476950 systemd-tmpfiles[1262]: Skipping /boot
Feb 13 19:40:22.494326 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:40:22.494408 systemd-tmpfiles[1262]: Skipping /boot
Feb 13 19:40:22.546195 zram_generator::config[1298]: No configuration found.
Feb 13 19:40:22.644251 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:40:22.693049 systemd[1]: Reloading finished in 242 ms.
Feb 13 19:40:22.719146 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:40:22.732622 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:40:22.742027 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:40:22.744694 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:40:22.747454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:40:22.751323 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:40:22.754332 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:40:22.758313 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:40:22.764809 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:40:22.767662 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.767829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:22.769003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:22.776440 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:22.780603 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:22.782287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:22.782403 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.783298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:22.783513 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:22.789272 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:22.789485 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:22.791791 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:22.792003 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:22.799868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.800396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:22.808502 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:22.810834 systemd-udevd[1333]: Using default interface naming scheme 'v255'.
Feb 13 19:40:22.811380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:22.814393 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:22.816304 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:22.816418 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.817527 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:40:22.819669 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:40:22.821500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:22.822083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:22.823831 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:40:22.825620 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:22.825787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:22.833239 augenrules[1368]: No rules
Feb 13 19:40:22.835256 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:40:22.835528 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:40:22.837214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:22.837422 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:22.843797 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.852332 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:40:22.853581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:40:22.856272 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:40:22.859378 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:40:22.873396 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:40:22.877472 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:40:22.878650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:40:22.893225 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:40:22.895250 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:40:22.895598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:40:22.897706 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:40:22.900731 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:40:22.903686 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:40:22.903886 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:40:22.905905 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:40:22.906111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:40:22.907923 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:40:22.908141 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:40:22.922003 augenrules[1379]: /sbin/augenrules: No change
Feb 13 19:40:22.924171 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390)
Feb 13 19:40:22.925603 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:40:22.925833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:40:22.932650 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:40:22.941767 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:40:22.946375 augenrules[1429]: No rules
Feb 13 19:40:23.002756 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 19:40:23.004587 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:40:23.005740 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:40:23.005828 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:40:23.008192 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:40:23.008650 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:40:23.009890 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:40:23.010321 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:40:23.010577 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:40:23.028389 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:40:23.037448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:40:23.041182 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Feb 13 19:40:23.042623 systemd-resolved[1331]: Positive Trust Anchors:
Feb 13 19:40:23.042642 systemd-resolved[1331]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:40:23.042674 systemd-resolved[1331]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:40:23.048033 systemd-resolved[1331]: Defaulting to hostname 'linux'.
Feb 13 19:40:23.051838 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:40:23.053222 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:40:23.058383 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:40:23.058673 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:40:23.058845 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:40:23.071735 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:40:23.074217 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:40:23.155549 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:40:23.170547 systemd-networkd[1436]: lo: Link UP
Feb 13 19:40:23.170559 systemd-networkd[1436]: lo: Gained carrier
Feb 13 19:40:23.172251 systemd-networkd[1436]: Enumeration completed
Feb 13 19:40:23.172337 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:40:23.173066 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:23.173077 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:40:23.173681 systemd[1]: Reached target network.target - Network.
Feb 13 19:40:23.174165 systemd-networkd[1436]: eth0: Link UP
Feb 13 19:40:23.174176 systemd-networkd[1436]: eth0: Gained carrier
Feb 13 19:40:23.174189 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:40:23.176316 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:40:23.207504 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:40:23.207846 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:40:23.209429 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:40:23.210729 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
Feb 13 19:40:23.212439 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:40:23.212491 systemd-timesyncd[1440]: Initial clock synchronization to Thu 2025-02-13 19:40:23.122642 UTC.
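In the real root, eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network and re-acquired the same 10.0.0.96/16 lease the initrd had held, after which timesyncd immediately synchronized against the gateway's NTP service. A quick way to verify the match and lease state on a running host (standard networkd tooling; this output is not part of the log):

  # Which .network file claimed the interface, plus the address,
  # gateway, DNS and DHCP lease details as networkd sees them.
  networkctl status eth0

  # Inspect the catch-all policy itself; on Flatcar it amounts to
  # "run DHCP on any otherwise-unconfigured interface" (assumption
  # about the file's content, which the log does not show).
  cat /usr/lib/systemd/network/zz-default.network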
Feb 13 19:40:23.219545 kernel: kvm_amd: TSC scaling supported
Feb 13 19:40:23.219594 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:40:23.219608 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:40:23.219620 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:40:23.220782 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:40:23.220806 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:40:23.241292 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:40:23.282837 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:40:23.285783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:40:23.298345 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:40:23.306960 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:40:23.337972 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:40:23.339568 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:40:23.340717 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:40:23.341896 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:40:23.343326 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:40:23.344821 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:40:23.346068 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:40:23.347531 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:40:23.348958 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:40:23.349001 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:40:23.350168 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:40:23.352452 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:40:23.355691 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:40:23.365362 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:40:23.368661 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:40:23.370666 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:40:23.372048 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:40:23.373219 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:40:23.374414 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:40:23.374443 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:40:23.375643 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:40:23.378243 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:40:23.379135 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:40:23.381871 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:40:23.399407 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:40:23.400707 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:40:23.403761 jq[1466]: false
Feb 13 19:40:23.404298 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:40:23.410278 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:40:23.414366 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:40:23.417265 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:40:23.418714 extend-filesystems[1467]: Found loop3
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found loop4
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found loop5
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found sr0
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda1
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda2
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda3
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found usr
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda4
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda6
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda7
Feb 13 19:40:23.419985 extend-filesystems[1467]: Found vda9
Feb 13 19:40:23.419985 extend-filesystems[1467]: Checking size of /dev/vda9
Feb 13 19:40:23.422898 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:40:23.423751 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:40:23.424211 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:40:23.425292 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:40:23.429344 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:40:23.431427 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:40:23.434989 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:40:23.436360 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:40:23.445639 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:40:23.445891 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:40:23.447670 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:40:23.449194 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:40:23.455596 dbus-daemon[1465]: [system] SELinux support is enabled
Feb 13 19:40:23.460547 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:40:23.463394 jq[1481]: true
Feb 13 19:40:23.473878 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1382)
Feb 13 19:40:23.473795 (ntainerd)[1486]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:40:23.474283 extend-filesystems[1467]: Resized partition /dev/vda9
Feb 13 19:40:23.477055 extend-filesystems[1496]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:40:23.486078 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:40:23.486117 jq[1494]: true
Feb 13 19:40:23.486341 update_engine[1478]: I20250213 19:40:23.485396 1478 main.cc:92] Flatcar Update Engine starting
Feb 13 19:40:23.488185 update_engine[1478]: I20250213 19:40:23.487978 1478 update_check_scheduler.cc:74] Next update check in 4m46s
Feb 13 19:40:23.504968 tar[1484]: linux-amd64/LICENSE
Feb 13 19:40:23.504968 tar[1484]: linux-amd64/helm
Feb 13 19:40:23.512436 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:40:23.518230 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:40:23.518418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:40:23.520018 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:40:23.520042 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:40:23.521441 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:40:23.521464 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:40:23.531573 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:40:23.549354 systemd-logind[1476]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:40:23.549376 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:40:23.549700 systemd-logind[1476]: New seat seat0. Feb 13 19:40:23.552600 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:40:23.554921 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:40:23.555247 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:40:23.558541 extend-filesystems[1496]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:40:23.558541 extend-filesystems[1496]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:40:23.558541 extend-filesystems[1496]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:40:23.623740 sshd_keygen[1495]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:40:23.623852 extend-filesystems[1467]: Resized filesystem in /dev/vda9 Feb 13 19:40:23.637782 locksmithd[1506]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:40:23.653655 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:40:23.691692 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:40:23.696524 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:47086.service - OpenSSH per-connection server daemon (10.0.0.1:47086). Feb 13 19:40:23.700935 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:40:23.701353 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:40:23.714706 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:40:23.737815 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:40:23.749604 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:40:23.765480 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:40:23.766966 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:40:23.796588 sshd[1537]: Connection closed by authenticating user core 10.0.0.1 port 47086 [preauth] Feb 13 19:40:23.799128 systemd[1]: sshd@0-10.0.0.96:22-10.0.0.1:47086.service: Deactivated successfully.
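The extend-filesystems unit above grows the root ext4 filesystem while it is mounted; for growth, resize2fs supports this on-line, which is why the log shows "on-line resizing required" followed by the new block count rather than an unmount. A hedged sketch of the same operation from Go (the device path comes from the log; this simply shells out to resize2fs):

```go
// grow_fs.go - a minimal sketch of the on-line grow performed by
// extend-filesystems above. Requires root; /dev/vda9 is taken from the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// With no explicit size argument, resize2fs grows the filesystem to
	// fill its device. On a mounted ext4 filesystem this is an on-line
	// resize (growing only; shrinking still requires an unmount).
	out, err := exec.Command("resize2fs", "/dev/vda9").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		log.Fatalf("resize2fs: %v", err)
	}
}
```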
Feb 13 19:40:23.917626 containerd[1486]: time="2025-02-13T19:40:23.917536102Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:40:23.942529 containerd[1486]: time="2025-02-13T19:40:23.942489769Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.944755 containerd[1486]: time="2025-02-13T19:40:23.944656362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:40:23.944755 containerd[1486]: time="2025-02-13T19:40:23.944707658Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:40:23.944755 containerd[1486]: time="2025-02-13T19:40:23.944728868Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:40:23.944978 containerd[1486]: time="2025-02-13T19:40:23.944959931Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:40:23.945017 containerd[1486]: time="2025-02-13T19:40:23.944980230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945108 containerd[1486]: time="2025-02-13T19:40:23.945059819Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945108 containerd[1486]: time="2025-02-13T19:40:23.945114381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945378 containerd[1486]: time="2025-02-13T19:40:23.945350223Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945378 containerd[1486]: time="2025-02-13T19:40:23.945369540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945445 containerd[1486]: time="2025-02-13T19:40:23.945382284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945445 containerd[1486]: time="2025-02-13T19:40:23.945392473Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945522 containerd[1486]: time="2025-02-13T19:40:23.945497329Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945797 containerd[1486]: time="2025-02-13T19:40:23.945771353Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945927 containerd[1486]: time="2025-02-13T19:40:23.945895426Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:40:23.945927 containerd[1486]: time="2025-02-13T19:40:23.945921064Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:40:23.946049 containerd[1486]: time="2025-02-13T19:40:23.946028195Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:40:23.946114 containerd[1486]: time="2025-02-13T19:40:23.946094399Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:40:23.966077 bash[1520]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:40:23.968628 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:40:23.970771 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:40:24.265690 containerd[1486]: time="2025-02-13T19:40:24.265534594Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:40:24.265690 containerd[1486]: time="2025-02-13T19:40:24.265619284Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:40:24.265690 containerd[1486]: time="2025-02-13T19:40:24.265643238Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:40:24.265690 containerd[1486]: time="2025-02-13T19:40:24.265661327Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:40:24.265690 containerd[1486]: time="2025-02-13T19:40:24.265674374Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:40:24.266052 containerd[1486]: time="2025-02-13T19:40:24.265933922Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:40:24.266401 containerd[1486]: time="2025-02-13T19:40:24.266343781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:40:24.266526 tar[1484]: linux-amd64/README.md Feb 13 19:40:24.266602 containerd[1486]: time="2025-02-13T19:40:24.266581912Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:40:24.266629 containerd[1486]: time="2025-02-13T19:40:24.266604271Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:40:24.266629 containerd[1486]: time="2025-02-13T19:40:24.266620894Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:40:24.266707 containerd[1486]: time="2025-02-13T19:40:24.266636299Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266707 containerd[1486]: time="2025-02-13T19:40:24.266651168Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266707 containerd[1486]: time="2025-02-13T19:40:24.266663463Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 13 19:40:24.266707 containerd[1486]: time="2025-02-13T19:40:24.266679303Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266707 containerd[1486]: time="2025-02-13T19:40:24.266694489Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266712340Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266727497Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266741168Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266766509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266780735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266793316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266806501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266819023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.266839 containerd[1486]: time="2025-02-13T19:40:24.266833269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266848020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266862889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266875747Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266892549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266912232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266924318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266937038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266951383Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266971770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.266984055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.267004026Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.267063346Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.267081742Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:40:24.267085 containerd[1486]: time="2025-02-13T19:40:24.267093293Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:40:24.267561 containerd[1486]: time="2025-02-13T19:40:24.267105160Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:40:24.267561 containerd[1486]: time="2025-02-13T19:40:24.267114889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:40:24.267561 containerd[1486]: time="2025-02-13T19:40:24.267130888Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:40:24.267561 containerd[1486]: time="2025-02-13T19:40:24.267141181Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:40:24.267561 containerd[1486]: time="2025-02-13T19:40:24.267174268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 13 19:40:24.267660 containerd[1486]: time="2025-02-13T19:40:24.267463309Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:40:24.267660 containerd[1486]: time="2025-02-13T19:40:24.267520815Z" level=info msg="Connect containerd service" Feb 13 19:40:24.267660 containerd[1486]: time="2025-02-13T19:40:24.267555082Z" level=info msg="using legacy CRI server" Feb 13 19:40:24.267660 containerd[1486]: time="2025-02-13T19:40:24.267562551Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:40:24.267914 containerd[1486]: time="2025-02-13T19:40:24.267711207Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:40:24.268404 containerd[1486]: time="2025-02-13T19:40:24.268381923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:40:24.268639 
containerd[1486]: time="2025-02-13T19:40:24.268555206Z" level=info msg="Start subscribing containerd event" Feb 13 19:40:24.268671 containerd[1486]: time="2025-02-13T19:40:24.268656331Z" level=info msg="Start recovering state" Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268745380Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268751532Z" level=info msg="Start event monitor" Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268791019Z" level=info msg="Start snapshots syncer" Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268804086Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268806919Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:40:24.268866 containerd[1486]: time="2025-02-13T19:40:24.268811991Z" level=info msg="Start streaming server" Feb 13 19:40:24.269265 containerd[1486]: time="2025-02-13T19:40:24.269246170Z" level=info msg="containerd successfully booted in 0.353388s" Feb 13 19:40:24.273547 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:40:24.284841 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:40:24.845431 systemd-networkd[1436]: eth0: Gained IPv6LL Feb 13 19:40:24.849032 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:40:24.851068 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:40:24.861508 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:40:24.864397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:24.866807 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:40:24.886729 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:40:24.887121 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:40:24.888824 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:40:24.889548 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:40:26.205048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:26.206688 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:40:26.207957 systemd[1]: Startup finished in 783ms (kernel) + 5.968s (initrd) + 5.548s (userspace) = 12.300s. Feb 13 19:40:26.220649 (kubelet)[1583]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:40:26.223025 agetty[1550]: failed to open credentials directory Feb 13 19:40:26.233108 agetty[1549]: failed to open credentials directory Feb 13 19:40:26.812713 kubelet[1583]: E0213 19:40:26.812631 1583 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:40:26.817057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:40:26.817327 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
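containerd reports a successful boot here, but kubelet exits immediately: it was started before anything wrote /var/lib/kubelet/config.yaml, so systemd will keep restarting it (the restart counters appear further down in this log). A tiny preflight sketch of the same check (the path is taken from the error message; the diagnosis mirrors, rather than reproduces, kubelet's own error handling):

```go
// preflight.go - a minimal sketch of the check kubelet fails above: does
// /var/lib/kubelet/config.yaml exist yet?
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	_, err := os.Stat(path)
	switch {
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("not ready: %s does not exist yet\n", path)
		os.Exit(1)
	case err != nil:
		fmt.Printf("cannot stat %s: %v\n", path, err)
		os.Exit(1)
	default:
		fmt.Printf("%s present; kubelet can load its configuration\n", path)
	}
}
```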
Feb 13 19:40:26.817845 systemd[1]: kubelet.service: Consumed 1.834s CPU time. Feb 13 19:40:33.742321 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:43676.service - OpenSSH per-connection server daemon (10.0.0.1:43676). Feb 13 19:40:33.779941 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43676 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:40:33.782066 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:33.792756 systemd-logind[1476]: New session 1 of user core. Feb 13 19:40:33.794501 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:40:33.803435 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:40:33.814989 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:40:33.829439 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:40:33.832344 (systemd)[1600]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:40:33.937621 systemd[1600]: Queued start job for default target default.target. Feb 13 19:40:33.949426 systemd[1600]: Created slice app.slice - User Application Slice. Feb 13 19:40:33.949452 systemd[1600]: Reached target paths.target - Paths. Feb 13 19:40:33.949466 systemd[1600]: Reached target timers.target - Timers. Feb 13 19:40:33.951065 systemd[1600]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:40:33.964981 systemd[1600]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:40:33.965121 systemd[1600]: Reached target sockets.target - Sockets. Feb 13 19:40:33.965147 systemd[1600]: Reached target basic.target - Basic System. Feb 13 19:40:33.965206 systemd[1600]: Reached target default.target - Main User Target. Feb 13 19:40:33.965239 systemd[1600]: Startup finished in 126ms. Feb 13 19:40:33.965615 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:40:33.967127 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:40:34.029689 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:43684.service - OpenSSH per-connection server daemon (10.0.0.1:43684). Feb 13 19:40:34.082535 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 43684 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:40:34.084334 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:34.089149 systemd-logind[1476]: New session 2 of user core. Feb 13 19:40:34.107351 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:40:34.161823 sshd[1613]: Connection closed by 10.0.0.1 port 43684 Feb 13 19:40:34.162318 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:34.179706 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:43684.service: Deactivated successfully. Feb 13 19:40:34.181999 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:40:34.183767 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:40:34.197457 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:43688.service - OpenSSH per-connection server daemon (10.0.0.1:43688). Feb 13 19:40:34.198517 systemd-logind[1476]: Removed session 2. 
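Each sshd@N-...service instance above is a per-connection daemon, and the pam_unix "session opened"/"session closed" pairs bracket one SSH session for the core user. A sketch of the client side that produces this pattern, using golang.org/x/crypto/ssh (the address and user come from the log; the key path is an assumption, and a real client should verify host keys rather than ignore them):

```go
// ssh_session.go - a minimal sketch of a publickey login like the sessions
// logged above. The key path is an illustrative assumption.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User: "core",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Demo only: never skip host key verification in real code.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "10.0.0.96:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// One NewSession corresponds to one session-N.scope in the log.
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uptime")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(out))
}
```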
Feb 13 19:40:34.236377 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 43688 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:40:34.238550 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:34.242846 systemd-logind[1476]: New session 3 of user core. Feb 13 19:40:34.258439 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:40:34.307750 sshd[1620]: Connection closed by 10.0.0.1 port 43688 Feb 13 19:40:34.308369 sshd-session[1618]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:34.321233 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:43688.service: Deactivated successfully. Feb 13 19:40:34.323000 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:40:34.324330 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:40:34.325545 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:43696.service - OpenSSH per-connection server daemon (10.0.0.1:43696). Feb 13 19:40:34.326248 systemd-logind[1476]: Removed session 3. Feb 13 19:40:34.362335 sshd[1625]: Accepted publickey for core from 10.0.0.1 port 43696 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:40:34.363727 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:34.367795 systemd-logind[1476]: New session 4 of user core. Feb 13 19:40:34.382270 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:40:34.436077 sshd[1627]: Connection closed by 10.0.0.1 port 43696 Feb 13 19:40:34.436427 sshd-session[1625]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:34.453813 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:43696.service: Deactivated successfully. Feb 13 19:40:34.456404 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:40:34.458461 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:40:34.460000 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:43708.service - OpenSSH per-connection server daemon (10.0.0.1:43708). Feb 13 19:40:34.460878 systemd-logind[1476]: Removed session 4. Feb 13 19:40:34.499637 sshd[1632]: Accepted publickey for core from 10.0.0.1 port 43708 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:40:34.501374 sshd-session[1632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:40:34.505986 systemd-logind[1476]: New session 5 of user core. Feb 13 19:40:34.523313 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:40:34.748074 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:40:34.748418 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:40:35.016364 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:40:35.016481 (dockerd)[1655]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:40:35.259767 dockerd[1655]: time="2025-02-13T19:40:35.259696424Z" level=info msg="Starting up" Feb 13 19:40:35.426769 dockerd[1655]: time="2025-02-13T19:40:35.426662120Z" level=info msg="Loading containers: start." 
Feb 13 19:40:35.604185 kernel: Initializing XFRM netlink socket Feb 13 19:40:35.680917 systemd-networkd[1436]: docker0: Link UP Feb 13 19:40:35.720558 dockerd[1655]: time="2025-02-13T19:40:35.720501698Z" level=info msg="Loading containers: done." Feb 13 19:40:35.734090 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4001646483-merged.mount: Deactivated successfully. Feb 13 19:40:35.736421 dockerd[1655]: time="2025-02-13T19:40:35.736381898Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:40:35.736506 dockerd[1655]: time="2025-02-13T19:40:35.736483725Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:40:35.736621 dockerd[1655]: time="2025-02-13T19:40:35.736599154Z" level=info msg="Daemon has completed initialization" Feb 13 19:40:35.772466 dockerd[1655]: time="2025-02-13T19:40:35.772393720Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:40:35.772636 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:40:36.262988 containerd[1486]: time="2025-02-13T19:40:36.262948019Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:40:36.955566 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:40:36.970309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:37.171093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:37.175394 (kubelet)[1863]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:40:37.254119 kubelet[1863]: E0213 19:40:37.253998 1863 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:40:37.261131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2317069583.mount: Deactivated successfully. Feb 13 19:40:37.261851 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:40:37.262031 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
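A few entries up, dockerd reports "API listen on /run/docker.sock", so the engine API is now reachable over that unix socket even while kubelet is still crash-looping. A sketch that confirms this with the Docker Go SDK (assuming the github.com/docker/docker/client module; version negotiation avoids hard-coding an API version):

```go
// docker_ping.go - a minimal sketch that pings the engine once it logs
// "API listen on /run/docker.sock" above.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(
		client.FromEnv, // honors DOCKER_HOST; defaults to the unix socket
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ping, err := cli.Ping(context.Background())
	if err != nil {
		log.Fatalf("daemon not reachable: %v", err)
	}
	fmt.Printf("daemon up, negotiated API version %s\n", ping.APIVersion)
}
```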
Feb 13 19:40:38.560337 containerd[1486]: time="2025-02-13T19:40:38.560266041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:38.561171 containerd[1486]: time="2025-02-13T19:40:38.561072529Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:40:38.562585 containerd[1486]: time="2025-02-13T19:40:38.562533053Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:38.565249 containerd[1486]: time="2025-02-13T19:40:38.565208275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:38.566224 containerd[1486]: time="2025-02-13T19:40:38.566195089Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 2.303206463s" Feb 13 19:40:38.566270 containerd[1486]: time="2025-02-13T19:40:38.566229384Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:40:38.566808 containerd[1486]: time="2025-02-13T19:40:38.566779295Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:40:39.683182 containerd[1486]: time="2025-02-13T19:40:39.683120468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:39.684020 containerd[1486]: time="2025-02-13T19:40:39.683980920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:40:39.685457 containerd[1486]: time="2025-02-13T19:40:39.685409529Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:39.687891 containerd[1486]: time="2025-02-13T19:40:39.687833978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:39.689051 containerd[1486]: time="2025-02-13T19:40:39.689000151Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.122180098s" Feb 13 19:40:39.689051 containerd[1486]: time="2025-02-13T19:40:39.689044668Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:40:39.689630 
containerd[1486]: time="2025-02-13T19:40:39.689587695Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:40:41.012887 containerd[1486]: time="2025-02-13T19:40:41.012812015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:41.013754 containerd[1486]: time="2025-02-13T19:40:41.013711840Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:40:41.014929 containerd[1486]: time="2025-02-13T19:40:41.014895990Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:41.017538 containerd[1486]: time="2025-02-13T19:40:41.017510556Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:41.018442 containerd[1486]: time="2025-02-13T19:40:41.018417938Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 1.328805432s" Feb 13 19:40:41.018489 containerd[1486]: time="2025-02-13T19:40:41.018446058Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:40:41.018886 containerd[1486]: time="2025-02-13T19:40:41.018860496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:40:41.941197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2524861727.mount: Deactivated successfully. 
Feb 13 19:40:42.210585 containerd[1486]: time="2025-02-13T19:40:42.210533620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:42.211476 containerd[1486]: time="2025-02-13T19:40:42.211406914Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:40:42.212752 containerd[1486]: time="2025-02-13T19:40:42.212719546Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:42.215036 containerd[1486]: time="2025-02-13T19:40:42.215003396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:42.215618 containerd[1486]: time="2025-02-13T19:40:42.215587129Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 1.196702425s" Feb 13 19:40:42.215645 containerd[1486]: time="2025-02-13T19:40:42.215616725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:40:42.216202 containerd[1486]: time="2025-02-13T19:40:42.216123502Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:40:42.759869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2866774311.mount: Deactivated successfully. 
Feb 13 19:40:43.945371 containerd[1486]: time="2025-02-13T19:40:43.945268717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:43.946556 containerd[1486]: time="2025-02-13T19:40:43.946495011Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:40:43.948350 containerd[1486]: time="2025-02-13T19:40:43.948272829Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:43.951663 containerd[1486]: time="2025-02-13T19:40:43.951632042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:43.952761 containerd[1486]: time="2025-02-13T19:40:43.952703203Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.736500291s" Feb 13 19:40:43.952761 containerd[1486]: time="2025-02-13T19:40:43.952756846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:40:43.953279 containerd[1486]: time="2025-02-13T19:40:43.953242727Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:40:44.471500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316232062.mount: Deactivated successfully. 
Feb 13 19:40:44.478406 containerd[1486]: time="2025-02-13T19:40:44.478355389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:44.479522 containerd[1486]: time="2025-02-13T19:40:44.479480006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:40:44.480817 containerd[1486]: time="2025-02-13T19:40:44.480780326Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:44.483532 containerd[1486]: time="2025-02-13T19:40:44.483502852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:44.484372 containerd[1486]: time="2025-02-13T19:40:44.484343687Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.07093ms" Feb 13 19:40:44.484416 containerd[1486]: time="2025-02-13T19:40:44.484378635Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:40:44.484833 containerd[1486]: time="2025-02-13T19:40:44.484808178Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:40:45.044640 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745110508.mount: Deactivated successfully. Feb 13 19:40:46.705484 containerd[1486]: time="2025-02-13T19:40:46.705412185Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:46.706186 containerd[1486]: time="2025-02-13T19:40:46.706142411Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:40:46.707634 containerd[1486]: time="2025-02-13T19:40:46.707593621Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:46.710514 containerd[1486]: time="2025-02-13T19:40:46.710471980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:40:46.711671 containerd[1486]: time="2025-02-13T19:40:46.711642441Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.22672223s" Feb 13 19:40:46.711711 containerd[1486]: time="2025-02-13T19:40:46.711671108Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:40:47.455601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Feb 13 19:40:47.463309 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:47.608393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:47.612622 (kubelet)[2080]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:40:47.650873 kubelet[2080]: E0213 19:40:47.650751 2080 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:40:47.655479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:40:47.655685 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:40:48.864470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:48.874394 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:48.899317 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-5.scope)... Feb 13 19:40:48.899333 systemd[1]: Reloading... Feb 13 19:40:48.978185 zram_generator::config[2135]: No configuration found. Feb 13 19:40:49.398240 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:40:49.475397 systemd[1]: Reloading finished in 575 ms. Feb 13 19:40:49.524002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:49.527213 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:40:49.527457 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:49.529027 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:49.691034 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:49.695740 (kubelet)[2185]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:40:49.732354 kubelet[2185]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:40:49.732354 kubelet[2185]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:40:49.732354 kubelet[2185]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
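The three deprecation warnings above all point the same way: flags such as --container-runtime-endpoint are meant to move into the kubelet config file, the very file whose absence has been failing the unit. A sketch that writes a minimal /var/lib/kubelet/config.yaml (the contents are an illustrative assumption, not this node's actual settings; cgroupDriver: systemd matches the SystemdCgroup:true runc option in the containerd CRI config earlier, and staticPodPath matches the path kubelet logs below):

```go
// write_kubelet_config.go - a minimal sketch that creates the config file
// whose absence caused the earlier kubelet failures. Contents are an
// illustrative minimal KubeletConfiguration, not the node's real settings.
package main

import (
	"log"
	"os"
)

const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # agree with the runtime's cgroup driver
staticPodPath: /etc/kubernetes/manifests
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml",
		[]byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}
```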
Feb 13 19:40:49.732880 kubelet[2185]: I0213 19:40:49.732399 2185 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:40:49.927543 kubelet[2185]: I0213 19:40:49.926188 2185 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:40:49.927543 kubelet[2185]: I0213 19:40:49.926245 2185 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:40:49.927543 kubelet[2185]: I0213 19:40:49.926811 2185 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:40:49.951974 kubelet[2185]: E0213 19:40:49.951828 2185 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:49.952209 kubelet[2185]: I0213 19:40:49.952143 2185 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:40:49.960436 kubelet[2185]: E0213 19:40:49.960395 2185 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:40:49.960436 kubelet[2185]: I0213 19:40:49.960436 2185 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:40:49.966925 kubelet[2185]: I0213 19:40:49.966891 2185 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:40:49.967721 kubelet[2185]: I0213 19:40:49.967670 2185 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:40:49.967939 kubelet[2185]: I0213 19:40:49.967706 2185 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:40:49.968027 kubelet[2185]: I0213 19:40:49.967946 2185 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:40:49.968027 kubelet[2185]: I0213 19:40:49.967956 2185 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:40:49.968130 kubelet[2185]: I0213 19:40:49.968107 2185 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:49.970841 kubelet[2185]: I0213 19:40:49.970812 2185 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:40:49.970841 kubelet[2185]: I0213 19:40:49.970829 2185 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:40:49.970922 kubelet[2185]: I0213 19:40:49.970853 2185 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:40:49.970922 kubelet[2185]: I0213 19:40:49.970865 2185 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:40:49.975809 kubelet[2185]: I0213 19:40:49.975608 2185 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:40:49.976249 kubelet[2185]: I0213 19:40:49.976022 2185 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:40:49.976511 kubelet[2185]: W0213 19:40:49.976484 2185 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
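The container manager dump above records CgroupDriver "systemd" and CgroupVersion 2, which has to agree with the runtime side (containerd was started with SystemdCgroup:true for runc). A small sketch of the conventional way to confirm a unified cgroup v2 hierarchy from code (standard mount point, no library assumed):

```go
// cgroup_check.go - a minimal sketch that detects the unified cgroup v2
// hierarchy reflected by "CgroupVersion":2 in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// This file exists only on a unified (v2) hierarchy and lists the
	// controllers available at the root.
	data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
	if err != nil {
		fmt.Println("legacy or hybrid cgroup hierarchy (v1): cgroup.controllers not found")
		return
	}
	fmt.Printf("cgroup v2, root controllers: %s\n", strings.TrimSpace(string(data)))
}
```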
Feb 13 19:40:49.978765 kubelet[2185]: I0213 19:40:49.978747 2185 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:40:49.978816 kubelet[2185]: I0213 19:40:49.978781 2185 server.go:1287] "Started kubelet" Feb 13 19:40:49.980735 kubelet[2185]: W0213 19:40:49.980681 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:49.980793 kubelet[2185]: E0213 19:40:49.980749 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:49.980793 kubelet[2185]: W0213 19:40:49.980729 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:49.980867 kubelet[2185]: E0213 19:40:49.980805 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:49.980991 kubelet[2185]: I0213 19:40:49.980957 2185 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:40:49.981894 kubelet[2185]: I0213 19:40:49.981832 2185 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:40:49.982954 kubelet[2185]: I0213 19:40:49.982914 2185 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:40:49.984078 kubelet[2185]: I0213 19:40:49.983744 2185 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:40:49.984078 kubelet[2185]: I0213 19:40:49.984021 2185 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:40:49.984146 kubelet[2185]: E0213 19:40:49.983207 2185 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dbe3346d02b4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:40:49.978761908 +0000 UTC m=+0.279084776,LastTimestamp:2025-02-13 19:40:49.978761908 +0000 UTC m=+0.279084776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:40:49.984270 kubelet[2185]: I0213 19:40:49.984251 2185 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:40:49.985184 kubelet[2185]: E0213 19:40:49.985012 2185 
kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:49.985184 kubelet[2185]: I0213 19:40:49.985046 2185 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:40:49.985419 kubelet[2185]: I0213 19:40:49.985389 2185 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:40:49.985800 kubelet[2185]: W0213 19:40:49.985755 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:49.985836 kubelet[2185]: E0213 19:40:49.985802 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:49.986009 kubelet[2185]: I0213 19:40:49.985974 2185 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:40:49.986072 kubelet[2185]: E0213 19:40:49.986035 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms" Feb 13 19:40:49.986547 kubelet[2185]: I0213 19:40:49.986523 2185 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:40:49.986668 kubelet[2185]: I0213 19:40:49.986618 2185 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:40:49.986817 kubelet[2185]: E0213 19:40:49.986802 2185 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:40:49.987392 kubelet[2185]: I0213 19:40:49.987371 2185 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:40:50.001026 kubelet[2185]: I0213 19:40:50.000963 2185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:40:50.002019 kubelet[2185]: I0213 19:40:50.001265 2185 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:40:50.002019 kubelet[2185]: I0213 19:40:50.001279 2185 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:40:50.002019 kubelet[2185]: I0213 19:40:50.001295 2185 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:50.002413 kubelet[2185]: I0213 19:40:50.002389 2185 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:40:50.002413 kubelet[2185]: I0213 19:40:50.002414 2185 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:40:50.002465 kubelet[2185]: I0213 19:40:50.002436 2185 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
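Every reflector, lease, and event error in this stretch is the same underlying symptom: nothing is listening on 10.0.0.96:6443 yet, because the kube-apiserver static pod has not been created, and kubelet is built to retry until it appears. A sketch of the equivalent reachability probe (address taken from the log):

```go
// probe_apiserver.go - a minimal sketch of the connectivity check behind
// the "connection refused" errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.96:6443", 2*time.Second)
	if err != nil {
		// Until the apiserver static pod is up this prints
		// "connect: connection refused", matching the kubelet log lines.
		fmt.Printf("apiserver not reachable: %v\n", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```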
Feb 13 19:40:50.002465 kubelet[2185]: I0213 19:40:50.002443 2185 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:40:50.002807 kubelet[2185]: E0213 19:40:50.002486 2185 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:40:50.003353 kubelet[2185]: W0213 19:40:50.003304 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:50.003451 kubelet[2185]: E0213 19:40:50.003357 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:50.085477 kubelet[2185]: E0213 19:40:50.085419 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:50.102773 kubelet[2185]: E0213 19:40:50.102727 2185 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:40:50.186034 kubelet[2185]: E0213 19:40:50.185985 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:50.187505 kubelet[2185]: E0213 19:40:50.187470 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Feb 13 19:40:50.286247 kubelet[2185]: E0213 19:40:50.286198 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:50.302594 kubelet[2185]: I0213 19:40:50.302548 2185 policy_none.go:49] "None policy: Start" Feb 13 19:40:50.302594 kubelet[2185]: I0213 19:40:50.302575 2185 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:40:50.302594 kubelet[2185]: I0213 19:40:50.302600 2185 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:40:50.303540 kubelet[2185]: E0213 19:40:50.303495 2185 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:40:50.309849 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:40:50.328008 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:40:50.331139 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Feb 13 19:40:50.341151 kubelet[2185]: I0213 19:40:50.341105 2185 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:40:50.341378 kubelet[2185]: I0213 19:40:50.341356 2185 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:40:50.341420 kubelet[2185]: I0213 19:40:50.341379 2185 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:40:50.341666 kubelet[2185]: I0213 19:40:50.341619 2185 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:40:50.342589 kubelet[2185]: E0213 19:40:50.342549 2185 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:40:50.342713 kubelet[2185]: E0213 19:40:50.342600 2185 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:40:50.443613 kubelet[2185]: I0213 19:40:50.443578 2185 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:50.444120 kubelet[2185]: E0213 19:40:50.444078 2185 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Feb 13 19:40:50.588911 kubelet[2185]: E0213 19:40:50.588753 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Feb 13 19:40:50.646016 kubelet[2185]: I0213 19:40:50.645965 2185 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:50.646437 kubelet[2185]: E0213 19:40:50.646386 2185 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Feb 13 19:40:50.713401 systemd[1]: Created slice kubepods-burstable-pod7480f51b877e6f50acc9357c742c38c1.slice - libcontainer container kubepods-burstable-pod7480f51b877e6f50acc9357c742c38c1.slice. Feb 13 19:40:50.726508 kubelet[2185]: E0213 19:40:50.726453 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:50.729022 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:40:50.737856 kubelet[2185]: E0213 19:40:50.737827 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:50.741241 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 19:40:50.742953 kubelet[2185]: E0213 19:40:50.742921 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:50.791476 kubelet[2185]: I0213 19:40:50.791426 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:50.791663 kubelet[2185]: I0213 19:40:50.791516 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:50.791663 kubelet[2185]: I0213 19:40:50.791539 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:50.791663 kubelet[2185]: I0213 19:40:50.791563 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:50.791663 kubelet[2185]: I0213 19:40:50.791585 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:40:50.791663 kubelet[2185]: I0213 19:40:50.791602 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:50.791796 kubelet[2185]: I0213 19:40:50.791620 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:50.791796 kubelet[2185]: I0213 19:40:50.791635 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:50.791796 kubelet[2185]: I0213 19:40:50.791654 2185 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:50.810341 kubelet[2185]: W0213 19:40:50.810303 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:50.810439 kubelet[2185]: E0213 19:40:50.810344 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:50.939063 kubelet[2185]: W0213 19:40:50.938904 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:50.939063 kubelet[2185]: E0213 19:40:50.938962 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:51.027719 kubelet[2185]: E0213 19:40:51.027642 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:51.028708 containerd[1486]: time="2025-02-13T19:40:51.028661044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7480f51b877e6f50acc9357c742c38c1,Namespace:kube-system,Attempt:0,}" Feb 13 19:40:51.032376 kubelet[2185]: W0213 19:40:51.032337 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:51.032455 kubelet[2185]: E0213 19:40:51.032392 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:51.038943 kubelet[2185]: E0213 19:40:51.038890 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:51.039672 containerd[1486]: time="2025-02-13T19:40:51.039619040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:40:51.044080 kubelet[2185]: E0213 19:40:51.044038 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 
19:40:51.044591 containerd[1486]: time="2025-02-13T19:40:51.044554865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:40:51.050264 kubelet[2185]: I0213 19:40:51.050233 2185 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:51.050646 kubelet[2185]: E0213 19:40:51.050602 2185 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Feb 13 19:40:51.390120 kubelet[2185]: E0213 19:40:51.390048 2185 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s" Feb 13 19:40:51.560113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987114325.mount: Deactivated successfully. Feb 13 19:40:51.567208 containerd[1486]: time="2025-02-13T19:40:51.567103684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:51.570118 containerd[1486]: time="2025-02-13T19:40:51.570070957Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:40:51.571166 containerd[1486]: time="2025-02-13T19:40:51.571107094Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:51.572870 containerd[1486]: time="2025-02-13T19:40:51.572828799Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:51.573595 containerd[1486]: time="2025-02-13T19:40:51.573549992Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:40:51.574672 containerd[1486]: time="2025-02-13T19:40:51.574631270Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:51.575555 containerd[1486]: time="2025-02-13T19:40:51.575521610Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:40:51.576391 containerd[1486]: time="2025-02-13T19:40:51.576353126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:40:51.577217 containerd[1486]: time="2025-02-13T19:40:51.577177291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 548.401426ms" Feb 13 19:40:51.580177 containerd[1486]: time="2025-02-13T19:40:51.580114686Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 540.385172ms" Feb 13 19:40:51.582219 kubelet[2185]: W0213 19:40:51.582148 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:51.582273 kubelet[2185]: E0213 19:40:51.582235 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:51.582489 containerd[1486]: time="2025-02-13T19:40:51.582463456Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.810898ms" Feb 13 19:40:51.710273 containerd[1486]: time="2025-02-13T19:40:51.710044414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:40:51.710273 containerd[1486]: time="2025-02-13T19:40:51.710143951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:40:51.710273 containerd[1486]: time="2025-02-13T19:40:51.710184594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.710551 containerd[1486]: time="2025-02-13T19:40:51.710306165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.711513 containerd[1486]: time="2025-02-13T19:40:51.711372701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:40:51.711513 containerd[1486]: time="2025-02-13T19:40:51.711447588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:40:51.711513 containerd[1486]: time="2025-02-13T19:40:51.711482614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.712351 containerd[1486]: time="2025-02-13T19:40:51.711595400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.713423 containerd[1486]: time="2025-02-13T19:40:51.713285446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:40:51.713423 containerd[1486]: time="2025-02-13T19:40:51.713332400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:40:51.713423 containerd[1486]: time="2025-02-13T19:40:51.713346051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.713647 containerd[1486]: time="2025-02-13T19:40:51.713420709Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:40:51.743362 systemd[1]: Started cri-containerd-ae36096a1038590d8252296635471ea15f3f222031f4723f9e7231d77178e193.scope - libcontainer container ae36096a1038590d8252296635471ea15f3f222031f4723f9e7231d77178e193. Feb 13 19:40:51.748071 systemd[1]: Started cri-containerd-07e0cb7acab1a1f59b9fe78c2e35c976629ec1f238f3573fcb9673cf766dfd72.scope - libcontainer container 07e0cb7acab1a1f59b9fe78c2e35c976629ec1f238f3573fcb9673cf766dfd72. Feb 13 19:40:51.750049 systemd[1]: Started cri-containerd-6e15b4a65ba86319494916685dc16f71cf05480c2004d34237bf7ceef1751c44.scope - libcontainer container 6e15b4a65ba86319494916685dc16f71cf05480c2004d34237bf7ceef1751c44. Feb 13 19:40:51.789097 containerd[1486]: time="2025-02-13T19:40:51.788967949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae36096a1038590d8252296635471ea15f3f222031f4723f9e7231d77178e193\"" Feb 13 19:40:51.792124 containerd[1486]: time="2025-02-13T19:40:51.792057883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e15b4a65ba86319494916685dc16f71cf05480c2004d34237bf7ceef1751c44\"" Feb 13 19:40:51.794634 kubelet[2185]: E0213 19:40:51.794573 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:51.795984 kubelet[2185]: E0213 19:40:51.795853 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:51.797432 containerd[1486]: time="2025-02-13T19:40:51.797262901Z" level=info msg="CreateContainer within sandbox \"ae36096a1038590d8252296635471ea15f3f222031f4723f9e7231d77178e193\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:40:51.798694 containerd[1486]: time="2025-02-13T19:40:51.798665844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7480f51b877e6f50acc9357c742c38c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"07e0cb7acab1a1f59b9fe78c2e35c976629ec1f238f3573fcb9673cf766dfd72\"" Feb 13 19:40:51.798744 containerd[1486]: time="2025-02-13T19:40:51.798679015Z" level=info msg="CreateContainer within sandbox \"6e15b4a65ba86319494916685dc16f71cf05480c2004d34237bf7ceef1751c44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:40:51.799501 kubelet[2185]: E0213 19:40:51.799463 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:51.801699 containerd[1486]: time="2025-02-13T19:40:51.801568304Z" level=info msg="CreateContainer within sandbox \"07e0cb7acab1a1f59b9fe78c2e35c976629ec1f238f3573fcb9673cf766dfd72\" for 
container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:40:51.852262 kubelet[2185]: I0213 19:40:51.852220 2185 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:51.852614 kubelet[2185]: E0213 19:40:51.852577 2185 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Feb 13 19:40:52.046332 kubelet[2185]: E0213 19:40:52.046199 2185 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:52.665298 containerd[1486]: time="2025-02-13T19:40:52.665221560Z" level=info msg="CreateContainer within sandbox \"07e0cb7acab1a1f59b9fe78c2e35c976629ec1f238f3573fcb9673cf766dfd72\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c15b0977a1fc872d513385a2772cf134fb973dbe99fa584e5d487c6fce2d65ad\"" Feb 13 19:40:52.666064 containerd[1486]: time="2025-02-13T19:40:52.666020155Z" level=info msg="StartContainer for \"c15b0977a1fc872d513385a2772cf134fb973dbe99fa584e5d487c6fce2d65ad\"" Feb 13 19:40:52.668283 containerd[1486]: time="2025-02-13T19:40:52.668232445Z" level=info msg="CreateContainer within sandbox \"6e15b4a65ba86319494916685dc16f71cf05480c2004d34237bf7ceef1751c44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf1d4ad226856b514e40d0c0344fe6af756bba9cdff1f394a51166da17cd4c38\"" Feb 13 19:40:52.668669 containerd[1486]: time="2025-02-13T19:40:52.668633941Z" level=info msg="StartContainer for \"bf1d4ad226856b514e40d0c0344fe6af756bba9cdff1f394a51166da17cd4c38\"" Feb 13 19:40:52.669979 containerd[1486]: time="2025-02-13T19:40:52.669923626Z" level=info msg="CreateContainer within sandbox \"ae36096a1038590d8252296635471ea15f3f222031f4723f9e7231d77178e193\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"157312493689843273aefea9b5bd8452fa26f0339354f1df1d5eecdc7918d22d\"" Feb 13 19:40:52.670475 containerd[1486]: time="2025-02-13T19:40:52.670447278Z" level=info msg="StartContainer for \"157312493689843273aefea9b5bd8452fa26f0339354f1df1d5eecdc7918d22d\"" Feb 13 19:40:52.695430 kubelet[2185]: W0213 19:40:52.695364 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:52.695430 kubelet[2185]: E0213 19:40:52.695426 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:52.700327 systemd[1]: Started cri-containerd-c15b0977a1fc872d513385a2772cf134fb973dbe99fa584e5d487c6fce2d65ad.scope - libcontainer container c15b0977a1fc872d513385a2772cf134fb973dbe99fa584e5d487c6fce2d65ad. 
Feb 13 19:40:52.709300 systemd[1]: Started cri-containerd-157312493689843273aefea9b5bd8452fa26f0339354f1df1d5eecdc7918d22d.scope - libcontainer container 157312493689843273aefea9b5bd8452fa26f0339354f1df1d5eecdc7918d22d. Feb 13 19:40:52.711011 systemd[1]: Started cri-containerd-bf1d4ad226856b514e40d0c0344fe6af756bba9cdff1f394a51166da17cd4c38.scope - libcontainer container bf1d4ad226856b514e40d0c0344fe6af756bba9cdff1f394a51166da17cd4c38. Feb 13 19:40:52.730758 kubelet[2185]: W0213 19:40:52.730691 2185 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.96:6443: connect: connection refused Feb 13 19:40:52.730758 kubelet[2185]: E0213 19:40:52.730757 2185 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:40:52.749725 containerd[1486]: time="2025-02-13T19:40:52.749144351Z" level=info msg="StartContainer for \"c15b0977a1fc872d513385a2772cf134fb973dbe99fa584e5d487c6fce2d65ad\" returns successfully" Feb 13 19:40:52.758276 containerd[1486]: time="2025-02-13T19:40:52.758215197Z" level=info msg="StartContainer for \"157312493689843273aefea9b5bd8452fa26f0339354f1df1d5eecdc7918d22d\" returns successfully" Feb 13 19:40:52.767659 containerd[1486]: time="2025-02-13T19:40:52.767498275Z" level=info msg="StartContainer for \"bf1d4ad226856b514e40d0c0344fe6af756bba9cdff1f394a51166da17cd4c38\" returns successfully" Feb 13 19:40:53.017317 kubelet[2185]: E0213 19:40:53.017274 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:53.017764 kubelet[2185]: E0213 19:40:53.017393 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:53.017764 kubelet[2185]: E0213 19:40:53.017520 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:53.017764 kubelet[2185]: E0213 19:40:53.017596 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:53.019660 kubelet[2185]: E0213 19:40:53.019509 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:53.019660 kubelet[2185]: E0213 19:40:53.019597 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:53.455049 kubelet[2185]: I0213 19:40:53.454933 2185 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:53.882537 kubelet[2185]: E0213 19:40:53.882415 2185 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:40:53.981608 kubelet[2185]: I0213 19:40:53.981342 2185 kubelet_node_status.go:79] 
"Successfully registered node" node="localhost" Feb 13 19:40:53.981608 kubelet[2185]: E0213 19:40:53.981396 2185 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:40:53.984895 kubelet[2185]: E0213 19:40:53.984870 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.021877 kubelet[2185]: E0213 19:40:54.021844 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:54.022259 kubelet[2185]: E0213 19:40:54.021966 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:54.022259 kubelet[2185]: E0213 19:40:54.021968 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:54.022259 kubelet[2185]: E0213 19:40:54.022052 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:54.022431 kubelet[2185]: E0213 19:40:54.022406 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:54.022531 kubelet[2185]: E0213 19:40:54.022511 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:54.085561 kubelet[2185]: E0213 19:40:54.085517 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.186177 kubelet[2185]: E0213 19:40:54.186051 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.286478 kubelet[2185]: E0213 19:40:54.286421 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.387258 kubelet[2185]: E0213 19:40:54.387210 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.488297 kubelet[2185]: E0213 19:40:54.488243 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.589051 kubelet[2185]: E0213 19:40:54.589013 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.689542 kubelet[2185]: E0213 19:40:54.689516 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.790454 kubelet[2185]: E0213 19:40:54.790316 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.891071 kubelet[2185]: E0213 19:40:54.891032 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:54.991298 kubelet[2185]: E0213 19:40:54.991254 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.022692 kubelet[2185]: E0213 
19:40:55.022664 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:55.023064 kubelet[2185]: E0213 19:40:55.022802 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:55.023113 kubelet[2185]: E0213 19:40:55.023095 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:55.023277 kubelet[2185]: E0213 19:40:55.023243 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:55.023311 kubelet[2185]: E0213 19:40:55.023280 2185 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:40:55.023650 kubelet[2185]: E0213 19:40:55.023393 2185 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:55.092123 kubelet[2185]: E0213 19:40:55.091996 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.192660 kubelet[2185]: E0213 19:40:55.192615 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.293131 kubelet[2185]: E0213 19:40:55.293067 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.393536 kubelet[2185]: E0213 19:40:55.393438 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.494646 kubelet[2185]: E0213 19:40:55.494600 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.594851 kubelet[2185]: E0213 19:40:55.594792 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.695812 kubelet[2185]: E0213 19:40:55.695692 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.796282 kubelet[2185]: E0213 19:40:55.796237 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.896951 kubelet[2185]: E0213 19:40:55.896896 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:55.997054 kubelet[2185]: E0213 19:40:55.997010 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:56.098044 kubelet[2185]: E0213 19:40:56.098003 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:56.137100 systemd[1]: Reloading requested from client PID 2465 ('systemctl') (unit session-5.scope)... Feb 13 19:40:56.137117 systemd[1]: Reloading... 
Feb 13 19:40:56.198223 kubelet[2185]: E0213 19:40:56.198122 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:56.217197 zram_generator::config[2507]: No configuration found. Feb 13 19:40:56.298945 kubelet[2185]: E0213 19:40:56.298814 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:56.323717 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:40:56.399179 kubelet[2185]: E0213 19:40:56.399114 2185 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:40:56.414831 systemd[1]: Reloading finished in 277 ms. Feb 13 19:40:56.461818 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:56.489517 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:40:56.489892 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:56.500380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:40:56.651982 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:40:56.657102 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:40:56.696293 kubelet[2549]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:40:56.696293 kubelet[2549]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:40:56.696293 kubelet[2549]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:40:56.696690 kubelet[2549]: I0213 19:40:56.696340 2549 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:40:56.701787 kubelet[2549]: I0213 19:40:56.701763 2549 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:40:56.701787 kubelet[2549]: I0213 19:40:56.701780 2549 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:40:56.701974 kubelet[2549]: I0213 19:40:56.701958 2549 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:40:56.702944 kubelet[2549]: I0213 19:40:56.702927 2549 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:40:56.705690 kubelet[2549]: I0213 19:40:56.705658 2549 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:40:56.709197 kubelet[2549]: E0213 19:40:56.709170 2549 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:40:56.709197 kubelet[2549]: I0213 19:40:56.709195 2549 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:40:56.713515 kubelet[2549]: I0213 19:40:56.713498 2549 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:40:56.713774 kubelet[2549]: I0213 19:40:56.713747 2549 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:40:56.713906 kubelet[2549]: I0213 19:40:56.713769 2549 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:40:56.713984 kubelet[2549]: I0213 19:40:56.713908 2549 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:40:56.713984 kubelet[2549]: I0213 19:40:56.713916 2549 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:40:56.713984 kubelet[2549]: I0213 19:40:56.713952 2549 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:56.714121 kubelet[2549]: I0213 19:40:56.714104 2549 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:40:56.714144 kubelet[2549]: I0213 19:40:56.714121 2549 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:40:56.714144 kubelet[2549]: I0213 19:40:56.714140 2549 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:40:56.714265 kubelet[2549]: I0213 19:40:56.714252 2549 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Feb 13 19:40:56.715026 kubelet[2549]: I0213 19:40:56.714994 2549 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:40:56.715850 kubelet[2549]: I0213 19:40:56.715800 2549 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:40:56.717762 kubelet[2549]: I0213 19:40:56.717113 2549 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:40:56.717762 kubelet[2549]: I0213 19:40:56.717265 2549 server.go:1287] "Started kubelet" Feb 13 19:40:56.717830 kubelet[2549]: I0213 19:40:56.717748 2549 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:40:56.718311 kubelet[2549]: I0213 19:40:56.718146 2549 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:40:56.718618 kubelet[2549]: I0213 19:40:56.718602 2549 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:40:56.718987 kubelet[2549]: I0213 19:40:56.718971 2549 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:40:56.720324 kubelet[2549]: I0213 19:40:56.720305 2549 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:40:56.724259 kubelet[2549]: I0213 19:40:56.723750 2549 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:40:56.724388 kubelet[2549]: I0213 19:40:56.724302 2549 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:40:56.724514 kubelet[2549]: I0213 19:40:56.724496 2549 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:40:56.724825 kubelet[2549]: I0213 19:40:56.724793 2549 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:40:56.727848 kubelet[2549]: I0213 19:40:56.727689 2549 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:40:56.727984 kubelet[2549]: I0213 19:40:56.727943 2549 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:40:56.729687 kubelet[2549]: I0213 19:40:56.729665 2549 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:40:56.734880 kubelet[2549]: E0213 19:40:56.734841 2549 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:40:56.740534 kubelet[2549]: I0213 19:40:56.740491 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:40:56.742082 kubelet[2549]: I0213 19:40:56.742006 2549 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:40:56.742082 kubelet[2549]: I0213 19:40:56.742072 2549 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:40:56.742248 kubelet[2549]: I0213 19:40:56.742100 2549 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Feb 13 19:40:56.742248 kubelet[2549]: I0213 19:40:56.742109 2549 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:40:56.742525 kubelet[2549]: E0213 19:40:56.742286 2549 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:40:56.763077 kubelet[2549]: I0213 19:40:56.763050 2549 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:40:56.763253 kubelet[2549]: I0213 19:40:56.763222 2549 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:40:56.763253 kubelet[2549]: I0213 19:40:56.763249 2549 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:40:56.763419 kubelet[2549]: I0213 19:40:56.763376 2549 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:40:56.763419 kubelet[2549]: I0213 19:40:56.763386 2549 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:40:56.763419 kubelet[2549]: I0213 19:40:56.763414 2549 policy_none.go:49] "None policy: Start" Feb 13 19:40:56.763484 kubelet[2549]: I0213 19:40:56.763428 2549 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:40:56.763484 kubelet[2549]: I0213 19:40:56.763439 2549 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:40:56.763541 kubelet[2549]: I0213 19:40:56.763528 2549 state_mem.go:75] "Updated machine memory state" Feb 13 19:40:56.767033 kubelet[2549]: I0213 19:40:56.766998 2549 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:40:56.767400 kubelet[2549]: I0213 19:40:56.767224 2549 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:40:56.767400 kubelet[2549]: I0213 19:40:56.767242 2549 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:40:56.767476 kubelet[2549]: I0213 19:40:56.767412 2549 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:40:56.768385 kubelet[2549]: E0213 19:40:56.768362 2549 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:40:56.842971 kubelet[2549]: I0213 19:40:56.842918 2549 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:56.842971 kubelet[2549]: I0213 19:40:56.842957 2549 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:40:56.843130 kubelet[2549]: I0213 19:40:56.843092 2549 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:56.873411 kubelet[2549]: I0213 19:40:56.873379 2549 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:40:56.879232 kubelet[2549]: I0213 19:40:56.879206 2549 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Feb 13 19:40:56.879294 kubelet[2549]: I0213 19:40:56.879269 2549 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:40:56.926540 kubelet[2549]: I0213 19:40:56.926409 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:56.926540 kubelet[2549]: I0213 19:40:56.926458 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:56.926540 kubelet[2549]: I0213 19:40:56.926486 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:56.926540 kubelet[2549]: I0213 19:40:56.926502 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:56.926861 kubelet[2549]: I0213 19:40:56.926567 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7480f51b877e6f50acc9357c742c38c1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7480f51b877e6f50acc9357c742c38c1\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:40:56.926861 kubelet[2549]: I0213 19:40:56.926583 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:40:56.926861 kubelet[2549]: I0213 19:40:56.926597 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:56.926861 kubelet[2549]: I0213 19:40:56.926612 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:56.926861 kubelet[2549]: I0213 19:40:56.926629 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:40:57.150683 kubelet[2549]: E0213 19:40:57.149879 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.150683 kubelet[2549]: E0213 19:40:57.150151 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.150683 kubelet[2549]: E0213 19:40:57.150555 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.715719 kubelet[2549]: I0213 19:40:57.715676 2549 apiserver.go:52] "Watching apiserver" Feb 13 19:40:57.724876 kubelet[2549]: I0213 19:40:57.724824 2549 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:40:57.752226 kubelet[2549]: I0213 19:40:57.751898 2549 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:40:57.752226 kubelet[2549]: E0213 19:40:57.752049 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.753194 kubelet[2549]: E0213 19:40:57.752357 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.756259 kubelet[2549]: E0213 19:40:57.756222 2549 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:40:57.756799 kubelet[2549]: E0213 19:40:57.756718 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:57.778690 kubelet[2549]: I0213 19:40:57.778536 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.778518561 podStartE2EDuration="1.778518561s" podCreationTimestamp="2025-02-13 19:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 
19:40:57.77141625 +0000 UTC m=+1.110071037" watchObservedRunningTime="2025-02-13 19:40:57.778518561 +0000 UTC m=+1.117173338" Feb 13 19:40:57.819625 kubelet[2549]: I0213 19:40:57.819570 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8195471140000001 podStartE2EDuration="1.819547114s" podCreationTimestamp="2025-02-13 19:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:40:57.778500888 +0000 UTC m=+1.117155665" watchObservedRunningTime="2025-02-13 19:40:57.819547114 +0000 UTC m=+1.158201891" Feb 13 19:40:57.819819 kubelet[2549]: I0213 19:40:57.819731 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.819724988 podStartE2EDuration="1.819724988s" podCreationTimestamp="2025-02-13 19:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:40:57.819700261 +0000 UTC m=+1.158355038" watchObservedRunningTime="2025-02-13 19:40:57.819724988 +0000 UTC m=+1.158379765" Feb 13 19:40:58.008264 sudo[1635]: pam_unix(sudo:session): session closed for user root Feb 13 19:40:58.009749 sshd[1634]: Connection closed by 10.0.0.1 port 43708 Feb 13 19:40:58.010094 sshd-session[1632]: pam_unix(sshd:session): session closed for user core Feb 13 19:40:58.013919 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:43708.service: Deactivated successfully. Feb 13 19:40:58.016808 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:40:58.017024 systemd[1]: session-5.scope: Consumed 3.527s CPU time, 151.2M memory peak, 0B memory swap peak. Feb 13 19:40:58.017493 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:40:58.018594 systemd-logind[1476]: Removed session 5. Feb 13 19:40:58.753499 kubelet[2549]: E0213 19:40:58.753453 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:40:58.753978 kubelet[2549]: E0213 19:40:58.753629 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:01.697053 kubelet[2549]: I0213 19:41:01.696878 2549 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:41:01.697526 containerd[1486]: time="2025-02-13T19:41:01.697448213Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:41:01.697794 kubelet[2549]: I0213 19:41:01.697667 2549 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:41:02.477617 systemd[1]: Created slice kubepods-besteffort-pod646c7e62_cad0_430e_b2fe_7e6bbadff8d7.slice - libcontainer container kubepods-besteffort-pod646c7e62_cad0_430e_b2fe_7e6bbadff8d7.slice. Feb 13 19:41:02.493893 systemd[1]: Created slice kubepods-burstable-podb1e02c9c_3ba3_4644_befc_745751d04e87.slice - libcontainer container kubepods-burstable-podb1e02c9c_3ba3_4644_befc_745751d04e87.slice. 
Feb 13 19:41:02.560952 kubelet[2549]: I0213 19:41:02.560903 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/646c7e62-cad0-430e-b2fe-7e6bbadff8d7-xtables-lock\") pod \"kube-proxy-pjlq9\" (UID: \"646c7e62-cad0-430e-b2fe-7e6bbadff8d7\") " pod="kube-system/kube-proxy-pjlq9" Feb 13 19:41:02.560952 kubelet[2549]: I0213 19:41:02.560951 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r56t7\" (UniqueName: \"kubernetes.io/projected/646c7e62-cad0-430e-b2fe-7e6bbadff8d7-kube-api-access-r56t7\") pod \"kube-proxy-pjlq9\" (UID: \"646c7e62-cad0-430e-b2fe-7e6bbadff8d7\") " pod="kube-system/kube-proxy-pjlq9" Feb 13 19:41:02.561131 kubelet[2549]: I0213 19:41:02.560980 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/b1e02c9c-3ba3-4644-befc-745751d04e87-cni-plugin\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.561131 kubelet[2549]: I0213 19:41:02.560999 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/b1e02c9c-3ba3-4644-befc-745751d04e87-cni\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.561131 kubelet[2549]: I0213 19:41:02.561020 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/646c7e62-cad0-430e-b2fe-7e6bbadff8d7-kube-proxy\") pod \"kube-proxy-pjlq9\" (UID: \"646c7e62-cad0-430e-b2fe-7e6bbadff8d7\") " pod="kube-system/kube-proxy-pjlq9" Feb 13 19:41:02.561131 kubelet[2549]: I0213 19:41:02.561036 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/646c7e62-cad0-430e-b2fe-7e6bbadff8d7-lib-modules\") pod \"kube-proxy-pjlq9\" (UID: \"646c7e62-cad0-430e-b2fe-7e6bbadff8d7\") " pod="kube-system/kube-proxy-pjlq9" Feb 13 19:41:02.561131 kubelet[2549]: I0213 19:41:02.561055 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1e02c9c-3ba3-4644-befc-745751d04e87-xtables-lock\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.561268 kubelet[2549]: I0213 19:41:02.561072 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx6ss\" (UniqueName: \"kubernetes.io/projected/b1e02c9c-3ba3-4644-befc-745751d04e87-kube-api-access-rx6ss\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.561268 kubelet[2549]: I0213 19:41:02.561090 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b1e02c9c-3ba3-4644-befc-745751d04e87-run\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.561268 kubelet[2549]: I0213 19:41:02.561177 2549 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/b1e02c9c-3ba3-4644-befc-745751d04e87-flannel-cfg\") pod \"kube-flannel-ds-s5k2c\" (UID: \"b1e02c9c-3ba3-4644-befc-745751d04e87\") " pod="kube-flannel/kube-flannel-ds-s5k2c" Feb 13 19:41:02.791565 kubelet[2549]: E0213 19:41:02.791449 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.792111 containerd[1486]: time="2025-02-13T19:41:02.791902726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjlq9,Uid:646c7e62-cad0-430e-b2fe-7e6bbadff8d7,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:02.796508 kubelet[2549]: E0213 19:41:02.796490 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.796790 containerd[1486]: time="2025-02-13T19:41:02.796769574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-s5k2c,Uid:b1e02c9c-3ba3-4644-befc-745751d04e87,Namespace:kube-flannel,Attempt:0,}" Feb 13 19:41:02.802359 kubelet[2549]: E0213 19:41:02.802336 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.838053 containerd[1486]: time="2025-02-13T19:41:02.836843925Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:02.838053 containerd[1486]: time="2025-02-13T19:41:02.837137145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:02.838053 containerd[1486]: time="2025-02-13T19:41:02.837301192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:02.838053 containerd[1486]: time="2025-02-13T19:41:02.837375291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:02.847435 containerd[1486]: time="2025-02-13T19:41:02.846814014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:02.847435 containerd[1486]: time="2025-02-13T19:41:02.846889105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:02.847435 containerd[1486]: time="2025-02-13T19:41:02.846904684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:02.847987 containerd[1486]: time="2025-02-13T19:41:02.847751422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:02.860319 systemd[1]: Started cri-containerd-0da945003b676bb8999dff8b186c1ca95b2021ec46ea055ef6bfea49a80493e0.scope - libcontainer container 0da945003b676bb8999dff8b186c1ca95b2021ec46ea055ef6bfea49a80493e0. 
Feb 13 19:41:02.864540 systemd[1]: Started cri-containerd-43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23.scope - libcontainer container 43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23. Feb 13 19:41:02.884637 containerd[1486]: time="2025-02-13T19:41:02.884585436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pjlq9,Uid:646c7e62-cad0-430e-b2fe-7e6bbadff8d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"0da945003b676bb8999dff8b186c1ca95b2021ec46ea055ef6bfea49a80493e0\"" Feb 13 19:41:02.885761 kubelet[2549]: E0213 19:41:02.885727 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.893954 containerd[1486]: time="2025-02-13T19:41:02.893855311Z" level=info msg="CreateContainer within sandbox \"0da945003b676bb8999dff8b186c1ca95b2021ec46ea055ef6bfea49a80493e0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:41:02.907935 containerd[1486]: time="2025-02-13T19:41:02.907880877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-s5k2c,Uid:b1e02c9c-3ba3-4644-befc-745751d04e87,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\"" Feb 13 19:41:02.908625 kubelet[2549]: E0213 19:41:02.908593 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:02.909726 containerd[1486]: time="2025-02-13T19:41:02.909701482Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 13 19:41:02.918917 containerd[1486]: time="2025-02-13T19:41:02.918869006Z" level=info msg="CreateContainer within sandbox \"0da945003b676bb8999dff8b186c1ca95b2021ec46ea055ef6bfea49a80493e0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae6c0add56378793c5cd3f695d1730a670001abaee498537a21271f5cbcd9afb\"" Feb 13 19:41:02.919519 containerd[1486]: time="2025-02-13T19:41:02.919438744Z" level=info msg="StartContainer for \"ae6c0add56378793c5cd3f695d1730a670001abaee498537a21271f5cbcd9afb\"" Feb 13 19:41:02.949309 systemd[1]: Started cri-containerd-ae6c0add56378793c5cd3f695d1730a670001abaee498537a21271f5cbcd9afb.scope - libcontainer container ae6c0add56378793c5cd3f695d1730a670001abaee498537a21271f5cbcd9afb. 
Feb 13 19:41:02.983806 containerd[1486]: time="2025-02-13T19:41:02.983746856Z" level=info msg="StartContainer for \"ae6c0add56378793c5cd3f695d1730a670001abaee498537a21271f5cbcd9afb\" returns successfully" Feb 13 19:41:03.762025 kubelet[2549]: E0213 19:41:03.761968 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:03.762264 kubelet[2549]: E0213 19:41:03.762077 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:03.770275 kubelet[2549]: I0213 19:41:03.770215 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pjlq9" podStartSLOduration=1.7701976670000001 podStartE2EDuration="1.770197667s" podCreationTimestamp="2025-02-13 19:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:41:03.769865744 +0000 UTC m=+7.108520571" watchObservedRunningTime="2025-02-13 19:41:03.770197667 +0000 UTC m=+7.108852444" Feb 13 19:41:04.690427 kubelet[2549]: E0213 19:41:04.690325 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.764585 kubelet[2549]: E0213 19:41:04.764117 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.764752 kubelet[2549]: E0213 19:41:04.764731 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.835920 kubelet[2549]: E0213 19:41:04.835868 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:04.980995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924897410.mount: Deactivated successfully. 
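The PullImage request issued above for flannel-cni-plugin:v1.1.2 is still in flight at this point; an equivalent manual pull against the same containerd instance (using the standard crictl CLI, not something run here) would be:

    crictl pull docker.io/flannel/flannel-cni-plugin:v1.1.2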
Feb 13 19:41:05.015363 containerd[1486]: time="2025-02-13T19:41:05.015322752Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:05.016169 containerd[1486]: time="2025-02-13T19:41:05.016120298Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Feb 13 19:41:05.017152 containerd[1486]: time="2025-02-13T19:41:05.017128830Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:05.019359 containerd[1486]: time="2025-02-13T19:41:05.019333996Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:05.020014 containerd[1486]: time="2025-02-13T19:41:05.019983484Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.110253769s" Feb 13 19:41:05.020040 containerd[1486]: time="2025-02-13T19:41:05.020016396Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 13 19:41:05.021961 containerd[1486]: time="2025-02-13T19:41:05.021937709Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 13 19:41:05.032940 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2197097419.mount: Deactivated successfully. Feb 13 19:41:05.033763 containerd[1486]: time="2025-02-13T19:41:05.033734454Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7\"" Feb 13 19:41:05.034278 containerd[1486]: time="2025-02-13T19:41:05.034110309Z" level=info msg="StartContainer for \"27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7\"" Feb 13 19:41:05.068285 systemd[1]: Started cri-containerd-27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7.scope - libcontainer container 27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7. Feb 13 19:41:05.093168 systemd[1]: cri-containerd-27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7.scope: Deactivated successfully. 
Feb 13 19:41:05.094573 containerd[1486]: time="2025-02-13T19:41:05.094532491Z" level=info msg="StartContainer for \"27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7\" returns successfully" Feb 13 19:41:05.148357 containerd[1486]: time="2025-02-13T19:41:05.148294272Z" level=info msg="shim disconnected" id=27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7 namespace=k8s.io Feb 13 19:41:05.148357 containerd[1486]: time="2025-02-13T19:41:05.148350478Z" level=warning msg="cleaning up after shim disconnected" id=27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7 namespace=k8s.io Feb 13 19:41:05.148357 containerd[1486]: time="2025-02-13T19:41:05.148360276Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:05.766572 kubelet[2549]: E0213 19:41:05.766536 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:05.767016 kubelet[2549]: E0213 19:41:05.766581 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:05.767016 kubelet[2549]: E0213 19:41:05.766709 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:05.767806 containerd[1486]: time="2025-02-13T19:41:05.767776698Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 13 19:41:05.919217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27e168409d29010b20d4edc454bfb63c50798f11f2d84b254aebc0275ab7d1a7-rootfs.mount: Deactivated successfully. Feb 13 19:41:08.152053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501413402.mount: Deactivated successfully. Feb 13 19:41:08.910601 update_engine[1478]: I20250213 19:41:08.910535 1478 update_attempter.cc:509] Updating boot flags... 
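The scope deactivation and "shim disconnected" cleanup above are expected: install-cni-plugin is an init container that runs once and exits. In the stock kube-flannel manifest its entire job is a single copy, roughly

    cp -f /flannel /opt/cni/bin/flannel

which stages the flannel CNI binary onto the host path mounted at /opt/cni/bin (an assumption from the upstream manifest; the command itself is not logged here).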
Feb 13 19:41:08.938394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2953) Feb 13 19:41:08.973832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2954) Feb 13 19:41:09.007231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2954) Feb 13 19:41:09.837384 containerd[1486]: time="2025-02-13T19:41:09.837323083Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:09.838112 containerd[1486]: time="2025-02-13T19:41:09.838050557Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Feb 13 19:41:09.839289 containerd[1486]: time="2025-02-13T19:41:09.839247612Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:09.842121 containerd[1486]: time="2025-02-13T19:41:09.842091175Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:09.843302 containerd[1486]: time="2025-02-13T19:41:09.843272161Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 4.075455288s" Feb 13 19:41:09.843340 containerd[1486]: time="2025-02-13T19:41:09.843304892Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 13 19:41:09.845278 containerd[1486]: time="2025-02-13T19:41:09.845238628Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:41:09.858282 containerd[1486]: time="2025-02-13T19:41:09.858239731Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1\"" Feb 13 19:41:09.858728 containerd[1486]: time="2025-02-13T19:41:09.858688483Z" level=info msg="StartContainer for \"df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1\"" Feb 13 19:41:09.897354 systemd[1]: Started cri-containerd-df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1.scope - libcontainer container df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1. Feb 13 19:41:09.921996 systemd[1]: cri-containerd-df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1.scope: Deactivated successfully. 
Feb 13 19:41:09.986782 kubelet[2549]: I0213 19:41:09.986749 2549 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:41:10.110730 containerd[1486]: time="2025-02-13T19:41:10.110106021Z" level=info msg="StartContainer for \"df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1\" returns successfully" Feb 13 19:41:10.114392 kubelet[2549]: I0213 19:41:10.113772 2549 status_manager.go:890] "Failed to get status for pod" podUID="3788b637-b23e-4863-99bf-2437633f3c89" pod="kube-system/coredns-668d6bf9bc-4dq56" err="pods \"coredns-668d6bf9bc-4dq56\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Feb 13 19:41:10.115068 kubelet[2549]: W0213 19:41:10.114927 2549 reflector.go:569] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Feb 13 19:41:10.115068 kubelet[2549]: E0213 19:41:10.114964 2549 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Feb 13 19:41:10.121229 systemd[1]: Created slice kubepods-burstable-pod3788b637_b23e_4863_99bf_2437633f3c89.slice - libcontainer container kubepods-burstable-pod3788b637_b23e_4863_99bf_2437633f3c89.slice. Feb 13 19:41:10.129820 systemd[1]: Created slice kubepods-burstable-podacfc6747_5e69_4e48_b96e_d309412ca49e.slice - libcontainer container kubepods-burstable-podacfc6747_5e69_4e48_b96e_d309412ca49e.slice. Feb 13 19:41:10.137441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1-rootfs.mount: Deactivated successfully. 
Feb 13 19:41:10.143953 containerd[1486]: time="2025-02-13T19:41:10.143888650Z" level=info msg="shim disconnected" id=df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1 namespace=k8s.io Feb 13 19:41:10.143953 containerd[1486]: time="2025-02-13T19:41:10.143948312Z" level=warning msg="cleaning up after shim disconnected" id=df830d978148f1f30747ae278b2bbf0b8063bfa9e1d102a5a93c981b87eab0d1 namespace=k8s.io Feb 13 19:41:10.143953 containerd[1486]: time="2025-02-13T19:41:10.143956377Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:41:10.209083 kubelet[2549]: I0213 19:41:10.209024 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq2w8\" (UniqueName: \"kubernetes.io/projected/3788b637-b23e-4863-99bf-2437633f3c89-kube-api-access-pq2w8\") pod \"coredns-668d6bf9bc-4dq56\" (UID: \"3788b637-b23e-4863-99bf-2437633f3c89\") " pod="kube-system/coredns-668d6bf9bc-4dq56" Feb 13 19:41:10.209083 kubelet[2549]: I0213 19:41:10.209069 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58c7n\" (UniqueName: \"kubernetes.io/projected/acfc6747-5e69-4e48-b96e-d309412ca49e-kube-api-access-58c7n\") pod \"coredns-668d6bf9bc-vt2p2\" (UID: \"acfc6747-5e69-4e48-b96e-d309412ca49e\") " pod="kube-system/coredns-668d6bf9bc-vt2p2" Feb 13 19:41:10.209083 kubelet[2549]: I0213 19:41:10.209087 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3788b637-b23e-4863-99bf-2437633f3c89-config-volume\") pod \"coredns-668d6bf9bc-4dq56\" (UID: \"3788b637-b23e-4863-99bf-2437633f3c89\") " pod="kube-system/coredns-668d6bf9bc-4dq56" Feb 13 19:41:10.209083 kubelet[2549]: I0213 19:41:10.209104 2549 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acfc6747-5e69-4e48-b96e-d309412ca49e-config-volume\") pod \"coredns-668d6bf9bc-vt2p2\" (UID: \"acfc6747-5e69-4e48-b96e-d309412ca49e\") " pod="kube-system/coredns-668d6bf9bc-vt2p2" Feb 13 19:41:10.778444 kubelet[2549]: E0213 19:41:10.778399 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:10.780574 containerd[1486]: time="2025-02-13T19:41:10.780403788Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 13 19:41:10.795187 containerd[1486]: time="2025-02-13T19:41:10.795107484Z" level=info msg="CreateContainer within sandbox \"43f67968ae78655fd954471d65e71a387d1281e96cd982ec125611719fe18e23\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"4028b3d0fad47450ae7c333f1a2d9dba882c43b50158e7f20717ddc550449390\"" Feb 13 19:41:10.795661 containerd[1486]: time="2025-02-13T19:41:10.795598415Z" level=info msg="StartContainer for \"4028b3d0fad47450ae7c333f1a2d9dba882c43b50158e7f20717ddc550449390\"" Feb 13 19:41:10.822402 systemd[1]: Started cri-containerd-4028b3d0fad47450ae7c333f1a2d9dba882c43b50158e7f20717ddc550449390.scope - libcontainer container 4028b3d0fad47450ae7c333f1a2d9dba882c43b50158e7f20717ddc550449390. 
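The second init step (install-cni, container df830d...) that just exited conventionally drops the CNI network list onto the host at /etc/cni/net.d/10-flannel.conflist. A sketch matching the stock kube-flannel ConfigMap, consistent with the cbr0/0.3.1 delegate netconf logged further down:

    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {"hairpinMode": true, "isDefaultGateway": true}
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }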
Feb 13 19:41:10.861223 containerd[1486]: time="2025-02-13T19:41:10.861152380Z" level=info msg="StartContainer for \"4028b3d0fad47450ae7c333f1a2d9dba882c43b50158e7f20717ddc550449390\" returns successfully" Feb 13 19:41:11.325184 kubelet[2549]: E0213 19:41:11.325135 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:11.325816 containerd[1486]: time="2025-02-13T19:41:11.325765418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dq56,Uid:3788b637-b23e-4863-99bf-2437633f3c89,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:11.333989 kubelet[2549]: E0213 19:41:11.333950 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:11.334520 containerd[1486]: time="2025-02-13T19:41:11.334476666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vt2p2,Uid:acfc6747-5e69-4e48-b96e-d309412ca49e,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:11.374718 containerd[1486]: time="2025-02-13T19:41:11.374526378Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dq56,Uid:3788b637-b23e-4863-99bf-2437633f3c89,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:41:11.374993 kubelet[2549]: E0213 19:41:11.374864 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:41:11.374993 kubelet[2549]: E0213 19:41:11.374959 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4dq56" Feb 13 19:41:11.374993 kubelet[2549]: E0213 19:41:11.374985 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-4dq56" Feb 13 19:41:11.375284 kubelet[2549]: E0213 19:41:11.375038 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4dq56_kube-system(3788b637-b23e-4863-99bf-2437633f3c89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4dq56_kube-system(3788b637-b23e-4863-99bf-2437633f3c89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory\"" pod="kube-system/coredns-668d6bf9bc-4dq56" podUID="3788b637-b23e-4863-99bf-2437633f3c89" Feb 13 19:41:11.380321 containerd[1486]: time="2025-02-13T19:41:11.380269719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vt2p2,Uid:acfc6747-5e69-4e48-b96e-d309412ca49e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:41:11.380509 kubelet[2549]: E0213 19:41:11.380473 2549 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 13 19:41:11.380552 kubelet[2549]: E0213 19:41:11.380536 2549 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vt2p2" Feb 13 19:41:11.380552 kubelet[2549]: E0213 19:41:11.380560 2549 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-668d6bf9bc-vt2p2" Feb 13 19:41:11.380635 kubelet[2549]: E0213 19:41:11.380609 2549 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-vt2p2_kube-system(acfc6747-5e69-4e48-b96e-d309412ca49e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-vt2p2_kube-system(acfc6747-5e69-4e48-b96e-d309412ca49e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-668d6bf9bc-vt2p2" podUID="acfc6747-5e69-4e48-b96e-d309412ca49e" Feb 13 19:41:11.782081 kubelet[2549]: E0213 19:41:11.782044 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:11.792456 kubelet[2549]: I0213 19:41:11.792385 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-s5k2c" podStartSLOduration=2.8573801420000002 podStartE2EDuration="9.792366375s" podCreationTimestamp="2025-02-13 19:41:02 +0000 UTC" firstStartedPulling="2025-02-13 19:41:02.909134148 +0000 UTC m=+6.247788925" lastFinishedPulling="2025-02-13 19:41:09.844120381 +0000 UTC m=+13.182775158" observedRunningTime="2025-02-13 19:41:11.791981002 +0000 UTC m=+15.130635779" watchObservedRunningTime="2025-02-13 19:41:11.792366375 +0000 UTC m=+15.131021172" Feb 13 19:41:11.855537 systemd[1]: 
run-netns-cni\x2da876830b\x2d6caf\x2d4315\x2d4eac\x2d6f4155ec41bd.mount: Deactivated successfully. Feb 13 19:41:11.855692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a51e99ad9aa46890508dc5742258bad2e9c3b2c0a3e564c9bc897afdd60e5333-shm.mount: Deactivated successfully. Feb 13 19:41:11.855802 systemd[1]: run-netns-cni\x2da49ccdc5\x2de1f2\x2d1a83\x2d03e4\x2df8f478fcdf54.mount: Deactivated successfully. Feb 13 19:41:11.855897 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c80c13be509562c1da32ed5552f40e63e23c2b3d035fbfc5c425f3dd14de6775-shm.mount: Deactivated successfully. Feb 13 19:41:11.897470 systemd-networkd[1436]: flannel.1: Link UP Feb 13 19:41:11.897480 systemd-networkd[1436]: flannel.1: Gained carrier Feb 13 19:41:12.783924 kubelet[2549]: E0213 19:41:12.783881 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:12.973339 systemd-networkd[1436]: flannel.1: Gained IPv6LL Feb 13 19:41:22.894600 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:50216.service - OpenSSH per-connection server daemon (10.0.0.1:50216). Feb 13 19:41:22.942271 sshd[3254]: Accepted publickey for core from 10.0.0.1 port 50216 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:22.944035 sshd-session[3254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:22.948296 systemd-logind[1476]: New session 6 of user core. Feb 13 19:41:22.960300 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:41:23.070478 sshd[3256]: Connection closed by 10.0.0.1 port 50216 Feb 13 19:41:23.070854 sshd-session[3254]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:23.074934 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:50216.service: Deactivated successfully. Feb 13 19:41:23.077592 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:41:23.078284 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:41:23.079229 systemd-logind[1476]: Removed session 6. 
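The two RunPodSandbox failures above trace to the missing /run/flannel/subnet.env: the flannel CNI plugin reads that file, and flanneld only writes it after it has leased a subnet. Once flannel.1 is up, a file of this shape exists and the coredns sandboxes succeed on retry at 19:41:24. The key names are fixed by flannel; the values here are inferred from the delegate netconf logged below and should be treated as illustrative:

    FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true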
Feb 13 19:41:24.742995 kubelet[2549]: E0213 19:41:24.742957 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:24.744237 kubelet[2549]: E0213 19:41:24.743046 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:24.744291 containerd[1486]: time="2025-02-13T19:41:24.743510975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dq56,Uid:3788b637-b23e-4863-99bf-2437633f3c89,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:24.744291 containerd[1486]: time="2025-02-13T19:41:24.743535200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vt2p2,Uid:acfc6747-5e69-4e48-b96e-d309412ca49e,Namespace:kube-system,Attempt:0,}" Feb 13 19:41:24.778495 systemd-networkd[1436]: cni0: Link UP Feb 13 19:41:24.778503 systemd-networkd[1436]: cni0: Gained carrier Feb 13 19:41:24.783505 systemd-networkd[1436]: cni0: Lost carrier Feb 13 19:41:24.786700 systemd-networkd[1436]: veth0b2ce375: Link UP Feb 13 19:41:24.788231 kernel: cni0: port 1(veth0b2ce375) entered blocking state Feb 13 19:41:24.788295 kernel: cni0: port 1(veth0b2ce375) entered disabled state Feb 13 19:41:24.789642 kernel: veth0b2ce375: entered allmulticast mode Feb 13 19:41:24.789717 kernel: veth0b2ce375: entered promiscuous mode Feb 13 19:41:24.790598 kernel: cni0: port 1(veth0b2ce375) entered blocking state Feb 13 19:41:24.790625 kernel: cni0: port 1(veth0b2ce375) entered forwarding state Feb 13 19:41:24.792203 kernel: cni0: port 1(veth0b2ce375) entered disabled state Feb 13 19:41:24.798183 kernel: cni0: port 2(veth44ebb37a) entered blocking state Feb 13 19:41:24.798271 kernel: cni0: port 2(veth44ebb37a) entered disabled state Feb 13 19:41:24.798305 kernel: veth44ebb37a: entered allmulticast mode Feb 13 19:41:24.798335 kernel: veth44ebb37a: entered promiscuous mode Feb 13 19:41:24.796846 systemd-networkd[1436]: veth44ebb37a: Link UP Feb 13 19:41:24.801053 kernel: cni0: port 2(veth44ebb37a) entered blocking state Feb 13 19:41:24.801098 kernel: cni0: port 2(veth44ebb37a) entered forwarding state Feb 13 19:41:24.801116 kernel: cni0: port 2(veth44ebb37a) entered disabled state Feb 13 19:41:24.806770 kernel: cni0: port 1(veth0b2ce375) entered blocking state Feb 13 19:41:24.806853 kernel: cni0: port 1(veth0b2ce375) entered forwarding state Feb 13 19:41:24.804072 systemd-networkd[1436]: veth0b2ce375: Gained carrier Feb 13 19:41:24.804364 systemd-networkd[1436]: cni0: Gained carrier Feb 13 19:41:24.811521 containerd[1486]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00011c8e8), "name":"cbr0", "type":"bridge"} Feb 13 19:41:24.811521 containerd[1486]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:41:24.815541 kernel: cni0: port 2(veth44ebb37a) entered blocking state Feb 13 19:41:24.815611 kernel: cni0: port 2(veth44ebb37a) entered forwarding state Feb 13 19:41:24.815767 systemd-networkd[1436]: veth44ebb37a: Gained carrier 
Feb 13 19:41:24.817965 containerd[1486]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 13 19:41:24.817965 containerd[1486]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000ae8e8), "name":"cbr0", "type":"bridge"} Feb 13 19:41:24.817965 containerd[1486]: delegateAdd: netconf sent to delegate plugin: Feb 13 19:41:24.835652 containerd[1486]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-02-13T19:41:24.835561638Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:24.835836 containerd[1486]: time="2025-02-13T19:41:24.835640647Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:24.835836 containerd[1486]: time="2025-02-13T19:41:24.835658400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:24.835836 containerd[1486]: time="2025-02-13T19:41:24.835762756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:24.843025 containerd[1486]: time="2025-02-13T19:41:24.842886636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:41:24.843278 containerd[1486]: time="2025-02-13T19:41:24.843006331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:41:24.843278 containerd[1486]: time="2025-02-13T19:41:24.843017702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:24.843278 containerd[1486]: time="2025-02-13T19:41:24.843135673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:41:24.855307 systemd[1]: Started cri-containerd-2bbead8fac500b8e47b53c8d0cd2a2953f221debe0e04a9578224d285ade093b.scope - libcontainer container 2bbead8fac500b8e47b53c8d0cd2a2953f221debe0e04a9578224d285ade093b. Feb 13 19:41:24.859734 systemd[1]: Started cri-containerd-eb2b7980a1b528bf041b59ccb6a5f27511983efbb33a0149a15a701cc9f9ee7b.scope - libcontainer container eb2b7980a1b528bf041b59ccb6a5f27511983efbb33a0149a15a701cc9f9ee7b. 
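For readability, the delegate netconf that flannel handed to the bridge plugin for each sandbox (the same content as the single-line JSON above, reformatted only):

    {
      "cniVersion": "0.3.1",
      "name": "cbr0",
      "type": "bridge",
      "isGateway": true,
      "isDefaultGateway": true,
      "hairpinMode": true,
      "ipMasq": false,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "ranges": [[{"subnet": "192.168.0.0/24"}]],
        "routes": [{"dst": "192.168.0.0/17"}]
      }
    }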
Feb 13 19:41:24.869773 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:41:24.872749 systemd-resolved[1331]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:41:24.897430 containerd[1486]: time="2025-02-13T19:41:24.897387147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vt2p2,Uid:acfc6747-5e69-4e48-b96e-d309412ca49e,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb2b7980a1b528bf041b59ccb6a5f27511983efbb33a0149a15a701cc9f9ee7b\"" Feb 13 19:41:24.897839 containerd[1486]: time="2025-02-13T19:41:24.897821662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4dq56,Uid:3788b637-b23e-4863-99bf-2437633f3c89,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bbead8fac500b8e47b53c8d0cd2a2953f221debe0e04a9578224d285ade093b\"" Feb 13 19:41:24.898550 kubelet[2549]: E0213 19:41:24.898525 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:24.898636 kubelet[2549]: E0213 19:41:24.898526 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:24.900896 containerd[1486]: time="2025-02-13T19:41:24.900866663Z" level=info msg="CreateContainer within sandbox \"eb2b7980a1b528bf041b59ccb6a5f27511983efbb33a0149a15a701cc9f9ee7b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:41:24.901117 containerd[1486]: time="2025-02-13T19:41:24.900876461Z" level=info msg="CreateContainer within sandbox \"2bbead8fac500b8e47b53c8d0cd2a2953f221debe0e04a9578224d285ade093b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:41:24.918528 containerd[1486]: time="2025-02-13T19:41:24.918487430Z" level=info msg="CreateContainer within sandbox \"eb2b7980a1b528bf041b59ccb6a5f27511983efbb33a0149a15a701cc9f9ee7b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c958f3cae80a68fbf716418d2705753c836fa22a2c35079628b5076e6292957\"" Feb 13 19:41:24.918956 containerd[1486]: time="2025-02-13T19:41:24.918920943Z" level=info msg="StartContainer for \"2c958f3cae80a68fbf716418d2705753c836fa22a2c35079628b5076e6292957\"" Feb 13 19:41:24.922839 containerd[1486]: time="2025-02-13T19:41:24.922797924Z" level=info msg="CreateContainer within sandbox \"2bbead8fac500b8e47b53c8d0cd2a2953f221debe0e04a9578224d285ade093b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3c857de38f28b8f9e56f83d4c8a43d2aaf6104bde53d3e3ce1e4a1c8f4a11c94\"" Feb 13 19:41:24.923504 containerd[1486]: time="2025-02-13T19:41:24.923320253Z" level=info msg="StartContainer for \"3c857de38f28b8f9e56f83d4c8a43d2aaf6104bde53d3e3ce1e4a1c8f4a11c94\"" Feb 13 19:41:24.947308 systemd[1]: Started cri-containerd-2c958f3cae80a68fbf716418d2705753c836fa22a2c35079628b5076e6292957.scope - libcontainer container 2c958f3cae80a68fbf716418d2705753c836fa22a2c35079628b5076e6292957. Feb 13 19:41:24.950314 systemd[1]: Started cri-containerd-3c857de38f28b8f9e56f83d4c8a43d2aaf6104bde53d3e3ce1e4a1c8f4a11c94.scope - libcontainer container 3c857de38f28b8f9e56f83d4c8a43d2aaf6104bde53d3e3ce1e4a1c8f4a11c94. 
Feb 13 19:41:24.979656 containerd[1486]: time="2025-02-13T19:41:24.979615470Z" level=info msg="StartContainer for \"2c958f3cae80a68fbf716418d2705753c836fa22a2c35079628b5076e6292957\" returns successfully" Feb 13 19:41:24.983842 containerd[1486]: time="2025-02-13T19:41:24.983791963Z" level=info msg="StartContainer for \"3c857de38f28b8f9e56f83d4c8a43d2aaf6104bde53d3e3ce1e4a1c8f4a11c94\" returns successfully" Feb 13 19:41:25.807579 kubelet[2549]: E0213 19:41:25.807090 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:25.810114 kubelet[2549]: E0213 19:41:25.810090 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:25.821626 kubelet[2549]: I0213 19:41:25.821399 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vt2p2" podStartSLOduration=23.821380903 podStartE2EDuration="23.821380903s" podCreationTimestamp="2025-02-13 19:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:41:25.820625978 +0000 UTC m=+29.159280755" watchObservedRunningTime="2025-02-13 19:41:25.821380903 +0000 UTC m=+29.160035690" Feb 13 19:41:25.845292 kubelet[2549]: I0213 19:41:25.845229 2549 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4dq56" podStartSLOduration=23.845208439 podStartE2EDuration="23.845208439s" podCreationTimestamp="2025-02-13 19:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:41:25.832351567 +0000 UTC m=+29.171006344" watchObservedRunningTime="2025-02-13 19:41:25.845208439 +0000 UTC m=+29.183863226" Feb 13 19:41:26.093421 systemd-networkd[1436]: veth44ebb37a: Gained IPv6LL Feb 13 19:41:26.221362 systemd-networkd[1436]: veth0b2ce375: Gained IPv6LL Feb 13 19:41:26.541394 systemd-networkd[1436]: cni0: Gained IPv6LL Feb 13 19:41:26.812347 kubelet[2549]: E0213 19:41:26.812174 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:26.812731 kubelet[2549]: E0213 19:41:26.812362 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:27.813261 kubelet[2549]: E0213 19:41:27.813230 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:27.813709 kubelet[2549]: E0213 19:41:27.813524 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:41:28.082403 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:39534.service - OpenSSH per-connection server daemon (10.0.0.1:39534). 
Feb 13 19:41:28.122141 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 39534 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:28.124081 sshd-session[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:28.128372 systemd-logind[1476]: New session 7 of user core. Feb 13 19:41:28.138339 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:41:28.253708 sshd[3543]: Connection closed by 10.0.0.1 port 39534 Feb 13 19:41:28.254352 sshd-session[3541]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:28.259755 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:39534.service: Deactivated successfully. Feb 13 19:41:28.261857 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:41:28.262505 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:41:28.263313 systemd-logind[1476]: Removed session 7. Feb 13 19:41:33.269865 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:39540.service - OpenSSH per-connection server daemon (10.0.0.1:39540). Feb 13 19:41:33.337488 sshd[3580]: Accepted publickey for core from 10.0.0.1 port 39540 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:33.338863 sshd-session[3580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:33.350933 systemd-logind[1476]: New session 8 of user core. Feb 13 19:41:33.360306 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:41:33.476296 sshd[3582]: Connection closed by 10.0.0.1 port 39540 Feb 13 19:41:33.476699 sshd-session[3580]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:33.488143 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:39540.service: Deactivated successfully. Feb 13 19:41:33.490012 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:41:33.491668 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:41:33.493008 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:39544.service - OpenSSH per-connection server daemon (10.0.0.1:39544). Feb 13 19:41:33.493951 systemd-logind[1476]: Removed session 8. Feb 13 19:41:33.532516 sshd[3596]: Accepted publickey for core from 10.0.0.1 port 39544 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:33.534043 sshd-session[3596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:33.538272 systemd-logind[1476]: New session 9 of user core. Feb 13 19:41:33.549299 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:41:33.683609 sshd[3598]: Connection closed by 10.0.0.1 port 39544 Feb 13 19:41:33.684197 sshd-session[3596]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:33.694617 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:39544.service: Deactivated successfully. Feb 13 19:41:33.698697 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:41:33.700949 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:41:33.708414 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:39550.service - OpenSSH per-connection server daemon (10.0.0.1:39550). Feb 13 19:41:33.709348 systemd-logind[1476]: Removed session 9. 
Feb 13 19:41:33.740405 sshd[3608]: Accepted publickey for core from 10.0.0.1 port 39550 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:33.741841 sshd-session[3608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:33.745854 systemd-logind[1476]: New session 10 of user core. Feb 13 19:41:33.760307 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:41:33.867230 sshd[3610]: Connection closed by 10.0.0.1 port 39550 Feb 13 19:41:33.867518 sshd-session[3608]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:33.871857 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:39550.service: Deactivated successfully. Feb 13 19:41:33.873998 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:41:33.874636 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:41:33.875560 systemd-logind[1476]: Removed session 10. Feb 13 19:41:38.883238 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:57886.service - OpenSSH per-connection server daemon (10.0.0.1:57886). Feb 13 19:41:38.921569 sshd[3643]: Accepted publickey for core from 10.0.0.1 port 57886 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:38.922995 sshd-session[3643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:38.926869 systemd-logind[1476]: New session 11 of user core. Feb 13 19:41:38.940297 systemd[1]: Started session-11.scope - Session 11 of User core. Feb 13 19:41:39.062137 sshd[3645]: Connection closed by 10.0.0.1 port 57886 Feb 13 19:41:39.062534 sshd-session[3643]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:39.067103 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:57886.service: Deactivated successfully. Feb 13 19:41:39.069249 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:41:39.069947 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:41:39.070940 systemd-logind[1476]: Removed session 11. Feb 13 19:41:44.074405 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:57896.service - OpenSSH per-connection server daemon (10.0.0.1:57896). Feb 13 19:41:44.111365 sshd[3678]: Accepted publickey for core from 10.0.0.1 port 57896 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:44.112714 sshd-session[3678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:44.116758 systemd-logind[1476]: New session 12 of user core. Feb 13 19:41:44.123276 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:41:44.228780 sshd[3680]: Connection closed by 10.0.0.1 port 57896 Feb 13 19:41:44.229147 sshd-session[3678]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:44.233358 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:57896.service: Deactivated successfully. Feb 13 19:41:44.235669 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:41:44.236351 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:41:44.237302 systemd-logind[1476]: Removed session 12. Feb 13 19:41:49.242327 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:41892.service - OpenSSH per-connection server daemon (10.0.0.1:41892). 
Feb 13 19:41:49.284024 sshd[3714]: Accepted publickey for core from 10.0.0.1 port 41892 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:49.285660 sshd-session[3714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:49.289989 systemd-logind[1476]: New session 13 of user core. Feb 13 19:41:49.297474 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:41:49.406177 sshd[3716]: Connection closed by 10.0.0.1 port 41892 Feb 13 19:41:49.406561 sshd-session[3714]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:49.410763 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:41892.service: Deactivated successfully. Feb 13 19:41:49.412983 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:41:49.413915 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:41:49.414970 systemd-logind[1476]: Removed session 13. Feb 13 19:41:54.417416 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:41896.service - OpenSSH per-connection server daemon (10.0.0.1:41896). Feb 13 19:41:54.454827 sshd[3749]: Accepted publickey for core from 10.0.0.1 port 41896 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:54.456339 sshd-session[3749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:54.460228 systemd-logind[1476]: New session 14 of user core. Feb 13 19:41:54.479418 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:41:54.588146 sshd[3751]: Connection closed by 10.0.0.1 port 41896 Feb 13 19:41:54.588616 sshd-session[3749]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:54.595972 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:41896.service: Deactivated successfully. Feb 13 19:41:54.597642 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:41:54.599535 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:41:54.606401 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:57930.service - OpenSSH per-connection server daemon (10.0.0.1:57930). Feb 13 19:41:54.607339 systemd-logind[1476]: Removed session 14. Feb 13 19:41:54.643713 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 57930 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:54.645175 sshd-session[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:54.648944 systemd-logind[1476]: New session 15 of user core. Feb 13 19:41:54.653262 systemd[1]: Started session-15.scope - Session 15 of User core. Feb 13 19:41:54.917960 sshd[3765]: Connection closed by 10.0.0.1 port 57930 Feb 13 19:41:54.919332 sshd-session[3763]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:54.928361 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:57930.service: Deactivated successfully. Feb 13 19:41:54.930736 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:41:54.932728 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:41:54.952478 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:57940.service - OpenSSH per-connection server daemon (10.0.0.1:57940). Feb 13 19:41:54.953418 systemd-logind[1476]: Removed session 15. 
Feb 13 19:41:54.990889 sshd[3777]: Accepted publickey for core from 10.0.0.1 port 57940 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:41:54.992329 sshd-session[3777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:41:54.996240 systemd-logind[1476]: New session 16 of user core.
Feb 13 19:41:55.004281 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:41:55.869430 sshd[3779]: Connection closed by 10.0.0.1 port 57940
Feb 13 19:41:55.870409 sshd-session[3777]: pam_unix(sshd:session): session closed for user core
Feb 13 19:41:55.880446 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:57940.service: Deactivated successfully.
Feb 13 19:41:55.884283 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:41:55.886960 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:41:55.896535 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:57948.service - OpenSSH per-connection server daemon (10.0.0.1:57948).
Feb 13 19:41:55.897699 systemd-logind[1476]: Removed session 16.
Feb 13 19:41:55.928745 sshd[3799]: Accepted publickey for core from 10.0.0.1 port 57948 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:41:55.930366 sshd-session[3799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:41:55.934453 systemd-logind[1476]: New session 17 of user core.
Feb 13 19:41:55.943283 systemd[1]: Started session-17.scope - Session 17 of User core.
Feb 13 19:41:56.159611 sshd[3801]: Connection closed by 10.0.0.1 port 57948
Feb 13 19:41:56.160963 sshd-session[3799]: pam_unix(sshd:session): session closed for user core
Feb 13 19:41:56.169579 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:57948.service: Deactivated successfully.
Feb 13 19:41:56.171545 systemd[1]: session-17.scope: Deactivated successfully.
Feb 13 19:41:56.173233 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Feb 13 19:41:56.180410 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:57960.service - OpenSSH per-connection server daemon (10.0.0.1:57960).
Feb 13 19:41:56.181346 systemd-logind[1476]: Removed session 17.
Feb 13 19:41:56.214625 sshd[3811]: Accepted publickey for core from 10.0.0.1 port 57960 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:41:56.216039 sshd-session[3811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:41:56.220537 systemd-logind[1476]: New session 18 of user core.
Feb 13 19:41:56.237312 systemd[1]: Started session-18.scope - Session 18 of User core.
Feb 13 19:41:56.341942 sshd[3813]: Connection closed by 10.0.0.1 port 57960
Feb 13 19:41:56.342292 sshd-session[3811]: pam_unix(sshd:session): session closed for user core
Feb 13 19:41:56.346290 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:57960.service: Deactivated successfully.
Feb 13 19:41:56.348371 systemd[1]: session-18.scope: Deactivated successfully.
Feb 13 19:41:56.348980 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
Feb 13 19:41:56.349856 systemd-logind[1476]: Removed session 18.
Feb 13 19:42:01.357089 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:57976.service - OpenSSH per-connection server daemon (10.0.0.1:57976).
Feb 13 19:42:01.394715 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 57976 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:42:01.396008 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:01.399938 systemd-logind[1476]: New session 19 of user core.
Feb 13 19:42:01.408285 systemd[1]: Started session-19.scope - Session 19 of User core.
Feb 13 19:42:01.507979 sshd[3851]: Connection closed by 10.0.0.1 port 57976
Feb 13 19:42:01.508331 sshd-session[3849]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:01.512394 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:57976.service: Deactivated successfully.
Feb 13 19:42:01.514713 systemd[1]: session-19.scope: Deactivated successfully.
Feb 13 19:42:01.515424 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit.
Feb 13 19:42:01.516621 systemd-logind[1476]: Removed session 19.
Feb 13 19:42:06.520393 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:54524.service - OpenSSH per-connection server daemon (10.0.0.1:54524).
Feb 13 19:42:06.557402 sshd[3888]: Accepted publickey for core from 10.0.0.1 port 54524 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:42:06.558929 sshd-session[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:06.562450 systemd-logind[1476]: New session 20 of user core.
Feb 13 19:42:06.572276 systemd[1]: Started session-20.scope - Session 20 of User core.
Feb 13 19:42:06.674052 sshd[3890]: Connection closed by 10.0.0.1 port 54524
Feb 13 19:42:06.674425 sshd-session[3888]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:06.677992 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:54524.service: Deactivated successfully.
Feb 13 19:42:06.680066 systemd[1]: session-20.scope: Deactivated successfully.
Feb 13 19:42:06.680714 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit.
Feb 13 19:42:06.681632 systemd-logind[1476]: Removed session 20.
Feb 13 19:42:11.685077 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:54528.service - OpenSSH per-connection server daemon (10.0.0.1:54528).
Feb 13 19:42:11.721743 sshd[3924]: Accepted publickey for core from 10.0.0.1 port 54528 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:42:11.723274 sshd-session[3924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:11.727109 systemd-logind[1476]: New session 21 of user core.
Feb 13 19:42:11.734285 systemd[1]: Started session-21.scope - Session 21 of User core.
Feb 13 19:42:11.839958 sshd[3926]: Connection closed by 10.0.0.1 port 54528
Feb 13 19:42:11.840349 sshd-session[3924]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:11.844572 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:54528.service: Deactivated successfully.
Feb 13 19:42:11.846810 systemd[1]: session-21.scope: Deactivated successfully.
Feb 13 19:42:11.847449 systemd-logind[1476]: Session 21 logged out. Waiting for processes to exit.
Feb 13 19:42:11.848388 systemd-logind[1476]: Removed session 21.
Feb 13 19:42:16.742778 kubelet[2549]: E0213 19:42:16.742735 2549 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:42:16.856348 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:38914.service - OpenSSH per-connection server daemon (10.0.0.1:38914).
Feb 13 19:42:16.893037 sshd[3960]: Accepted publickey for core from 10.0.0.1 port 38914 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:42:16.894509 sshd-session[3960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:42:16.899542 systemd-logind[1476]: New session 22 of user core.
Feb 13 19:42:16.905351 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:42:17.006954 sshd[3962]: Connection closed by 10.0.0.1 port 38914
Feb 13 19:42:17.007227 sshd-session[3960]: pam_unix(sshd:session): session closed for user core
Feb 13 19:42:17.010756 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:38914.service: Deactivated successfully.
Feb 13 19:42:17.012587 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:42:17.013283 systemd-logind[1476]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:42:17.014263 systemd-logind[1476]: Removed session 22.