Jan 30 13:42:01.872117 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:42:01.872139 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:42:01.872150 kernel: BIOS-provided physical RAM map:
Jan 30 13:42:01.872157 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 30 13:42:01.872163 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 30 13:42:01.872170 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 30 13:42:01.872182 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jan 30 13:42:01.872194 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jan 30 13:42:01.872202 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jan 30 13:42:01.872214 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jan 30 13:42:01.872222 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 30 13:42:01.872230 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 30 13:42:01.872238 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 30 13:42:01.872245 kernel: NX (Execute Disable) protection: active
Jan 30 13:42:01.872255 kernel: APIC: Static calls initialized
Jan 30 13:42:01.872267 kernel: SMBIOS 2.8 present.
Jan 30 13:42:01.872276 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jan 30 13:42:01.872284 kernel: Hypervisor detected: KVM
Jan 30 13:42:01.872292 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 30 13:42:01.872301 kernel: kvm-clock: using sched offset of 2200453206 cycles
Jan 30 13:42:01.872310 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 30 13:42:01.872320 kernel: tsc: Detected 2794.748 MHz processor
Jan 30 13:42:01.872329 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:42:01.872339 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:42:01.872348 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jan 30 13:42:01.872361 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 30 13:42:01.872370 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:42:01.872379 kernel: Using GB pages for direct mapping
Jan 30 13:42:01.872388 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:42:01.872396 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jan 30 13:42:01.872406 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872415 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872424 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872437 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jan 30 13:42:01.872447 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872456 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872465 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872474 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 30 13:42:01.872483 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Jan 30 13:42:01.872492 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Jan 30 13:42:01.872507 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jan 30 13:42:01.872519 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Jan 30 13:42:01.872528 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Jan 30 13:42:01.872538 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Jan 30 13:42:01.872548 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Jan 30 13:42:01.872558 kernel: No NUMA configuration found
Jan 30 13:42:01.872567 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jan 30 13:42:01.872580 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jan 30 13:42:01.872589 kernel: Zone ranges:
Jan 30 13:42:01.872599 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:42:01.872608 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jan 30 13:42:01.872618 kernel: Normal empty
Jan 30 13:42:01.872628 kernel: Movable zone start for each node
Jan 30 13:42:01.872638 kernel: Early memory node ranges
Jan 30 13:42:01.872646 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jan 30 13:42:01.872653 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jan 30 13:42:01.872660 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jan 30 13:42:01.872671 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:42:01.872678 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 30 13:42:01.872685 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jan 30 13:42:01.872693 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 30 13:42:01.872700 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 30 13:42:01.872732 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:42:01.872751 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 30 13:42:01.872758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 30 13:42:01.872766 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:42:01.872777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 30 13:42:01.872785 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 30 13:42:01.872792 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:42:01.872799 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 30 13:42:01.872807 kernel: TSC deadline timer available
Jan 30 13:42:01.872814 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jan 30 13:42:01.872821 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 30 13:42:01.872829 kernel: kvm-guest: KVM setup pv remote TLB flush
Jan 30 13:42:01.872836 kernel: kvm-guest: setup PV sched yield
Jan 30 13:42:01.872846 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jan 30 13:42:01.872853 kernel: Booting paravirtualized kernel on KVM
Jan 30 13:42:01.872861 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:42:01.872869 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jan 30 13:42:01.872876 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Jan 30 13:42:01.872883 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Jan 30 13:42:01.872891 kernel: pcpu-alloc: [0] 0 1 2 3
Jan 30 13:42:01.872898 kernel: kvm-guest: PV spinlocks enabled
Jan 30 13:42:01.872905 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:42:01.872917 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:42:01.872925 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:42:01.872938 kernel: random: crng init done
Jan 30 13:42:01.872951 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:42:01.872961 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:42:01.872971 kernel: Fallback order for Node 0: 0
Jan 30 13:42:01.872981 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jan 30 13:42:01.872991 kernel: Policy zone: DMA32
Jan 30 13:42:01.873007 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:42:01.873018 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 136900K reserved, 0K cma-reserved)
Jan 30 13:42:01.873028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 30 13:42:01.873038 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:42:01.873048 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:42:01.873065 kernel: Dynamic Preempt: voluntary
Jan 30 13:42:01.873075 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:42:01.873086 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:42:01.873097 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 30 13:42:01.873117 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:42:01.873130 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:42:01.873140 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:42:01.873151 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:42:01.873160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 30 13:42:01.873169 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jan 30 13:42:01.873176 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:42:01.873183 kernel: Console: colour VGA+ 80x25
Jan 30 13:42:01.873191 kernel: printk: console [ttyS0] enabled
Jan 30 13:42:01.873198 kernel: ACPI: Core revision 20230628
Jan 30 13:42:01.873209 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 30 13:42:01.873216 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:42:01.873223 kernel: x2apic enabled
Jan 30 13:42:01.873230 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:42:01.873238 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jan 30 13:42:01.873246 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jan 30 13:42:01.873254 kernel: kvm-guest: setup PV IPIs
Jan 30 13:42:01.873272 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:42:01.873279 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 30 13:42:01.873287 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jan 30 13:42:01.873294 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 30 13:42:01.873305 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 30 13:42:01.873312 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 30 13:42:01.873320 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:42:01.873327 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:42:01.873335 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:42:01.873345 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:42:01.873353 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 30 13:42:01.873360 kernel: RETBleed: Mitigation: untrained return thunk
Jan 30 13:42:01.873368 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:42:01.873375 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:42:01.873383 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 30 13:42:01.873391 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 30 13:42:01.873399 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 30 13:42:01.873409 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:42:01.873417 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:42:01.873425 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:42:01.873436 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:42:01.873447 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:42:01.873459 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:42:01.873467 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:42:01.873477 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:42:01.873484 kernel: landlock: Up and running.
Jan 30 13:42:01.873494 kernel: SELinux: Initializing.
Jan 30 13:42:01.873502 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:42:01.873509 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 30 13:42:01.873517 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 30 13:42:01.873525 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:42:01.873532 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:42:01.873540 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 30 13:42:01.873548 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 30 13:42:01.873555 kernel: ... version: 0
Jan 30 13:42:01.873565 kernel: ... bit width: 48
Jan 30 13:42:01.873572 kernel: ... generic registers: 6
Jan 30 13:42:01.873580 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:42:01.873587 kernel: ... max period: 00007fffffffffff
Jan 30 13:42:01.873595 kernel: ... fixed-purpose events: 0
Jan 30 13:42:01.873602 kernel: ... event mask: 000000000000003f
Jan 30 13:42:01.873610 kernel: signal: max sigframe size: 1776
Jan 30 13:42:01.873617 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:42:01.873625 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:42:01.873635 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:42:01.873643 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:42:01.873650 kernel: .... node #0, CPUs: #1 #2 #3
Jan 30 13:42:01.873658 kernel: smp: Brought up 1 node, 4 CPUs
Jan 30 13:42:01.873665 kernel: smpboot: Max logical packages: 1
Jan 30 13:42:01.873673 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jan 30 13:42:01.873680 kernel: devtmpfs: initialized
Jan 30 13:42:01.873688 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:42:01.873695 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:42:01.873750 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 30 13:42:01.873761 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:42:01.873771 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:42:01.873781 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:42:01.873791 kernel: audit: type=2000 audit(1738244521.404:1): state=initialized audit_enabled=0 res=1
Jan 30 13:42:01.873801 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:42:01.873811 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:42:01.873821 kernel: cpuidle: using governor menu
Jan 30 13:42:01.873831 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:42:01.873846 kernel: dca service started, version 1.12.1
Jan 30 13:42:01.873856 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jan 30 13:42:01.873866 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jan 30 13:42:01.873876 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:42:01.873886 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:42:01.873895 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:42:01.873903 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:42:01.873911 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:42:01.873918 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:42:01.873928 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:42:01.873936 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:42:01.873943 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:42:01.873951 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:42:01.873959 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:42:01.873966 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:42:01.873974 kernel: ACPI: Interpreter enabled
Jan 30 13:42:01.873982 kernel: ACPI: PM: (supports S0 S3 S5)
Jan 30 13:42:01.873989 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:42:01.873997 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:42:01.874007 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:42:01.874015 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 30 13:42:01.874022 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 30 13:42:01.874215 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:42:01.874345 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 30 13:42:01.874469 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 30 13:42:01.874480 kernel: PCI host bridge to bus 0000:00
Jan 30 13:42:01.874611 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:42:01.874748 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 30 13:42:01.874866 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:42:01.874976 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jan 30 13:42:01.875086 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:42:01.875196 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jan 30 13:42:01.875307 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 30 13:42:01.875453 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jan 30 13:42:01.875584 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jan 30 13:42:01.875718 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jan 30 13:42:01.875869 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jan 30 13:42:01.875992 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jan 30 13:42:01.876119 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:42:01.876255 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jan 30 13:42:01.876377 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jan 30 13:42:01.876503 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jan 30 13:42:01.876629 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jan 30 13:42:01.876811 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jan 30 13:42:01.876934 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jan 30 13:42:01.877053 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jan 30 13:42:01.877180 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jan 30 13:42:01.877318 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jan 30 13:42:01.877441 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jan 30 13:42:01.877566 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jan 30 13:42:01.877688 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jan 30 13:42:01.877868 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jan 30 13:42:01.878005 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jan 30 13:42:01.878144 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 30 13:42:01.878317 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jan 30 13:42:01.878474 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jan 30 13:42:01.878598 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jan 30 13:42:01.878771 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jan 30 13:42:01.878911 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jan 30 13:42:01.878927 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 30 13:42:01.878936 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 30 13:42:01.878944 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:42:01.878952 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 30 13:42:01.878960 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 30 13:42:01.878968 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 30 13:42:01.878976 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 30 13:42:01.878984 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 30 13:42:01.878992 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 30 13:42:01.879005 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 30 13:42:01.879017 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 30 13:42:01.879027 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 30 13:42:01.879038 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 30 13:42:01.879049 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 30 13:42:01.879057 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 30 13:42:01.879065 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 30 13:42:01.879073 kernel: iommu: Default domain type: Translated
Jan 30 13:42:01.879081 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:42:01.879091 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:42:01.879099 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:42:01.879108 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 30 13:42:01.879119 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jan 30 13:42:01.879297 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 30 13:42:01.879488 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 30 13:42:01.879635 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:42:01.879647 kernel: vgaarb: loaded
Jan 30 13:42:01.879655 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 30 13:42:01.879667 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 30 13:42:01.879675 kernel: clocksource: Switched to clocksource kvm-clock
Jan 30 13:42:01.879682 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:42:01.879691 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:42:01.879699 kernel: pnp: PnP ACPI init
Jan 30 13:42:01.879949 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jan 30 13:42:01.879968 kernel: pnp: PnP ACPI: found 6 devices
Jan 30 13:42:01.879980 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:42:01.879994 kernel: NET: Registered PF_INET protocol family
Jan 30 13:42:01.880002 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:42:01.880010 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 30 13:42:01.880019 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:42:01.880027 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:42:01.880035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 30 13:42:01.880043 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 30 13:42:01.880051 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:42:01.880059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 30 13:42:01.880069 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:42:01.880077 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:42:01.880226 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 30 13:42:01.880389 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 30 13:42:01.880537 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 30 13:42:01.880765 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jan 30 13:42:01.880904 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jan 30 13:42:01.881016 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jan 30 13:42:01.881032 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:42:01.881040 kernel: Initialise system trusted keyrings
Jan 30 13:42:01.881050 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 30 13:42:01.881061 kernel: Key type asymmetric registered
Jan 30 13:42:01.881072 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:42:01.881083 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:42:01.881092 kernel: io scheduler mq-deadline registered
Jan 30 13:42:01.881100 kernel: io scheduler kyber registered
Jan 30 13:42:01.881108 kernel: io scheduler bfq registered
Jan 30 13:42:01.881123 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:42:01.881135 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 30 13:42:01.881145 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 30 13:42:01.881154 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jan 30 13:42:01.881162 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:42:01.881170 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:42:01.881178 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 30 13:42:01.881185 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 30 13:42:01.881194 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 30 13:42:01.881348 kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 30 13:42:01.881361 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 30 13:42:01.881498 kernel: rtc_cmos 00:04: registered as rtc0
Jan 30 13:42:01.881618 kernel: rtc_cmos 00:04: setting system clock to 2025-01-30T13:42:01 UTC (1738244521)
Jan 30 13:42:01.881759 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 30 13:42:01.881770 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 30 13:42:01.881778 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:42:01.881786 kernel: Segment Routing with IPv6
Jan 30 13:42:01.881798 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:42:01.881806 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:42:01.881814 kernel: Key type dns_resolver registered
Jan 30 13:42:01.881822 kernel: IPI shorthand broadcast: enabled
Jan 30 13:42:01.881830 kernel: sched_clock: Marking stable (626002784, 110711355)->(751648419, -14934280)
Jan 30 13:42:01.881838 kernel: registered taskstats version 1
Jan 30 13:42:01.881846 kernel: Loading compiled-in X.509 certificates
Jan 30 13:42:01.881854 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:42:01.881862 kernel: Key type .fscrypt registered
Jan 30 13:42:01.881873 kernel: Key type fscrypt-provisioning registered
Jan 30 13:42:01.881881 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:42:01.881889 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:42:01.881897 kernel: ima: No architecture policies found
Jan 30 13:42:01.881904 kernel: clk: Disabling unused clocks
Jan 30 13:42:01.881921 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:42:01.881943 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:42:01.881964 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:42:01.881988 kernel: Run /init as init process
Jan 30 13:42:01.882000 kernel: with arguments:
Jan 30 13:42:01.882008 kernel: /init
Jan 30 13:42:01.882016 kernel: with environment:
Jan 30 13:42:01.882024 kernel: HOME=/
Jan 30 13:42:01.882032 kernel: TERM=linux
Jan 30 13:42:01.882040 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:42:01.882053 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:42:01.882068 systemd[1]: Detected virtualization kvm.
Jan 30 13:42:01.882082 systemd[1]: Detected architecture x86-64.
Jan 30 13:42:01.882093 systemd[1]: Running in initrd.
Jan 30 13:42:01.882103 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:42:01.882114 systemd[1]: Hostname set to .
Jan 30 13:42:01.882126 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:42:01.882137 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:42:01.882148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:42:01.882159 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:42:01.882171 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:42:01.882192 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:42:01.882204 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:42:01.882213 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:42:01.882224 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:42:01.882237 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:42:01.882246 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:42:01.882254 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:42:01.882263 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:42:01.882271 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:42:01.882280 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:42:01.882288 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:42:01.882297 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:42:01.882308 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:42:01.882317 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:42:01.882325 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:42:01.882334 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:42:01.882342 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:42:01.882351 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:42:01.882359 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:42:01.882370 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:42:01.882385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:42:01.882395 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:42:01.882404 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:42:01.882412 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:42:01.882421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:42:01.882430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:42:01.882438 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:42:01.882447 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:42:01.882455 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:42:01.882496 systemd-journald[193]: Collecting audit messages is disabled. Jan 30 13:42:01.882530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:42:01.882543 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:42:01.882552 systemd-journald[193]: Journal started Jan 30 13:42:01.882573 systemd-journald[193]: Runtime Journal (/run/log/journal/d364e0f1ce0b47f5839f34c122867b71) is 6.0M, max 48.4M, 42.3M free. Jan 30 13:42:01.883565 systemd-modules-load[194]: Inserted module 'overlay' Jan 30 13:42:01.923269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:42:01.923300 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jan 30 13:42:01.923313 kernel: Bridge firewalling registered Jan 30 13:42:01.923323 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:42:01.910328 systemd-modules-load[194]: Inserted module 'br_netfilter' Jan 30 13:42:01.935151 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:42:01.936846 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:42:01.952011 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:42:01.953098 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:42:01.954277 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:42:01.954837 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:42:01.972858 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:42:01.973212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:42:01.979972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:42:01.982850 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:42:01.987329 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 30 13:42:02.007488 dracut-cmdline[231]: dracut-dracut-053 Jan 30 13:42:02.011349 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:42:02.011360 systemd-resolved[228]: Positive Trust Anchors: Jan 30 13:42:02.011368 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:42:02.011399 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:42:02.013948 systemd-resolved[228]: Defaulting to hostname 'linux'. Jan 30 13:42:02.015024 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:42:02.018347 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:42:02.108767 kernel: SCSI subsystem initialized Jan 30 13:42:02.119746 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:42:02.131729 kernel: iscsi: registered transport (tcp) Jan 30 13:42:02.152761 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:42:02.152842 kernel: QLogic iSCSI HBA Driver Jan 30 13:42:02.204907 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Jan 30 13:42:02.211867 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:42:02.239358 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:42:02.239449 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:42:02.239462 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:42:02.280756 kernel: raid6: avx2x4 gen() 30014 MB/s Jan 30 13:42:02.297761 kernel: raid6: avx2x2 gen() 30723 MB/s Jan 30 13:42:02.314853 kernel: raid6: avx2x1 gen() 25148 MB/s Jan 30 13:42:02.314921 kernel: raid6: using algorithm avx2x2 gen() 30723 MB/s Jan 30 13:42:02.332852 kernel: raid6: .... xor() 19656 MB/s, rmw enabled Jan 30 13:42:02.332939 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:42:02.352771 kernel: xor: automatically using best checksumming function avx Jan 30 13:42:02.502753 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:42:02.513935 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:42:02.525904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:42:02.537379 systemd-udevd[413]: Using default interface naming scheme 'v255'. Jan 30 13:42:02.541912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:42:02.551902 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:42:02.563742 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation Jan 30 13:42:02.592145 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:42:02.604862 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:42:02.668597 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:42:02.678894 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 30 13:42:02.694425 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:42:02.697642 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:42:02.700944 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:42:02.702644 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:42:02.714738 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:42:02.714781 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 30 13:42:02.740498 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jan 30 13:42:02.740683 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 30 13:42:02.740701 kernel: GPT:9289727 != 19775487 Jan 30 13:42:02.740747 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 30 13:42:02.740763 kernel: GPT:9289727 != 19775487 Jan 30 13:42:02.740783 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:42:02.740797 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:42:02.717547 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:42:02.748069 kernel: libata version 3.00 loaded. Jan 30 13:42:02.748090 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:42:02.748107 kernel: AES CTR mode by8 optimization enabled Jan 30 13:42:02.728104 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:42:02.728239 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:42:02.729924 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:42:02.731369 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 30 13:42:02.762801 kernel: ahci 0000:00:1f.2: version 3.0 Jan 30 13:42:02.781871 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 30 13:42:02.781890 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jan 30 13:42:02.782368 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 30 13:42:02.782521 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (468) Jan 30 13:42:02.782533 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (464) Jan 30 13:42:02.782550 kernel: scsi host0: ahci Jan 30 13:42:02.782701 kernel: scsi host1: ahci Jan 30 13:42:02.782876 kernel: scsi host2: ahci Jan 30 13:42:02.783017 kernel: scsi host3: ahci Jan 30 13:42:02.783157 kernel: scsi host4: ahci Jan 30 13:42:02.783341 kernel: scsi host5: ahci Jan 30 13:42:02.783578 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jan 30 13:42:02.783776 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jan 30 13:42:02.783812 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jan 30 13:42:02.783823 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jan 30 13:42:02.783833 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jan 30 13:42:02.783844 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jan 30 13:42:02.731576 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:42:02.734136 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:42:02.739195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:42:02.755576 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:42:02.779393 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jan 30 13:42:02.819767 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:42:02.831009 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 30 13:42:02.840027 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 30 13:42:02.847836 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 30 13:42:02.850493 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 30 13:42:02.867952 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:42:02.871177 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:42:02.892591 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:42:03.090411 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 30 13:42:03.090491 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 30 13:42:03.090517 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 30 13:42:03.090527 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 30 13:42:03.091736 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jan 30 13:42:03.092743 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 30 13:42:03.092768 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 30 13:42:03.093338 kernel: ata3.00: applying bridge limits Jan 30 13:42:03.094782 kernel: ata3.00: configured for UDMA/100 Jan 30 13:42:03.094852 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 30 13:42:03.126178 disk-uuid[552]: Primary Header is updated. Jan 30 13:42:03.126178 disk-uuid[552]: Secondary Entries is updated. Jan 30 13:42:03.126178 disk-uuid[552]: Secondary Header is updated. 
Jan 30 13:42:03.129924 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:42:03.133739 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:42:03.137730 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:42:03.147745 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 30 13:42:03.159664 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:42:03.159692 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:42:04.137639 disk-uuid[574]: The operation has completed successfully. Jan 30 13:42:04.139273 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 30 13:42:04.167218 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:42:04.167339 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:42:04.191883 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:42:04.195118 sh[592]: Success Jan 30 13:42:04.207724 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jan 30 13:42:04.239512 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:42:04.254364 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:42:04.258069 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:42:04.268527 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:42:04.268557 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:42:04.268569 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:42:04.269553 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:42:04.270926 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:42:04.275131 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. 
Jan 30 13:42:04.277544 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:42:04.289870 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:42:04.291418 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:42:04.304667 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:42:04.304751 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:42:04.304771 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:42:04.307734 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:42:04.317135 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:42:04.318858 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:42:04.328649 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:42:04.336973 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 30 13:42:04.397436 ignition[688]: Ignition 2.19.0 Jan 30 13:42:04.397453 ignition[688]: Stage: fetch-offline Jan 30 13:42:04.397501 ignition[688]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:42:04.397512 ignition[688]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:42:04.397653 ignition[688]: parsed url from cmdline: "" Jan 30 13:42:04.397658 ignition[688]: no config URL provided Jan 30 13:42:04.397665 ignition[688]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:42:04.397692 ignition[688]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:42:04.397746 ignition[688]: op(1): [started] loading QEMU firmware config module Jan 30 13:42:04.397754 ignition[688]: op(1): executing: "modprobe" "qemu_fw_cfg" Jan 30 13:42:04.411085 ignition[688]: op(1): [finished] loading QEMU firmware config module Jan 30 13:42:04.423556 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:42:04.428685 ignition[688]: parsing config with SHA512: bb10ec02efbc90ca059c24676e15de1f54c0a74e823a6cf9c122cab8128ffa07635a657369ced94bdc50a36398df11b778c52d2d2c3ff444c10ae8c52a0afcea Jan 30 13:42:04.432541 unknown[688]: fetched base config from "system" Jan 30 13:42:04.432696 unknown[688]: fetched user config from "qemu" Jan 30 13:42:04.433260 ignition[688]: fetch-offline: fetch-offline passed Jan 30 13:42:04.433351 ignition[688]: Ignition finished successfully Jan 30 13:42:04.435544 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:42:04.437569 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:42:04.456664 systemd-networkd[781]: lo: Link UP Jan 30 13:42:04.456684 systemd-networkd[781]: lo: Gained carrier Jan 30 13:42:04.458295 systemd-networkd[781]: Enumeration completed Jan 30 13:42:04.458397 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 30 13:42:04.458689 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:42:04.458693 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:42:04.459580 systemd-networkd[781]: eth0: Link UP Jan 30 13:42:04.459583 systemd-networkd[781]: eth0: Gained carrier Jan 30 13:42:04.459590 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:42:04.462222 systemd[1]: Reached target network.target - Network. Jan 30 13:42:04.465384 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:42:04.472845 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:42:04.476765 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jan 30 13:42:04.487567 ignition[785]: Ignition 2.19.0 Jan 30 13:42:04.487579 ignition[785]: Stage: kargs Jan 30 13:42:04.487793 ignition[785]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:42:04.487804 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:42:04.488582 ignition[785]: kargs: kargs passed Jan 30 13:42:04.488621 ignition[785]: Ignition finished successfully Jan 30 13:42:04.491815 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:42:04.501937 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 30 13:42:04.515331 ignition[795]: Ignition 2.19.0 Jan 30 13:42:04.515344 ignition[795]: Stage: disks Jan 30 13:42:04.515524 ignition[795]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:42:04.515536 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:42:04.516296 ignition[795]: disks: disks passed Jan 30 13:42:04.516338 ignition[795]: Ignition finished successfully Jan 30 13:42:04.521756 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:42:04.521999 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:42:04.524780 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:42:04.524982 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:42:04.525316 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:42:04.525646 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:42:04.542863 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:42:04.554599 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 30 13:42:04.561279 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:42:04.564996 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:42:04.654745 kernel: EXT4-fs (vda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:42:04.655288 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:42:04.656013 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:42:04.669783 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:42:04.671972 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:42:04.672297 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jan 30 13:42:04.679229 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (814) Jan 30 13:42:04.679253 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:42:04.672337 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:42:04.685327 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:42:04.685351 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:42:04.685365 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:42:04.672358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:42:04.686941 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:42:04.709899 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:42:04.719869 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:42:04.756079 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:42:04.760415 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:42:04.765397 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:42:04.769422 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:42:04.852225 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:42:04.859832 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:42:04.861539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:42:04.868737 kernel: BTRFS info (device vda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:42:04.885611 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 30 13:42:04.892005 ignition[928]: INFO : Ignition 2.19.0 Jan 30 13:42:04.892005 ignition[928]: INFO : Stage: mount Jan 30 13:42:04.893952 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:42:04.893952 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:42:04.893952 ignition[928]: INFO : mount: mount passed Jan 30 13:42:04.893952 ignition[928]: INFO : Ignition finished successfully Jan 30 13:42:04.895321 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:42:04.910873 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:42:05.268010 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:42:05.288852 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:42:05.294731 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (940) Jan 30 13:42:05.296747 kernel: BTRFS info (device vda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:42:05.296768 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:42:05.296785 kernel: BTRFS info (device vda6): using free space tree Jan 30 13:42:05.299731 kernel: BTRFS info (device vda6): auto enabling async discard Jan 30 13:42:05.301478 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:42:05.319465 ignition[957]: INFO : Ignition 2.19.0 Jan 30 13:42:05.319465 ignition[957]: INFO : Stage: files Jan 30 13:42:05.321097 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:42:05.321097 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jan 30 13:42:05.321097 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:42:05.324801 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:42:05.324801 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:42:05.327924 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:42:05.329381 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:42:05.330753 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:42:05.329771 unknown[957]: wrote ssh authorized keys file for user: core Jan 30 13:42:05.333274 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:42:05.333274 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:42:05.372925 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:42:05.493502 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:42:05.493502 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:42:05.497248 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 
13:42:05.497248 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:42:05.500810 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:42:05.502493 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:42:05.504379 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:42:05.506081 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:42:05.507820 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:42:05.509660 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:42:05.511734 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:42:05.513517 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:42:05.516042 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:42:05.518506 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:42:05.520589 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:42:06.011254 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:42:06.395022 systemd-networkd[781]: eth0: Gained IPv6LL Jan 30 13:42:06.470371 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:42:06.470371 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 30 13:42:06.475028 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:42:06.498422 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:42:06.504566 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 
13:42:06.506303 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:42:06.506303 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:42:06.506303 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:42:06.506303 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:42:06.506303 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:42:06.506303 ignition[957]: INFO : files: files passed Jan 30 13:42:06.506303 ignition[957]: INFO : Ignition finished successfully Jan 30 13:42:06.516644 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:42:06.528053 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:42:06.531332 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:42:06.534346 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:42:06.535426 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:42:06.543328 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory Jan 30 13:42:06.547512 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:42:06.547512 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:42:06.550935 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:42:06.551956 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Jan 30 13:42:06.553361 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:42:06.567943 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:42:06.594252 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:42:06.594381 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:42:06.596767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:42:06.598844 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:42:06.600902 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:42:06.614849 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:42:06.631043 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:42:06.642825 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:42:06.652827 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:42:06.654134 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:42:06.656372 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:42:06.658389 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:42:06.658500 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:42:06.661007 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:42:06.662560 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:42:06.664586 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:42:06.666642 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:42:06.668658 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:42:06.670829 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:42:06.672965 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:42:06.675255 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:42:06.677237 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:42:06.679422 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:42:06.681203 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:42:06.681317 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:42:06.683625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:42:06.685082 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:42:06.687170 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:42:06.687290 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:42:06.689410 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:42:06.689527 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:42:06.691893 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:42:06.692007 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:42:06.693872 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:42:06.695582 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:42:06.698756 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:42:06.700156 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:42:06.702093 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:42:06.704133 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:42:06.704235 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:42:06.705953 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:42:06.706061 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:42:06.708012 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:42:06.708122 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:42:06.710659 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:42:06.710795 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:42:06.760024 ignition[1012]: INFO : Ignition 2.19.0
Jan 30 13:42:06.760024 ignition[1012]: INFO : Stage: umount
Jan 30 13:42:06.760024 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:42:06.760024 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 30 13:42:06.760024 ignition[1012]: INFO : umount: umount passed
Jan 30 13:42:06.760024 ignition[1012]: INFO : Ignition finished successfully
Jan 30 13:42:06.724864 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:42:06.760901 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:42:06.761827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:42:06.761951 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:42:06.763789 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:42:06.763901 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:42:06.767876 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:42:06.767985 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:42:06.770470 systemd[1]: Stopped target network.target - Network.
Jan 30 13:42:06.772531 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:42:06.774935 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:42:06.783012 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:42:06.783987 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:42:06.786006 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:42:06.786060 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:42:06.788884 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:42:06.790090 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:42:06.792439 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:42:06.794675 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:42:06.797977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:42:06.799761 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:42:06.800877 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:42:06.801760 systemd-networkd[781]: eth0: DHCPv6 lease lost
Jan 30 13:42:06.804482 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:42:06.805935 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:42:06.808084 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:42:06.808216 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:42:06.813463 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:42:06.813530 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:42:06.823931 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:42:06.825202 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:42:06.825313 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:42:06.828393 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:42:06.828496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:42:06.830805 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:42:06.830876 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:42:06.833284 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:42:06.833350 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:42:06.833601 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:42:06.844894 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:42:06.845081 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:42:06.852897 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:42:06.853153 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:42:06.855666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:42:06.855761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:42:06.857977 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:42:06.858027 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:42:06.860029 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:42:06.860102 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:42:06.862320 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:42:06.862385 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:42:06.864366 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:42:06.864428 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:42:06.883913 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:42:06.886223 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:42:06.886303 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:42:06.886435 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:42:06.886493 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:42:06.886776 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:42:06.886833 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:42:06.887148 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:42:06.887201 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:42:06.895590 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:42:06.895794 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:42:07.451514 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:42:07.452580 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:42:07.454655 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:42:07.456765 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:42:07.457752 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:42:07.473899 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:42:07.480339 systemd[1]: Switching root.
Jan 30 13:42:07.516808 systemd-journald[193]: Journal stopped
Jan 30 13:42:09.098485 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:42:09.098562 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:42:09.098588 kernel: SELinux: policy capability open_perms=1
Jan 30 13:42:09.098600 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:42:09.098611 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:42:09.098622 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:42:09.098636 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:42:09.098647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:42:09.098660 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:42:09.098678 kernel: audit: type=1403 audit(1738244528.292:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:42:09.098690 systemd[1]: Successfully loaded SELinux policy in 40.253ms.
Jan 30 13:42:09.098720 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.834ms.
Jan 30 13:42:09.098734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:42:09.098746 systemd[1]: Detected virtualization kvm.
Jan 30 13:42:09.098762 systemd[1]: Detected architecture x86-64.
Jan 30 13:42:09.098774 systemd[1]: Detected first boot.
Jan 30 13:42:09.098785 systemd[1]: Initializing machine ID from VM UUID.
Jan 30 13:42:09.098797 zram_generator::config[1056]: No configuration found.
Jan 30 13:42:09.098816 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:42:09.098827 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 30 13:42:09.098839 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 30 13:42:09.098851 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 30 13:42:09.098868 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:42:09.098880 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:42:09.098892 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:42:09.098903 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:42:09.098915 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:42:09.098927 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:42:09.098939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:42:09.098950 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:42:09.098962 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:42:09.098977 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:42:09.098989 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:42:09.099001 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:42:09.099013 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:42:09.099025 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:42:09.099037 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:42:09.099048 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:42:09.099060 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 30 13:42:09.099073 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 30 13:42:09.099091 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:42:09.099104 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:42:09.099116 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:42:09.099129 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:42:09.099141 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:42:09.099159 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:42:09.099170 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:42:09.099182 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:42:09.099197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:42:09.099209 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:42:09.099221 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:42:09.099233 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:42:09.099245 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:42:09.099256 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:42:09.099268 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:42:09.099280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:09.099292 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:42:09.099306 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:42:09.099318 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:42:09.099330 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 30 13:42:09.099347 systemd[1]: Reached target machines.target - Containers.
Jan 30 13:42:09.099359 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:42:09.099371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:42:09.099384 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:42:09.099397 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:42:09.099412 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:42:09.099423 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:42:09.099436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:42:09.099447 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:42:09.099459 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:42:09.099471 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:42:09.099483 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 30 13:42:09.099495 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 30 13:42:09.099509 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 30 13:42:09.099521 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 30 13:42:09.099538 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:42:09.099560 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:42:09.099571 kernel: loop: module loaded
Jan 30 13:42:09.099583 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:42:09.099594 kernel: fuse: init (API version 7.39)
Jan 30 13:42:09.099622 systemd-journald[1119]: Collecting audit messages is disabled.
Jan 30 13:42:09.099647 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:42:09.099659 systemd-journald[1119]: Journal started
Jan 30 13:42:09.099681 systemd-journald[1119]: Runtime Journal (/run/log/journal/d364e0f1ce0b47f5839f34c122867b71) is 6.0M, max 48.4M, 42.3M free.
Jan 30 13:42:08.853675 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:42:08.872110 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 30 13:42:08.872767 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 30 13:42:09.105731 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:42:09.112740 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 30 13:42:09.112771 systemd[1]: Stopped verity-setup.service.
Jan 30 13:42:09.112786 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:09.117508 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:42:09.118048 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:42:09.119241 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:42:09.120469 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:42:09.121740 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:42:09.123030 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:42:09.124331 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:42:09.125804 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:42:09.127408 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:42:09.127596 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:42:09.129269 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:42:09.129489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:42:09.131644 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:42:09.131724 kernel: ACPI: bus type drm_connector registered
Jan 30 13:42:09.131829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:42:09.133538 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:42:09.133733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:42:09.135364 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:42:09.135539 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:42:09.137016 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:42:09.137188 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:42:09.138607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:42:09.140144 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:42:09.141817 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:42:09.157012 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:42:09.178853 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:42:09.181422 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:42:09.182570 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:42:09.182599 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:42:09.184583 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 30 13:42:09.186891 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 30 13:42:09.190117 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 30 13:42:09.191372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:42:09.194880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:42:09.197457 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:42:09.199778 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:42:09.201903 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:42:09.203192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:42:09.204967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:42:09.207318 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 30 13:42:09.213951 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:42:09.217256 systemd-journald[1119]: Time spent on flushing to /var/log/journal/d364e0f1ce0b47f5839f34c122867b71 is 16.869ms for 952 entries.
Jan 30 13:42:09.217256 systemd-journald[1119]: System Journal (/var/log/journal/d364e0f1ce0b47f5839f34c122867b71) is 8.0M, max 195.6M, 187.6M free.
Jan 30 13:42:09.441899 systemd-journald[1119]: Received client request to flush runtime journal.
Jan 30 13:42:09.441932 kernel: loop0: detected capacity change from 0 to 205544
Jan 30 13:42:09.441956 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 30 13:42:09.441969 kernel: loop1: detected capacity change from 0 to 140768
Jan 30 13:42:09.441982 kernel: loop2: detected capacity change from 0 to 142488
Jan 30 13:42:09.441995 kernel: loop3: detected capacity change from 0 to 205544
Jan 30 13:42:09.217118 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:42:09.219796 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:42:09.221106 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:42:09.222566 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 30 13:42:09.237290 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:42:09.251259 udevadm[1163]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 30 13:42:09.289530 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:42:09.289554 systemd-tmpfiles[1156]: ACLs are not supported, ignoring.
Jan 30 13:42:09.290973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:42:09.299951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:42:09.426462 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:42:09.428956 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:42:09.435884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 30 13:42:09.445046 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:42:09.449745 kernel: loop4: detected capacity change from 0 to 140768
Jan 30 13:42:09.484744 kernel: loop5: detected capacity change from 0 to 142488
Jan 30 13:42:09.493167 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 30 13:42:09.494804 (sd-merge)[1179]: Merged extensions into '/usr'.
Jan 30 13:42:09.498867 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 30 13:42:09.498883 systemd[1]: Reloading...
Jan 30 13:42:09.544823 zram_generator::config[1217]: No configuration found.
Jan 30 13:42:09.628578 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 30 13:42:09.667512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:42:09.717371 systemd[1]: Reloading finished in 218 ms.
Jan 30 13:42:09.750154 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:42:09.760482 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 30 13:42:09.761045 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 30 13:42:09.762498 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 30 13:42:09.764122 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 30 13:42:09.781942 systemd[1]: Starting ensure-sysext.service...
Jan 30 13:42:09.784186 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:42:09.791217 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Jan 30 13:42:09.791241 systemd[1]: Reloading...
Jan 30 13:42:09.853750 zram_generator::config[1285]: No configuration found.
Jan 30 13:42:09.957957 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:42:10.007159 systemd[1]: Reloading finished in 215 ms.
Jan 30 13:42:10.034101 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:42:10.042302 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:42:10.044541 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:42:10.047192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.047362 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:42:10.048540 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:42:10.053816 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:42:10.056111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:42:10.057402 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:42:10.057512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.058435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:42:10.058618 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:42:10.060491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:42:10.060672 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:42:10.064052 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:42:10.064231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:42:10.067967 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.068203 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:42:10.069252 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 30 13:42:10.069280 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 30 13:42:10.072973 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 30 13:42:10.073271 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 30 13:42:10.074168 systemd-tmpfiles[1324]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 30 13:42:10.074435 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Jan 30 13:42:10.074510 systemd-tmpfiles[1324]: ACLs are not supported, ignoring.
Jan 30 13:42:10.077760 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:42:10.077768 systemd-tmpfiles[1324]: Skipping /boot
Jan 30 13:42:10.082027 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:42:10.084346 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:42:10.087792 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:42:10.088950 systemd-tmpfiles[1324]: Detected autofs mount point /boot during canonicalization of boot.
Jan 30 13:42:10.088966 systemd-tmpfiles[1324]: Skipping /boot
Jan 30 13:42:10.094961 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:42:10.095150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.096408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:42:10.098395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:42:10.098573 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:42:10.100212 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:42:10.100391 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:42:10.102096 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:42:10.102258 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:42:10.109432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.109648 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:42:10.119926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:42:10.122187 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:42:10.124343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:42:10.129820 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:42:10.131089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 30 13:42:10.131224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:42:10.143224 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:42:10.145002 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:42:10.145189 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:42:10.146832 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:42:10.147003 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:42:10.148645 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:42:10.148860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:42:10.150535 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:42:10.150727 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:42:10.158380 systemd[1]: Finished ensure-sysext.service.
Jan 30 13:42:10.175894 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 30 13:42:10.178420 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 30 13:42:10.180645 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 30 13:42:10.181823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:42:10.181886 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:42:10.186844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:42:10.198835 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 30 13:42:10.201986 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 30 13:42:10.203822 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:42:10.209486 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:42:10.213897 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:42:10.225449 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 30 13:42:10.237863 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 30 13:42:10.240965 systemd-udevd[1367]: Using default interface naming scheme 'v255'.
Jan 30 13:42:10.243175 augenrules[1376]: No rules
Jan 30 13:42:10.245004 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 30 13:42:10.247379 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 30 13:42:10.254495 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 30 13:42:10.265147 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:42:10.267755 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:42:10.276867 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:42:10.278350 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 30 13:42:10.286686 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 30 13:42:10.313776 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 30 13:42:10.328740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1406)
Jan 30 13:42:10.373537 systemd-networkd[1390]: lo: Link UP
Jan 30 13:42:10.373547 systemd-networkd[1390]: lo: Gained carrier
Jan 30 13:42:10.375102 systemd-networkd[1390]: Enumeration completed
Jan 30 13:42:10.375192 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:42:10.376086 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:42:10.376140 systemd-networkd[1390]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:42:10.376889 systemd-networkd[1390]: eth0: Link UP
Jan 30 13:42:10.376954 systemd-networkd[1390]: eth0: Gained carrier
Jan 30 13:42:10.377006 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:42:10.377272 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 30 13:42:10.378814 systemd[1]: Reached target time-set.target - System Time Set.
Jan 30 13:42:10.384880 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Jan 30 13:42:10.388644 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:42:10.389778 systemd-networkd[1390]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 30 13:42:10.391425 systemd-networkd[1390]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:42:10.393792 systemd-timesyncd[1355]: Network configuration changed, trying to establish connection.
Jan 30 13:42:10.928010 systemd-timesyncd[1355]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 30 13:42:10.928121 systemd-timesyncd[1355]: Initial clock synchronization to Thu 2025-01-30 13:42:10.927863 UTC.
Jan 30 13:42:10.932284 kernel: ACPI: button: Power Button [PWRF]
Jan 30 13:42:10.933601 systemd-resolved[1354]: Positive Trust Anchors:
Jan 30 13:42:10.933849 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:42:10.933922 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:42:10.934724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 30 13:42:10.938332 systemd-resolved[1354]: Defaulting to hostname 'linux'.
Jan 30 13:42:10.946438 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 30 13:42:10.948613 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:42:10.950134 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jan 30 13:42:10.955148 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 30 13:42:10.955379 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 30 13:42:10.950359 systemd[1]: Reached target network.target - Network.
Jan 30 13:42:10.951439 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:42:10.961252 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Jan 30 13:42:10.969333 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 30 13:42:10.985499 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:42:10.994266 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:42:11.073688 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:42:11.088641 kernel: kvm_amd: TSC scaling supported
Jan 30 13:42:11.088680 kernel: kvm_amd: Nested Virtualization enabled
Jan 30 13:42:11.088709 kernel: kvm_amd: Nested Paging enabled
Jan 30 13:42:11.088722 kernel: kvm_amd: LBR virtualization supported
Jan 30 13:42:11.089707 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jan 30 13:42:11.089721 kernel: kvm_amd: Virtual GIF supported
Jan 30 13:42:11.111734 kernel: EDAC MC: Ver: 3.0.0
Jan 30 13:42:11.136188 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:42:11.151393 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:42:11.160489 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:42:11.192348 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 30 13:42:11.193915 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:42:11.195038 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:42:11.196194 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 30 13:42:11.197430 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 30 13:42:11.198920 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 30 13:42:11.200130 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 30 13:42:11.201397 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 30 13:42:11.202621 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 30 13:42:11.202656 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:42:11.203572 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:42:11.205051 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 30 13:42:11.207805 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 30 13:42:11.215716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 30 13:42:11.217953 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 30 13:42:11.219532 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 30 13:42:11.220672 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:42:11.221631 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:42:11.222600 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:42:11.222630 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 30 13:42:11.223650 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 30 13:42:11.225774 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 30 13:42:11.228480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 30 13:42:11.232126 lvm[1440]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:42:11.232476 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 30 13:42:11.233572 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 30 13:42:11.236459 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 30 13:42:11.240470 jq[1443]: false
Jan 30 13:42:11.241248 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 30 13:42:11.246457 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 30 13:42:11.250018 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 30 13:42:11.256000 dbus-daemon[1442]: [system] SELinux support is enabled
Jan 30 13:42:11.257389 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 30 13:42:11.258943 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 30 13:42:11.259574 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 30 13:42:11.260437 systemd[1]: Starting update-engine.service - Update Engine...
Jan 30 13:42:11.263754 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 30 13:42:11.266149 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 30 13:42:11.268189 extend-filesystems[1444]: Found loop3
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found loop4
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found loop5
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found sr0
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda1
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda2
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda3
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found usr
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda4
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda6
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda7
Jan 30 13:42:11.271598 extend-filesystems[1444]: Found vda9
Jan 30 13:42:11.271598 extend-filesystems[1444]: Checking size of /dev/vda9
Jan 30 13:42:11.284290 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 30 13:42:11.285666 jq[1458]: true
Jan 30 13:42:11.286803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 30 13:42:11.287062 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 30 13:42:11.287471 systemd[1]: motdgen.service: Deactivated successfully.
Jan 30 13:42:11.287713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 30 13:42:11.290962 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 30 13:42:11.291210 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 30 13:42:11.298436 extend-filesystems[1444]: Resized partition /dev/vda9
Jan 30 13:42:11.314351 extend-filesystems[1470]: resize2fs 1.47.1 (20-May-2024)
Jan 30 13:42:11.320059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1401)
Jan 30 13:42:11.319432 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 30 13:42:11.320163 update_engine[1457]: I20250130 13:42:11.318936 1457 main.cc:92] Flatcar Update Engine starting
Jan 30 13:42:11.319462 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 30 13:42:11.325073 tar[1464]: linux-amd64/helm
Jan 30 13:42:11.325964 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 30 13:42:11.325989 jq[1467]: true
Jan 30 13:42:11.321194 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 30 13:42:11.321220 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 30 13:42:11.329772 systemd[1]: Started update-engine.service - Update Engine.
Jan 30 13:42:11.332376 update_engine[1457]: I20250130 13:42:11.330018 1457 update_check_scheduler.cc:74] Next update check in 10m32s
Jan 30 13:42:11.332428 (ntainerd)[1479]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 30 13:42:11.333398 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 30 13:42:11.346071 systemd-logind[1452]: Watching system buttons on /dev/input/event1 (Power Button)
Jan 30 13:42:11.346105 systemd-logind[1452]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 30 13:42:11.346355 systemd-logind[1452]: New seat seat0.
Jan 30 13:42:11.348183 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 30 13:42:11.358295 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 30 13:42:11.385536 extend-filesystems[1470]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 30 13:42:11.385536 extend-filesystems[1470]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 30 13:42:11.385536 extend-filesystems[1470]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 30 13:42:11.393227 extend-filesystems[1444]: Resized filesystem in /dev/vda9
Jan 30 13:42:11.388515 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 30 13:42:11.388751 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 30 13:42:11.398182 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Jan 30 13:42:11.400830 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 30 13:42:11.404143 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 30 13:42:11.410532 locksmithd[1482]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 30 13:42:11.521659 containerd[1479]: time="2025-01-30T13:42:11.521584200Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 30 13:42:11.543727 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 30 13:42:11.545640 containerd[1479]: time="2025-01-30T13:42:11.545497716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.547061 containerd[1479]: time="2025-01-30T13:42:11.547034108Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547104980Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547123796Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547311107Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547331135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547395976Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547408289Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547591443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547604998Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547618303Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547627791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547716628Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548391 containerd[1479]: time="2025-01-30T13:42:11.547937271Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548648 containerd[1479]: time="2025-01-30T13:42:11.548060062Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 30 13:42:11.548648 containerd[1479]: time="2025-01-30T13:42:11.548073477Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 30 13:42:11.548648 containerd[1479]: time="2025-01-30T13:42:11.548179385Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 30 13:42:11.548648 containerd[1479]: time="2025-01-30T13:42:11.548251891Z" level=info msg="metadata content store policy set" policy=shared
Jan 30 13:42:11.555298 containerd[1479]: time="2025-01-30T13:42:11.555276054Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 30 13:42:11.555400 containerd[1479]: time="2025-01-30T13:42:11.555382905Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 30 13:42:11.555471 containerd[1479]: time="2025-01-30T13:42:11.555454369Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 30 13:42:11.555525 containerd[1479]: time="2025-01-30T13:42:11.555513279Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 30 13:42:11.555571 containerd[1479]: time="2025-01-30T13:42:11.555561019Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 30 13:42:11.555739 containerd[1479]: time="2025-01-30T13:42:11.555723103Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 30 13:42:11.556053 containerd[1479]: time="2025-01-30T13:42:11.556030228Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 30 13:42:11.556219 containerd[1479]: time="2025-01-30T13:42:11.556203584Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 30 13:42:11.556314 containerd[1479]: time="2025-01-30T13:42:11.556300145Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 30 13:42:11.556364 containerd[1479]: time="2025-01-30T13:42:11.556352864Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 30 13:42:11.556412 containerd[1479]: time="2025-01-30T13:42:11.556400563Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556457 containerd[1479]: time="2025-01-30T13:42:11.556446409Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556502 containerd[1479]: time="2025-01-30T13:42:11.556491514Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556563 containerd[1479]: time="2025-01-30T13:42:11.556549502Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556612 containerd[1479]: time="2025-01-30T13:42:11.556600618Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556658 containerd[1479]: time="2025-01-30T13:42:11.556647346Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556702 containerd[1479]: time="2025-01-30T13:42:11.556691949Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556748 containerd[1479]: time="2025-01-30T13:42:11.556736232Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 30 13:42:11.556805 containerd[1479]: time="2025-01-30T13:42:11.556794041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.556865 containerd[1479]: time="2025-01-30T13:42:11.556852450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.556913 containerd[1479]: time="2025-01-30T13:42:11.556902013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.556972 containerd[1479]: time="2025-01-30T13:42:11.556960122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557046 containerd[1479]: time="2025-01-30T13:42:11.557032769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557114 containerd[1479]: time="2025-01-30T13:42:11.557101628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557189 containerd[1479]: time="2025-01-30T13:42:11.557176628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557271 containerd[1479]: time="2025-01-30T13:42:11.557229387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557342 containerd[1479]: time="2025-01-30T13:42:11.557314106Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557392 containerd[1479]: time="2025-01-30T13:42:11.557381282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557450 containerd[1479]: time="2025-01-30T13:42:11.557438700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557503 containerd[1479]: time="2025-01-30T13:42:11.557491789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557584 containerd[1479]: time="2025-01-30T13:42:11.557571489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557655 containerd[1479]: time="2025-01-30T13:42:11.557643774Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 30 13:42:11.557721 containerd[1479]: time="2025-01-30T13:42:11.557709678Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557777 containerd[1479]: time="2025-01-30T13:42:11.557765292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.557832 containerd[1479]: time="2025-01-30T13:42:11.557811399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 30 13:42:11.557924 containerd[1479]: time="2025-01-30T13:42:11.557907489Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 30 13:42:11.558028 containerd[1479]: time="2025-01-30T13:42:11.557998620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 30 13:42:11.558095 containerd[1479]: time="2025-01-30T13:42:11.558081966Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 30 13:42:11.558162 containerd[1479]: time="2025-01-30T13:42:11.558133162Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 30 13:42:11.558207 containerd[1479]: time="2025-01-30T13:42:11.558195870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.558292 containerd[1479]: time="2025-01-30T13:42:11.558279517Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 30 13:42:11.558353 containerd[1479]: time="2025-01-30T13:42:11.558327236Z" level=info msg="NRI interface is disabled by configuration."
Jan 30 13:42:11.558400 containerd[1479]: time="2025-01-30T13:42:11.558388822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 30 13:42:11.558906 containerd[1479]: time="2025-01-30T13:42:11.558688794Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 30 13:42:11.559132 containerd[1479]: time="2025-01-30T13:42:11.559114202Z" level=info msg="Connect containerd service"
Jan 30 13:42:11.559317 containerd[1479]: time="2025-01-30T13:42:11.559299279Z" level=info msg="using legacy CRI server"
Jan 30 13:42:11.559395 containerd[1479]: time="2025-01-30T13:42:11.559379730Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 30 13:42:11.559625 containerd[1479]: time="2025-01-30T13:42:11.559586648Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 30 13:42:11.560754 containerd[1479]: time="2025-01-30T13:42:11.560724081Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 30 13:42:11.560918 containerd[1479]: time="2025-01-30T13:42:11.560881667Z" level=info msg="Start subscribing containerd event"
Jan 30 13:42:11.561093 containerd[1479]: time="2025-01-30T13:42:11.561066814Z" level=info msg="Start recovering state"
Jan 30 13:42:11.561196 containerd[1479]: time="2025-01-30T13:42:11.561174836Z" level=info msg="Start event monitor"
Jan 30 13:42:11.561266 containerd[1479]: time="2025-01-30T13:42:11.561212447Z" level=info msg="Start snapshots syncer"
Jan 30 13:42:11.561266 containerd[1479]: time="2025-01-30T13:42:11.561225562Z" level=info msg="Start cni network conf syncer for default"
Jan 30 13:42:11.561322 containerd[1479]: time="2025-01-30T13:42:11.561283721Z" level=info msg="Start streaming server"
Jan 30 13:42:11.561555 containerd[1479]: time="2025-01-30T13:42:11.561535272Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 30 13:42:11.561697 containerd[1479]: time="2025-01-30T13:42:11.561679914Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 30 13:42:11.561926 systemd[1]: Started containerd.service - containerd container runtime.
Jan 30 13:42:11.563602 containerd[1479]: time="2025-01-30T13:42:11.563581540Z" level=info msg="containerd successfully booted in 0.041385s"
Jan 30 13:42:11.568212 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 30 13:42:11.581562 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 30 13:42:11.593305 systemd[1]: issuegen.service: Deactivated successfully.
Jan 30 13:42:11.593521 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 30 13:42:11.603517 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 30 13:42:11.615091 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 30 13:42:11.618411 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 30 13:42:11.620745 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 30 13:42:11.622196 systemd[1]: Reached target getty.target - Login Prompts.
Jan 30 13:42:11.747075 tar[1464]: linux-amd64/LICENSE Jan 30 13:42:11.747180 tar[1464]: linux-amd64/README.md Jan 30 13:42:11.761232 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:42:12.751380 systemd-networkd[1390]: eth0: Gained IPv6LL Jan 30 13:42:12.754412 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:42:12.756268 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:42:12.765446 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jan 30 13:42:12.767957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:12.770331 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:42:12.790630 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:42:12.791325 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jan 30 13:42:12.793378 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:42:12.796033 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:42:13.384857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:13.386757 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:42:13.388205 systemd[1]: Startup finished in 755ms (kernel) + 6.603s (initrd) + 4.601s (userspace) = 11.960s. 
Jan 30 13:42:13.400719 (kubelet)[1554]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:13.803292 kubelet[1554]: E0130 13:42:13.803148 1554 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:13.807306 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:13.807563 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:42:21.148691 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:42:21.150024 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:53234.service - OpenSSH per-connection server daemon (10.0.0.1:53234). Jan 30 13:42:21.195850 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 53234 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:21.197873 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:21.207542 systemd-logind[1452]: New session 1 of user core. Jan 30 13:42:21.208980 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:42:21.219484 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:42:21.233774 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:42:21.236948 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:42:21.245198 (systemd)[1571]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:42:21.353134 systemd[1571]: Queued start job for default target default.target. 
Jan 30 13:42:21.368566 systemd[1571]: Created slice app.slice - User Application Slice. Jan 30 13:42:21.368592 systemd[1571]: Reached target paths.target - Paths. Jan 30 13:42:21.368606 systemd[1571]: Reached target timers.target - Timers. Jan 30 13:42:21.370143 systemd[1571]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:42:21.382161 systemd[1571]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:42:21.382303 systemd[1571]: Reached target sockets.target - Sockets. Jan 30 13:42:21.382322 systemd[1571]: Reached target basic.target - Basic System. Jan 30 13:42:21.382357 systemd[1571]: Reached target default.target - Main User Target. Jan 30 13:42:21.382389 systemd[1571]: Startup finished in 129ms. Jan 30 13:42:21.382991 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:42:21.384872 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:42:21.446607 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:53250.service - OpenSSH per-connection server daemon (10.0.0.1:53250). Jan 30 13:42:21.503333 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 53250 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:21.504961 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:21.508604 systemd-logind[1452]: New session 2 of user core. Jan 30 13:42:21.518379 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:42:21.571434 sshd[1582]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:21.584208 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:53250.service: Deactivated successfully. Jan 30 13:42:21.586139 systemd[1]: session-2.scope: Deactivated successfully. Jan 30 13:42:21.587860 systemd-logind[1452]: Session 2 logged out. Waiting for processes to exit. Jan 30 13:42:21.599533 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:53256.service - OpenSSH per-connection server daemon (10.0.0.1:53256). 
Jan 30 13:42:21.600538 systemd-logind[1452]: Removed session 2. Jan 30 13:42:21.631910 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 53256 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:21.633395 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:21.637674 systemd-logind[1452]: New session 3 of user core. Jan 30 13:42:21.647389 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:42:21.697629 sshd[1589]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:21.706906 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:53256.service: Deactivated successfully. Jan 30 13:42:21.708537 systemd[1]: session-3.scope: Deactivated successfully. Jan 30 13:42:21.709909 systemd-logind[1452]: Session 3 logged out. Waiting for processes to exit. Jan 30 13:42:21.711157 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:53270.service - OpenSSH per-connection server daemon (10.0.0.1:53270). Jan 30 13:42:21.711984 systemd-logind[1452]: Removed session 3. Jan 30 13:42:21.748988 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 53270 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:21.750576 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:21.754474 systemd-logind[1452]: New session 4 of user core. Jan 30 13:42:21.776353 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:42:21.830632 sshd[1596]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:21.844166 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:53270.service: Deactivated successfully. Jan 30 13:42:21.845909 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:42:21.847270 systemd-logind[1452]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:42:21.848635 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:53278.service - OpenSSH per-connection server daemon (10.0.0.1:53278). 
Jan 30 13:42:21.849464 systemd-logind[1452]: Removed session 4. Jan 30 13:42:21.884599 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 53278 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc Jan 30 13:42:21.886051 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:42:21.889686 systemd-logind[1452]: New session 5 of user core. Jan 30 13:42:21.903413 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:42:22.337693 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:42:22.338043 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:42:22.633480 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:42:22.633630 (dockerd)[1624]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:42:22.898865 dockerd[1624]: time="2025-01-30T13:42:22.898687409Z" level=info msg="Starting up" Jan 30 13:42:23.891812 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:42:23.908421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:24.052857 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:42:24.057147 (kubelet)[1655]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:24.626188 kubelet[1655]: E0130 13:42:24.626128 1655 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:24.633342 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:24.633544 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:42:24.794104 dockerd[1624]: time="2025-01-30T13:42:24.794021227Z" level=info msg="Loading containers: start." Jan 30 13:42:24.918275 kernel: Initializing XFRM netlink socket Jan 30 13:42:25.003822 systemd-networkd[1390]: docker0: Link UP Jan 30 13:42:25.038734 dockerd[1624]: time="2025-01-30T13:42:25.038676162Z" level=info msg="Loading containers: done." Jan 30 13:42:25.054673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3281408129-merged.mount: Deactivated successfully. 
Jan 30 13:42:25.059136 dockerd[1624]: time="2025-01-30T13:42:25.059069808Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:42:25.059253 dockerd[1624]: time="2025-01-30T13:42:25.059208478Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:42:25.059417 dockerd[1624]: time="2025-01-30T13:42:25.059367726Z" level=info msg="Daemon has completed initialization" Jan 30 13:42:25.097423 dockerd[1624]: time="2025-01-30T13:42:25.097350478Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:42:25.097556 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:42:25.749566 containerd[1479]: time="2025-01-30T13:42:25.749525539Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:42:27.764488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4249325779.mount: Deactivated successfully. 
Jan 30 13:42:29.210922 containerd[1479]: time="2025-01-30T13:42:29.210869624Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:29.211569 containerd[1479]: time="2025-01-30T13:42:29.211533008Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976721" Jan 30 13:42:29.212888 containerd[1479]: time="2025-01-30T13:42:29.212844117Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:29.215397 containerd[1479]: time="2025-01-30T13:42:29.215373390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:29.216492 containerd[1479]: time="2025-01-30T13:42:29.216462523Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 3.466897731s" Jan 30 13:42:29.216538 containerd[1479]: time="2025-01-30T13:42:29.216494303Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:42:29.217841 containerd[1479]: time="2025-01-30T13:42:29.217815641Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:42:30.946139 containerd[1479]: time="2025-01-30T13:42:30.946076717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.947275 containerd[1479]: time="2025-01-30T13:42:30.947194223Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701143" Jan 30 13:42:30.948944 containerd[1479]: time="2025-01-30T13:42:30.948890895Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.951757 containerd[1479]: time="2025-01-30T13:42:30.951725250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:30.952797 containerd[1479]: time="2025-01-30T13:42:30.952755121Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.734908012s" Jan 30 13:42:30.952797 containerd[1479]: time="2025-01-30T13:42:30.952791630Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:42:30.953335 containerd[1479]: time="2025-01-30T13:42:30.953305654Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:42:32.350057 containerd[1479]: time="2025-01-30T13:42:32.349987641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:32.350881 containerd[1479]: time="2025-01-30T13:42:32.350840490Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652053" Jan 30 13:42:32.352011 containerd[1479]: time="2025-01-30T13:42:32.351962424Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:32.356406 containerd[1479]: time="2025-01-30T13:42:32.354883292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:32.356483 containerd[1479]: time="2025-01-30T13:42:32.356415886Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.40300223s" Jan 30 13:42:32.356483 containerd[1479]: time="2025-01-30T13:42:32.356448307Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:42:32.357061 containerd[1479]: time="2025-01-30T13:42:32.357021592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:42:34.064624 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13666657.mount: Deactivated successfully. 
Jan 30 13:42:34.405630 containerd[1479]: time="2025-01-30T13:42:34.405505691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:34.406475 containerd[1479]: time="2025-01-30T13:42:34.406441556Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 30 13:42:34.407600 containerd[1479]: time="2025-01-30T13:42:34.407569151Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:34.410161 containerd[1479]: time="2025-01-30T13:42:34.410109826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:34.410693 containerd[1479]: time="2025-01-30T13:42:34.410660999Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 2.053436356s" Jan 30 13:42:34.410743 containerd[1479]: time="2025-01-30T13:42:34.410696345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:42:34.411207 containerd[1479]: time="2025-01-30T13:42:34.411174893Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:42:34.661094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:42:34.671387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 30 13:42:34.821817 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:34.827088 (kubelet)[1866]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:42:35.408510 kubelet[1866]: E0130 13:42:35.408406 1866 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:42:35.412466 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:42:35.412671 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:42:35.745052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111866671.mount: Deactivated successfully. Jan 30 13:42:37.672891 containerd[1479]: time="2025-01-30T13:42:37.672823919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:37.673790 containerd[1479]: time="2025-01-30T13:42:37.673743714Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:42:37.675034 containerd[1479]: time="2025-01-30T13:42:37.675002104Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:37.678250 containerd[1479]: time="2025-01-30T13:42:37.678200282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:37.679203 containerd[1479]: 
time="2025-01-30T13:42:37.679161835Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.26795923s" Jan 30 13:42:37.679203 containerd[1479]: time="2025-01-30T13:42:37.679197622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:42:37.679825 containerd[1479]: time="2025-01-30T13:42:37.679652976Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:42:38.273969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119348626.mount: Deactivated successfully. Jan 30 13:42:38.279674 containerd[1479]: time="2025-01-30T13:42:38.279613226Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:38.280549 containerd[1479]: time="2025-01-30T13:42:38.280500860Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 30 13:42:38.281605 containerd[1479]: time="2025-01-30T13:42:38.281547734Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:38.283704 containerd[1479]: time="2025-01-30T13:42:38.283658231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:38.284386 containerd[1479]: time="2025-01-30T13:42:38.284337956Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 604.654032ms" Jan 30 13:42:38.284386 containerd[1479]: time="2025-01-30T13:42:38.284374445Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:42:38.284862 containerd[1479]: time="2025-01-30T13:42:38.284769896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:42:38.829755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1094107844.mount: Deactivated successfully. Jan 30 13:42:40.869485 containerd[1479]: time="2025-01-30T13:42:40.869408416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:40.870615 containerd[1479]: time="2025-01-30T13:42:40.870575475Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Jan 30 13:42:40.872143 containerd[1479]: time="2025-01-30T13:42:40.872113138Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:40.875627 containerd[1479]: time="2025-01-30T13:42:40.875584679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:42:40.876794 containerd[1479]: time="2025-01-30T13:42:40.876761105Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.591961604s"
Jan 30 13:42:40.876794 containerd[1479]: time="2025-01-30T13:42:40.876791732Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Jan 30 13:42:43.453542 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:42:43.464439 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:42:43.487762 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit session-5.scope)...
Jan 30 13:42:43.487785 systemd[1]: Reloading...
Jan 30 13:42:43.560271 zram_generator::config[2047]: No configuration found.
Jan 30 13:42:43.780958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 30 13:42:43.857692 systemd[1]: Reloading finished in 369 ms.
Jan 30 13:42:43.905266 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 30 13:42:43.905363 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 30 13:42:43.905689 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:42:43.907652 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 30 13:42:44.060890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 30 13:42:44.066117 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 30 13:42:44.100042 kubelet[2094]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:42:44.100042 kubelet[2094]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 13:42:44.100042 kubelet[2094]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 13:42:44.100452 kubelet[2094]: I0130 13:42:44.100094 2094 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 13:42:44.677549 kubelet[2094]: I0130 13:42:44.677503 2094 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 30 13:42:44.677549 kubelet[2094]: I0130 13:42:44.677535 2094 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 30 13:42:44.677769 kubelet[2094]: I0130 13:42:44.677753 2094 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 30 13:42:44.698613 kubelet[2094]: E0130 13:42:44.698579 2094 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:44.700849 kubelet[2094]: I0130 13:42:44.700831 2094 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 30 13:42:44.707497 kubelet[2094]: E0130 13:42:44.707428 2094 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 30 13:42:44.707497 kubelet[2094]: I0130 13:42:44.707496 2094 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 30 13:42:44.714814 kubelet[2094]: I0130 13:42:44.714789 2094 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 30 13:42:44.715938 kubelet[2094]: I0130 13:42:44.715913 2094 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 30 13:42:44.716130 kubelet[2094]: I0130 13:42:44.716095 2094 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 30 13:42:44.716303 kubelet[2094]: I0130 13:42:44.716123 2094 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 30 13:42:44.716381 kubelet[2094]: I0130 13:42:44.716312 2094 topology_manager.go:138] "Creating topology manager with none policy"
Jan 30 13:42:44.716381 kubelet[2094]: I0130 13:42:44.716321 2094 container_manager_linux.go:300] "Creating device plugin manager"
Jan 30 13:42:44.716449 kubelet[2094]: I0130 13:42:44.716434 2094 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:42:44.717836 kubelet[2094]: I0130 13:42:44.717805 2094 kubelet.go:408] "Attempting to sync node with API server"
Jan 30 13:42:44.717836 kubelet[2094]: I0130 13:42:44.717825 2094 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 30 13:42:44.717901 kubelet[2094]: I0130 13:42:44.717854 2094 kubelet.go:314] "Adding apiserver pod source"
Jan 30 13:42:44.717901 kubelet[2094]: I0130 13:42:44.717864 2094 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 30 13:42:44.720802 kubelet[2094]: W0130 13:42:44.720765 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:44.720910 kubelet[2094]: E0130 13:42:44.720878 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:44.721211 kubelet[2094]: W0130 13:42:44.721140 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:44.721285 kubelet[2094]: E0130 13:42:44.721230 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:44.722585 kubelet[2094]: I0130 13:42:44.722565 2094 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 30 13:42:44.723924 kubelet[2094]: I0130 13:42:44.723905 2094 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 30 13:42:44.724408 kubelet[2094]: W0130 13:42:44.724390 2094 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 30 13:42:44.724956 kubelet[2094]: I0130 13:42:44.724935 2094 server.go:1269] "Started kubelet"
Jan 30 13:42:44.726270 kubelet[2094]: I0130 13:42:44.725222 2094 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 30 13:42:44.726270 kubelet[2094]: I0130 13:42:44.725558 2094 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 30 13:42:44.726270 kubelet[2094]: I0130 13:42:44.725608 2094 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 30 13:42:44.726270 kubelet[2094]: I0130 13:42:44.726255 2094 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 30 13:42:44.726556 kubelet[2094]: I0130 13:42:44.726453 2094 server.go:460] "Adding debug handlers to kubelet server"
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.726874 2094 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 30 13:42:44.729042 kubelet[2094]: E0130 13:42:44.727194 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.727219 2094 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.727368 2094 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.727427 2094 reconciler.go:26] "Reconciler: start to sync state"
Jan 30 13:42:44.729042 kubelet[2094]: W0130 13:42:44.727652 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:44.729042 kubelet[2094]: E0130 13:42:44.727685 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:44.729042 kubelet[2094]: E0130 13:42:44.727841 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="200ms"
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.728254 2094 factory.go:221] Registration of the systemd container factory successfully
Jan 30 13:42:44.729042 kubelet[2094]: I0130 13:42:44.728323 2094 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 30 13:42:44.729042 kubelet[2094]: E0130 13:42:44.728990 2094 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 30 13:42:44.729565 kubelet[2094]: I0130 13:42:44.729552 2094 factory.go:221] Registration of the containerd container factory successfully
Jan 30 13:42:44.733836 kubelet[2094]: E0130 13:42:44.729427 2094 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.30:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.30:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7c38d52e9095 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:42:44.724912277 +0000 UTC m=+0.654560080,LastTimestamp:2025-01-30 13:42:44.724912277 +0000 UTC m=+0.654560080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 30 13:42:44.747913 kubelet[2094]: I0130 13:42:44.747864 2094 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 30 13:42:44.747913 kubelet[2094]: I0130 13:42:44.747897 2094 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 30 13:42:44.747913 kubelet[2094]: I0130 13:42:44.747915 2094 state_mem.go:36] "Initialized new in-memory state store"
Jan 30 13:42:44.748821 kubelet[2094]: I0130 13:42:44.748784 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 30 13:42:44.750068 kubelet[2094]: I0130 13:42:44.750048 2094 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 30 13:42:44.750321 kubelet[2094]: I0130 13:42:44.750154 2094 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 30 13:42:44.750387 kubelet[2094]: I0130 13:42:44.750376 2094 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 30 13:42:44.750504 kubelet[2094]: E0130 13:42:44.750460 2094 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 30 13:42:44.750883 kubelet[2094]: W0130 13:42:44.750850 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:44.750915 kubelet[2094]: E0130 13:42:44.750887 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:44.828217 kubelet[2094]: E0130 13:42:44.828141 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:44.851536 kubelet[2094]: E0130 13:42:44.851493 2094 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:42:44.928949 kubelet[2094]: E0130 13:42:44.928830 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:44.929217 kubelet[2094]: E0130 13:42:44.929183 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="400ms"
Jan 30 13:42:45.028993 kubelet[2094]: E0130 13:42:45.028950 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:45.052185 kubelet[2094]: E0130 13:42:45.052124 2094 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 30 13:42:45.129444 kubelet[2094]: E0130 13:42:45.129422 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:45.230330 kubelet[2094]: E0130 13:42:45.230228 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:45.302273 kubelet[2094]: I0130 13:42:45.302212 2094 policy_none.go:49] "None policy: Start"
Jan 30 13:42:45.303253 kubelet[2094]: I0130 13:42:45.303210 2094 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 30 13:42:45.303334 kubelet[2094]: I0130 13:42:45.303273 2094 state_mem.go:35] "Initializing new in-memory state store"
Jan 30 13:42:45.312188 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 30 13:42:45.326385 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 30 13:42:45.329252 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 30 13:42:45.329682 kubelet[2094]: E0130 13:42:45.329645 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="800ms"
Jan 30 13:42:45.331010 kubelet[2094]: E0130 13:42:45.330978 2094 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 30 13:42:45.345444 kubelet[2094]: I0130 13:42:45.345377 2094 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 30 13:42:45.345706 kubelet[2094]: I0130 13:42:45.345686 2094 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 30 13:42:45.345835 kubelet[2094]: I0130 13:42:45.345708 2094 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 30 13:42:45.345925 kubelet[2094]: I0130 13:42:45.345904 2094 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 30 13:42:45.346784 kubelet[2094]: E0130 13:42:45.346761 2094 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 30 13:42:45.448123 kubelet[2094]: I0130 13:42:45.448073 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:42:45.448492 kubelet[2094]: E0130 13:42:45.448454 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Jan 30 13:42:45.463399 systemd[1]: Created slice kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice - libcontainer container kubepods-burstable-podc988230cd0d49eebfaffbefbe8c74a10.slice.
Jan 30 13:42:45.475562 systemd[1]: Created slice kubepods-burstable-pod9a2efcbec7fdc95a7087efb9be4fdc33.slice - libcontainer container kubepods-burstable-pod9a2efcbec7fdc95a7087efb9be4fdc33.slice.
Jan 30 13:42:45.486115 systemd[1]: Created slice kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice - libcontainer container kubepods-burstable-podfa5289f3c0ba7f1736282e713231ffc5.slice.
Jan 30 13:42:45.532780 kubelet[2094]: I0130 13:42:45.532706 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:42:45.532780 kubelet[2094]: I0130 13:42:45.532756 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:42:45.532780 kubelet[2094]: I0130 13:42:45.532776 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:42:45.533071 kubelet[2094]: I0130 13:42:45.532795 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:42:45.533071 kubelet[2094]: I0130 13:42:45.532816 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost"
Jan 30 13:42:45.533071 kubelet[2094]: I0130 13:42:45.532833 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:42:45.533071 kubelet[2094]: I0130 13:42:45.532849 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:42:45.533071 kubelet[2094]: I0130 13:42:45.532915 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost"
Jan 30 13:42:45.533204 kubelet[2094]: I0130 13:42:45.532969 2094 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost"
Jan 30 13:42:45.537898 kubelet[2094]: W0130 13:42:45.537817 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:45.537963 kubelet[2094]: E0130 13:42:45.537911 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.30:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:45.565519 kubelet[2094]: W0130 13:42:45.565389 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:45.565519 kubelet[2094]: E0130 13:42:45.565503 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.30:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:45.649650 kubelet[2094]: I0130 13:42:45.649619 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:42:45.650004 kubelet[2094]: E0130 13:42:45.649976 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Jan 30 13:42:45.661473 kubelet[2094]: W0130 13:42:45.661414 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:45.661544 kubelet[2094]: E0130 13:42:45.661473 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.30:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:45.772941 kubelet[2094]: E0130 13:42:45.772789 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:45.773588 containerd[1479]: time="2025-01-30T13:42:45.773552676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,}"
Jan 30 13:42:45.783753 kubelet[2094]: E0130 13:42:45.783721 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:45.784096 containerd[1479]: time="2025-01-30T13:42:45.784064448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a2efcbec7fdc95a7087efb9be4fdc33,Namespace:kube-system,Attempt:0,}"
Jan 30 13:42:45.789326 kubelet[2094]: E0130 13:42:45.789296 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:45.789613 containerd[1479]: time="2025-01-30T13:42:45.789585800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,}"
Jan 30 13:42:46.051483 kubelet[2094]: I0130 13:42:46.051372 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:42:46.051735 kubelet[2094]: E0130 13:42:46.051707 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Jan 30 13:42:46.130691 kubelet[2094]: E0130 13:42:46.130622 2094 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.30:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.30:6443: connect: connection refused" interval="1.6s"
Jan 30 13:42:46.166622 kubelet[2094]: W0130 13:42:46.166569 2094 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.30:6443: connect: connection refused
Jan 30 13:42:46.166622 kubelet[2094]: E0130 13:42:46.166626 2094 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.30:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:46.718775 kubelet[2094]: E0130 13:42:46.718711 2094 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.30:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.30:6443: connect: connection refused" logger="UnhandledError"
Jan 30 13:42:46.854009 kubelet[2094]: I0130 13:42:46.853959 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 30 13:42:46.854353 kubelet[2094]: E0130 13:42:46.854329 2094 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.30:6443/api/v1/nodes\": dial tcp 10.0.0.30:6443: connect: connection refused" node="localhost"
Jan 30 13:42:47.229422 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327274688.mount: Deactivated successfully.
Jan 30 13:42:47.238630 containerd[1479]: time="2025-01-30T13:42:47.238569229Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:42:47.239618 containerd[1479]: time="2025-01-30T13:42:47.239583506Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:42:47.240657 containerd[1479]: time="2025-01-30T13:42:47.240585198Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:42:47.241791 containerd[1479]: time="2025-01-30T13:42:47.241752548Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:42:47.242672 containerd[1479]: time="2025-01-30T13:42:47.242628049Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Jan 30 13:42:47.243696 containerd[1479]: time="2025-01-30T13:42:47.243652465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 30 13:42:47.244685 containerd[1479]: time="2025-01-30T13:42:47.244654219Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:42:47.249012 containerd[1479]: time="2025-01-30T13:42:47.248973239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 30 13:42:47.249979 containerd[1479]: time="2025-01-30T13:42:47.249931168Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.465800582s"
Jan 30 13:42:47.252867 containerd[1479]: time="2025-01-30T13:42:47.252829451Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.463194156s"
Jan 30 13:42:47.253477 containerd[1479]: time="2025-01-30T13:42:47.253454391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.479813553s"
Jan 30 13:42:47.384964 containerd[1479]: time="2025-01-30T13:42:47.384844069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:42:47.384964 containerd[1479]: time="2025-01-30T13:42:47.384904486Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:42:47.384964 containerd[1479]: time="2025-01-30T13:42:47.384917540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.385158 containerd[1479]: time="2025-01-30T13:42:47.384991872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.385767 containerd[1479]: time="2025-01-30T13:42:47.385499337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:42:47.385767 containerd[1479]: time="2025-01-30T13:42:47.385603817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:42:47.385767 containerd[1479]: time="2025-01-30T13:42:47.385681727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.385950 containerd[1479]: time="2025-01-30T13:42:47.385859929Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.388118 containerd[1479]: time="2025-01-30T13:42:47.387997261Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:42:47.388118 containerd[1479]: time="2025-01-30T13:42:47.388049913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:42:47.388118 containerd[1479]: time="2025-01-30T13:42:47.388066164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.388366 containerd[1479]: time="2025-01-30T13:42:47.388144575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:42:47.414430 systemd[1]: Started cri-containerd-0b9b0a044ccd086c9b4a86d29953e72c4ca9ebcdf22009dee6ab4fc01c89e187.scope - libcontainer container 0b9b0a044ccd086c9b4a86d29953e72c4ca9ebcdf22009dee6ab4fc01c89e187.
Jan 30 13:42:47.416314 systemd[1]: Started cri-containerd-2b31027b635c0ff99dacd7ef7a5a8a981177e33f7ea91773d2883fc6812a4f59.scope - libcontainer container 2b31027b635c0ff99dacd7ef7a5a8a981177e33f7ea91773d2883fc6812a4f59.
Jan 30 13:42:47.419088 systemd[1]: Started cri-containerd-48f74cc32b8d67c3dd980403ba0f0162ce2233665222998f13fb0557328ef227.scope - libcontainer container 48f74cc32b8d67c3dd980403ba0f0162ce2233665222998f13fb0557328ef227.
Jan 30 13:42:47.460874 containerd[1479]: time="2025-01-30T13:42:47.460808551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9a2efcbec7fdc95a7087efb9be4fdc33,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b9b0a044ccd086c9b4a86d29953e72c4ca9ebcdf22009dee6ab4fc01c89e187\""
Jan 30 13:42:47.463419 kubelet[2094]: E0130 13:42:47.462989 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:47.466015 containerd[1479]: time="2025-01-30T13:42:47.465975399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c988230cd0d49eebfaffbefbe8c74a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"2b31027b635c0ff99dacd7ef7a5a8a981177e33f7ea91773d2883fc6812a4f59\""
Jan 30 13:42:47.466170 containerd[1479]: time="2025-01-30T13:42:47.466141437Z" level=info msg="CreateContainer within sandbox \"0b9b0a044ccd086c9b4a86d29953e72c4ca9ebcdf22009dee6ab4fc01c89e187\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 30 13:42:47.466637 kubelet[2094]: E0130 13:42:47.466614 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:47.468399 containerd[1479]: time="2025-01-30T13:42:47.468372470Z" level=info msg="CreateContainer within sandbox \"2b31027b635c0ff99dacd7ef7a5a8a981177e33f7ea91773d2883fc6812a4f59\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 30 13:42:47.469282 containerd[1479]: time="2025-01-30T13:42:47.469124593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fa5289f3c0ba7f1736282e713231ffc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"48f74cc32b8d67c3dd980403ba0f0162ce2233665222998f13fb0557328ef227\""
Jan 30 13:42:47.469981 kubelet[2094]: E0130 13:42:47.469939 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:42:47.471663 containerd[1479]: time="2025-01-30T13:42:47.471625765Z" level=info msg="CreateContainer within sandbox \"48f74cc32b8d67c3dd980403ba0f0162ce2233665222998f13fb0557328ef227\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 30 13:42:47.500946 containerd[1479]: time="2025-01-30T13:42:47.500805083Z" level=info msg="CreateContainer within sandbox \"2b31027b635c0ff99dacd7ef7a5a8a981177e33f7ea91773d2883fc6812a4f59\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fe76a27c629f50c6b8b7355bc9f35af66864b8b92ac143a2f19cfca2b11e2f21\""
Jan 30 13:42:47.501649 containerd[1479]: time="2025-01-30T13:42:47.501612593Z" level=info msg="StartContainer for \"fe76a27c629f50c6b8b7355bc9f35af66864b8b92ac143a2f19cfca2b11e2f21\""
Jan 30 13:42:47.506813 containerd[1479]: time="2025-01-30T13:42:47.506764673Z" level=info msg="CreateContainer within sandbox \"0b9b0a044ccd086c9b4a86d29953e72c4ca9ebcdf22009dee6ab4fc01c89e187\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0b651fb21fbb31da5963a51d40b77e693d249168703ed98b0d43b46db87b6c5d\""
Jan 30 13:42:47.507351 containerd[1479]: time="2025-01-30T13:42:47.507318535Z" level=info msg="StartContainer for \"0b651fb21fbb31da5963a51d40b77e693d249168703ed98b0d43b46db87b6c5d\""
Jan 30 13:42:47.507643 containerd[1479]: time="2025-01-30T13:42:47.507609103Z" level=info msg="CreateContainer within sandbox \"48f74cc32b8d67c3dd980403ba0f0162ce2233665222998f13fb0557328ef227\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"44847a7b0698c6eac026fa9444af7496d59233005254238eb24238531c593285\""
Jan 30 13:42:47.508022 containerd[1479]: time="2025-01-30T13:42:47.507998440Z" level=info msg="StartContainer for \"44847a7b0698c6eac026fa9444af7496d59233005254238eb24238531c593285\""
Jan 30 13:42:47.531326 systemd[1]: Started cri-containerd-fe76a27c629f50c6b8b7355bc9f35af66864b8b92ac143a2f19cfca2b11e2f21.scope - libcontainer container fe76a27c629f50c6b8b7355bc9f35af66864b8b92ac143a2f19cfca2b11e2f21.
Jan 30 13:42:47.535967 systemd[1]: Started cri-containerd-0b651fb21fbb31da5963a51d40b77e693d249168703ed98b0d43b46db87b6c5d.scope - libcontainer container 0b651fb21fbb31da5963a51d40b77e693d249168703ed98b0d43b46db87b6c5d.
Jan 30 13:42:47.537598 systemd[1]: Started cri-containerd-44847a7b0698c6eac026fa9444af7496d59233005254238eb24238531c593285.scope - libcontainer container 44847a7b0698c6eac026fa9444af7496d59233005254238eb24238531c593285.
Jan 30 13:42:47.825870 containerd[1479]: time="2025-01-30T13:42:47.825596011Z" level=info msg="StartContainer for \"0b651fb21fbb31da5963a51d40b77e693d249168703ed98b0d43b46db87b6c5d\" returns successfully" Jan 30 13:42:47.825870 containerd[1479]: time="2025-01-30T13:42:47.825712765Z" level=info msg="StartContainer for \"fe76a27c629f50c6b8b7355bc9f35af66864b8b92ac143a2f19cfca2b11e2f21\" returns successfully" Jan 30 13:42:47.825870 containerd[1479]: time="2025-01-30T13:42:47.825758753Z" level=info msg="StartContainer for \"44847a7b0698c6eac026fa9444af7496d59233005254238eb24238531c593285\" returns successfully" Jan 30 13:42:47.830190 kubelet[2094]: E0130 13:42:47.830113 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:47.836251 kubelet[2094]: E0130 13:42:47.834434 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:47.839773 kubelet[2094]: E0130 13:42:47.839757 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:48.457870 kubelet[2094]: I0130 13:42:48.456069 2094 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:42:48.511066 kubelet[2094]: E0130 13:42:48.511017 2094 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:42:48.608796 kubelet[2094]: I0130 13:42:48.608761 2094 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:42:48.723306 kubelet[2094]: I0130 13:42:48.723186 2094 apiserver.go:52] "Watching apiserver" Jan 30 13:42:48.727699 kubelet[2094]: I0130 13:42:48.727664 2094 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:42:48.776939 kubelet[2094]: E0130 13:42:48.776846 2094 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f7c38d52e9095 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:42:44.724912277 +0000 UTC m=+0.654560080,LastTimestamp:2025-01-30 13:42:44.724912277 +0000 UTC m=+0.654560080,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:42:48.831154 kubelet[2094]: E0130 13:42:48.831045 2094 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.181f7c38d56cac83 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:42:44.728982659 +0000 UTC m=+0.658630462,LastTimestamp:2025-01-30 13:42:44.728982659 +0000 UTC m=+0.658630462,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:42:48.837958 kubelet[2094]: E0130 13:42:48.837911 2094 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 30 
13:42:48.838087 kubelet[2094]: E0130 13:42:48.838065 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:48.838325 kubelet[2094]: E0130 13:42:48.838307 2094 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:48.838460 kubelet[2094]: E0130 13:42:48.838447 2094 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:50.502940 systemd[1]: Reloading requested from client PID 2366 ('systemctl') (unit session-5.scope)... Jan 30 13:42:50.502963 systemd[1]: Reloading... Jan 30 13:42:50.580290 zram_generator::config[2408]: No configuration found. Jan 30 13:42:50.690405 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:42:50.786923 systemd[1]: Reloading finished in 283 ms. Jan 30 13:42:50.833500 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:50.855780 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:42:50.856102 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:42:50.856169 systemd[1]: kubelet.service: Consumed 1.145s CPU time, 121.1M memory peak, 0B memory swap peak. Jan 30 13:42:50.870718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:42:51.028272 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:42:51.034625 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:42:51.075337 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:51.075337 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:42:51.075337 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:42:51.075693 kubelet[2450]: I0130 13:42:51.075318 2450 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:42:51.084169 kubelet[2450]: I0130 13:42:51.084130 2450 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:42:51.084169 kubelet[2450]: I0130 13:42:51.084164 2450 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:42:51.084506 kubelet[2450]: I0130 13:42:51.084483 2450 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:42:51.086146 kubelet[2450]: I0130 13:42:51.086119 2450 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 30 13:42:51.088642 kubelet[2450]: I0130 13:42:51.088598 2450 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:42:51.092084 kubelet[2450]: E0130 13:42:51.092041 2450 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:42:51.092084 kubelet[2450]: I0130 13:42:51.092074 2450 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:42:51.098065 kubelet[2450]: I0130 13:42:51.098032 2450 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:42:51.098468 kubelet[2450]: I0130 13:42:51.098200 2450 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:42:51.098629 kubelet[2450]: I0130 13:42:51.098602 2450 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:42:51.098805 kubelet[2450]: I0130 13:42:51.098630 2450 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:42:51.098956 kubelet[2450]: I0130 13:42:51.098813 2450 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:42:51.098956 kubelet[2450]: I0130 13:42:51.098823 2450 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:42:51.098956 kubelet[2450]: I0130 13:42:51.098852 2450 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:51.099025 kubelet[2450]: I0130 13:42:51.098961 2450 kubelet.go:408] "Attempting 
to sync node with API server" Jan 30 13:42:51.099025 kubelet[2450]: I0130 13:42:51.098975 2450 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:42:51.099025 kubelet[2450]: I0130 13:42:51.099004 2450 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:42:51.099025 kubelet[2450]: I0130 13:42:51.099017 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:42:51.099645 kubelet[2450]: I0130 13:42:51.099489 2450 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:42:51.099837 kubelet[2450]: I0130 13:42:51.099819 2450 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:42:51.100174 kubelet[2450]: I0130 13:42:51.100161 2450 server.go:1269] "Started kubelet" Jan 30 13:42:51.101174 kubelet[2450]: I0130 13:42:51.101099 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:42:51.101759 kubelet[2450]: I0130 13:42:51.101712 2450 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:42:51.101944 kubelet[2450]: I0130 13:42:51.101923 2450 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:42:51.102178 kubelet[2450]: I0130 13:42:51.102133 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:42:51.103119 kubelet[2450]: I0130 13:42:51.103092 2450 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:42:51.105437 kubelet[2450]: E0130 13:42:51.105413 2450 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:42:51.105738 kubelet[2450]: I0130 13:42:51.105658 2450 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:42:51.111096 kubelet[2450]: I0130 13:42:51.111079 2450 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:42:51.111341 kubelet[2450]: I0130 13:42:51.111329 2450 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:42:51.111639 kubelet[2450]: I0130 13:42:51.111558 2450 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:42:51.111981 kubelet[2450]: I0130 13:42:51.111967 2450 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:42:51.112158 kubelet[2450]: I0130 13:42:51.112139 2450 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:42:51.113680 kubelet[2450]: I0130 13:42:51.113655 2450 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:42:51.118256 kubelet[2450]: I0130 13:42:51.118181 2450 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:42:51.120092 kubelet[2450]: I0130 13:42:51.119620 2450 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:42:51.120092 kubelet[2450]: I0130 13:42:51.119661 2450 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:42:51.120092 kubelet[2450]: I0130 13:42:51.119687 2450 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:42:51.120092 kubelet[2450]: E0130 13:42:51.119766 2450 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:42:51.157020 kubelet[2450]: I0130 13:42:51.156995 2450 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:42:51.157170 kubelet[2450]: I0130 13:42:51.157158 2450 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:42:51.157228 kubelet[2450]: I0130 13:42:51.157220 2450 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:42:51.157464 kubelet[2450]: I0130 13:42:51.157428 2450 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:42:51.157464 kubelet[2450]: I0130 13:42:51.157443 2450 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:42:51.157464 kubelet[2450]: I0130 13:42:51.157460 2450 policy_none.go:49] "None policy: Start" Jan 30 13:42:51.159022 kubelet[2450]: I0130 13:42:51.158997 2450 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:42:51.159071 kubelet[2450]: I0130 13:42:51.159031 2450 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:42:51.159222 kubelet[2450]: I0130 13:42:51.159197 2450 state_mem.go:75] "Updated machine memory state" Jan 30 13:42:51.163514 kubelet[2450]: I0130 13:42:51.163488 2450 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:42:51.163682 kubelet[2450]: I0130 13:42:51.163660 2450 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:42:51.163717 kubelet[2450]: I0130 13:42:51.163679 2450 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:42:51.164608 kubelet[2450]: I0130 13:42:51.164200 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:42:51.271778 kubelet[2450]: I0130 13:42:51.271737 2450 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jan 30 13:42:51.412396 kubelet[2450]: I0130 13:42:51.412356 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:51.412396 kubelet[2450]: I0130 13:42:51.412396 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:51.412576 kubelet[2450]: I0130 13:42:51.412420 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:51.412576 kubelet[2450]: I0130 13:42:51.412436 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:51.412576 
kubelet[2450]: I0130 13:42:51.412455 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c988230cd0d49eebfaffbefbe8c74a10-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c988230cd0d49eebfaffbefbe8c74a10\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:42:51.412576 kubelet[2450]: I0130 13:42:51.412470 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:51.412576 kubelet[2450]: I0130 13:42:51.412485 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fa5289f3c0ba7f1736282e713231ffc5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fa5289f3c0ba7f1736282e713231ffc5\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:42:51.412806 kubelet[2450]: I0130 13:42:51.412502 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:51.412806 kubelet[2450]: I0130 13:42:51.412530 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a2efcbec7fdc95a7087efb9be4fdc33-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9a2efcbec7fdc95a7087efb9be4fdc33\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:51.482783 kubelet[2450]: I0130 13:42:51.482751 2450 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jan 30 13:42:51.482913 kubelet[2450]: I0130 13:42:51.482833 2450 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jan 30 13:42:51.727726 kubelet[2450]: E0130 13:42:51.727538 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:51.727726 kubelet[2450]: E0130 13:42:51.727572 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:51.727726 kubelet[2450]: E0130 13:42:51.727596 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:52.099945 kubelet[2450]: I0130 13:42:52.099806 2450 apiserver.go:52] "Watching apiserver" Jan 30 13:42:52.111599 kubelet[2450]: I0130 13:42:52.111539 2450 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:42:52.134034 kubelet[2450]: E0130 13:42:52.133995 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:52.134034 kubelet[2450]: E0130 13:42:52.134035 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:52.140494 kubelet[2450]: E0130 13:42:52.139899 2450 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:42:52.140494 kubelet[2450]: E0130 13:42:52.140092 2450 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:52.157653 kubelet[2450]: I0130 13:42:52.157580 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.157562027 podStartE2EDuration="1.157562027s" podCreationTimestamp="2025-01-30 13:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:52.150611871 +0000 UTC m=+1.111401130" watchObservedRunningTime="2025-01-30 13:42:52.157562027 +0000 UTC m=+1.118351286" Jan 30 13:42:52.164490 kubelet[2450]: I0130 13:42:52.164422 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1644018919999999 podStartE2EDuration="1.164401892s" podCreationTimestamp="2025-01-30 13:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:52.157887027 +0000 UTC m=+1.118676286" watchObservedRunningTime="2025-01-30 13:42:52.164401892 +0000 UTC m=+1.125191152" Jan 30 13:42:52.467247 sudo[1606]: pam_unix(sudo:session): session closed for user root Jan 30 13:42:52.469110 sshd[1603]: pam_unix(sshd:session): session closed for user core Jan 30 13:42:52.473844 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:53278.service: Deactivated successfully. Jan 30 13:42:52.475581 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:42:52.475773 systemd[1]: session-5.scope: Consumed 3.812s CPU time, 160.5M memory peak, 0B memory swap peak. Jan 30 13:42:52.476256 systemd-logind[1452]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:42:52.477004 systemd-logind[1452]: Removed session 5. 
Jan 30 13:42:53.135161 kubelet[2450]: E0130 13:42:53.135132 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:55.336632 kubelet[2450]: E0130 13:42:55.336576 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:55.399654 kubelet[2450]: I0130 13:42:55.399608 2450 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:42:55.399966 containerd[1479]: time="2025-01-30T13:42:55.399928462Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:42:55.400432 kubelet[2450]: I0130 13:42:55.400149 2450 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:42:56.431675 update_engine[1457]: I20250130 13:42:56.431602 1457 update_attempter.cc:509] Updating boot flags... 
Jan 30 13:42:56.469740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2525) Jan 30 13:42:56.500111 kubelet[2450]: I0130 13:42:56.498783 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.498746955 podStartE2EDuration="5.498746955s" podCreationTimestamp="2025-01-30 13:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:52.164571547 +0000 UTC m=+1.125360806" watchObservedRunningTime="2025-01-30 13:42:56.498746955 +0000 UTC m=+5.459536214" Jan 30 13:42:56.515925 systemd[1]: Created slice kubepods-besteffort-podaef72e3a_0c8a_403d_8958_b8efcc800550.slice - libcontainer container kubepods-besteffort-podaef72e3a_0c8a_403d_8958_b8efcc800550.slice. Jan 30 13:42:56.517562 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2525) Jan 30 13:42:56.541428 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2525) Jan 30 13:42:56.545302 kubelet[2450]: I0130 13:42:56.545076 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aef72e3a-0c8a-403d-8958-b8efcc800550-xtables-lock\") pod \"kube-proxy-82x86\" (UID: \"aef72e3a-0c8a-403d-8958-b8efcc800550\") " pod="kube-system/kube-proxy-82x86" Jan 30 13:42:56.545302 kubelet[2450]: I0130 13:42:56.545119 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/aef72e3a-0c8a-403d-8958-b8efcc800550-kube-proxy\") pod \"kube-proxy-82x86\" (UID: \"aef72e3a-0c8a-403d-8958-b8efcc800550\") " pod="kube-system/kube-proxy-82x86" Jan 30 13:42:56.545302 kubelet[2450]: I0130 13:42:56.545141 2450 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aef72e3a-0c8a-403d-8958-b8efcc800550-lib-modules\") pod \"kube-proxy-82x86\" (UID: \"aef72e3a-0c8a-403d-8958-b8efcc800550\") " pod="kube-system/kube-proxy-82x86" Jan 30 13:42:56.545302 kubelet[2450]: I0130 13:42:56.545158 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxq9q\" (UniqueName: \"kubernetes.io/projected/aef72e3a-0c8a-403d-8958-b8efcc800550-kube-api-access-cxq9q\") pod \"kube-proxy-82x86\" (UID: \"aef72e3a-0c8a-403d-8958-b8efcc800550\") " pod="kube-system/kube-proxy-82x86" Jan 30 13:42:56.595523 systemd[1]: Created slice kubepods-burstable-pod82d75570_9919_40f8_b39a_5a5bcbeca62f.slice - libcontainer container kubepods-burstable-pod82d75570_9919_40f8_b39a_5a5bcbeca62f.slice. Jan 30 13:42:56.645859 kubelet[2450]: I0130 13:42:56.645770 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/82d75570-9919-40f8-b39a-5a5bcbeca62f-flannel-cfg\") pod \"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.645859 kubelet[2450]: I0130 13:42:56.645809 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4mp\" (UniqueName: \"kubernetes.io/projected/82d75570-9919-40f8-b39a-5a5bcbeca62f-kube-api-access-qs4mp\") pod \"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.645859 kubelet[2450]: I0130 13:42:56.645831 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82d75570-9919-40f8-b39a-5a5bcbeca62f-xtables-lock\") pod 
\"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.645859 kubelet[2450]: I0130 13:42:56.645860 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/82d75570-9919-40f8-b39a-5a5bcbeca62f-cni-plugin\") pod \"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.645859 kubelet[2450]: I0130 13:42:56.645879 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/82d75570-9919-40f8-b39a-5a5bcbeca62f-cni\") pod \"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.646499 kubelet[2450]: I0130 13:42:56.645898 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/82d75570-9919-40f8-b39a-5a5bcbeca62f-run\") pod \"kube-flannel-ds-cgzst\" (UID: \"82d75570-9919-40f8-b39a-5a5bcbeca62f\") " pod="kube-flannel/kube-flannel-ds-cgzst" Jan 30 13:42:56.825948 kubelet[2450]: E0130 13:42:56.825635 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.826590 containerd[1479]: time="2025-01-30T13:42:56.826529885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82x86,Uid:aef72e3a-0c8a-403d-8958-b8efcc800550,Namespace:kube-system,Attempt:0,}" Jan 30 13:42:56.873009 containerd[1479]: time="2025-01-30T13:42:56.872892115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:56.873009 containerd[1479]: time="2025-01-30T13:42:56.872981104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:56.873009 containerd[1479]: time="2025-01-30T13:42:56.872998106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.873187 containerd[1479]: time="2025-01-30T13:42:56.873120819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.893433 systemd[1]: Started cri-containerd-570376e9a986c26efc8d22b569e44734953d6fcfee242d964b342b186794f780.scope - libcontainer container 570376e9a986c26efc8d22b569e44734953d6fcfee242d964b342b186794f780. Jan 30 13:42:56.900348 kubelet[2450]: E0130 13:42:56.900322 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.901055 containerd[1479]: time="2025-01-30T13:42:56.900987678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cgzst,Uid:82d75570-9919-40f8-b39a-5a5bcbeca62f,Namespace:kube-flannel,Attempt:0,}" Jan 30 13:42:56.919292 containerd[1479]: time="2025-01-30T13:42:56.919182157Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82x86,Uid:aef72e3a-0c8a-403d-8958-b8efcc800550,Namespace:kube-system,Attempt:0,} returns sandbox id \"570376e9a986c26efc8d22b569e44734953d6fcfee242d964b342b186794f780\"" Jan 30 13:42:56.920126 kubelet[2450]: E0130 13:42:56.920090 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.925846 containerd[1479]: 
time="2025-01-30T13:42:56.925755167Z" level=info msg="CreateContainer within sandbox \"570376e9a986c26efc8d22b569e44734953d6fcfee242d964b342b186794f780\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:42:56.931573 containerd[1479]: time="2025-01-30T13:42:56.931448965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:42:56.931573 containerd[1479]: time="2025-01-30T13:42:56.931532494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:42:56.931926 containerd[1479]: time="2025-01-30T13:42:56.931569164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.931926 containerd[1479]: time="2025-01-30T13:42:56.931770527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:42:56.950884 containerd[1479]: time="2025-01-30T13:42:56.950830130Z" level=info msg="CreateContainer within sandbox \"570376e9a986c26efc8d22b569e44734953d6fcfee242d964b342b186794f780\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d7f4a9b5031e8ffdee67af430f58a1ad8936137ba1ec676b3672aa62f01f8fb\"" Jan 30 13:42:56.951530 containerd[1479]: time="2025-01-30T13:42:56.951473122Z" level=info msg="StartContainer for \"7d7f4a9b5031e8ffdee67af430f58a1ad8936137ba1ec676b3672aa62f01f8fb\"" Jan 30 13:42:56.952526 systemd[1]: Started cri-containerd-f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3.scope - libcontainer container f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3. 
Jan 30 13:42:56.982496 systemd[1]: Started cri-containerd-7d7f4a9b5031e8ffdee67af430f58a1ad8936137ba1ec676b3672aa62f01f8fb.scope - libcontainer container 7d7f4a9b5031e8ffdee67af430f58a1ad8936137ba1ec676b3672aa62f01f8fb. Jan 30 13:42:56.991558 containerd[1479]: time="2025-01-30T13:42:56.991517798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-cgzst,Uid:82d75570-9919-40f8-b39a-5a5bcbeca62f,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\"" Jan 30 13:42:56.992561 kubelet[2450]: E0130 13:42:56.992519 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:56.994077 containerd[1479]: time="2025-01-30T13:42:56.994034059Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jan 30 13:42:57.029650 containerd[1479]: time="2025-01-30T13:42:57.029532684Z" level=info msg="StartContainer for \"7d7f4a9b5031e8ffdee67af430f58a1ad8936137ba1ec676b3672aa62f01f8fb\" returns successfully" Jan 30 13:42:57.142160 kubelet[2450]: E0130 13:42:57.142055 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:59.902598 kubelet[2450]: E0130 13:42:59.902545 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:42:59.915629 kubelet[2450]: I0130 13:42:59.915498 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82x86" podStartSLOduration=3.9154654840000003 podStartE2EDuration="3.915465484s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:42:57.399689947 +0000 UTC m=+6.360479206" watchObservedRunningTime="2025-01-30 13:42:59.915465484 +0000 UTC m=+8.876254743" Jan 30 13:43:00.140913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067761246.mount: Deactivated successfully. Jan 30 13:43:00.146441 kubelet[2450]: E0130 13:43:00.146408 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:00.181570 containerd[1479]: time="2025-01-30T13:43:00.181510215Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:00.182354 containerd[1479]: time="2025-01-30T13:43:00.182282548Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jan 30 13:43:00.183431 containerd[1479]: time="2025-01-30T13:43:00.183392360Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:00.185951 containerd[1479]: time="2025-01-30T13:43:00.185911533Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:00.186997 containerd[1479]: time="2025-01-30T13:43:00.186942656Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 3.19286826s" Jan 30 13:43:00.187062 
containerd[1479]: time="2025-01-30T13:43:00.186993021Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jan 30 13:43:00.189189 containerd[1479]: time="2025-01-30T13:43:00.189154928Z" level=info msg="CreateContainer within sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jan 30 13:43:00.201181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount9832152.mount: Deactivated successfully. Jan 30 13:43:00.202701 containerd[1479]: time="2025-01-30T13:43:00.202659251Z" level=info msg="CreateContainer within sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39\"" Jan 30 13:43:00.203072 containerd[1479]: time="2025-01-30T13:43:00.202977433Z" level=info msg="StartContainer for \"6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39\"" Jan 30 13:43:00.240392 systemd[1]: Started cri-containerd-6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39.scope - libcontainer container 6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39. Jan 30 13:43:00.362697 systemd[1]: cri-containerd-6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39.scope: Deactivated successfully. 
Jan 30 13:43:00.364748 containerd[1479]: time="2025-01-30T13:43:00.364712758Z" level=info msg="StartContainer for \"6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39\" returns successfully" Jan 30 13:43:00.420357 containerd[1479]: time="2025-01-30T13:43:00.420258375Z" level=info msg="shim disconnected" id=6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39 namespace=k8s.io Jan 30 13:43:00.420357 containerd[1479]: time="2025-01-30T13:43:00.420345791Z" level=warning msg="cleaning up after shim disconnected" id=6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39 namespace=k8s.io Jan 30 13:43:00.420357 containerd[1479]: time="2025-01-30T13:43:00.420358475Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:43:01.070905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6cd8e5b2d4ab8196cf09a29aee4d0e8d89b26d55674765e8bb4b23285210ae39-rootfs.mount: Deactivated successfully. Jan 30 13:43:01.149372 kubelet[2450]: E0130 13:43:01.149224 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:01.149911 containerd[1479]: time="2025-01-30T13:43:01.149845493Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jan 30 13:43:01.347092 kubelet[2450]: E0130 13:43:01.346946 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:02.150880 kubelet[2450]: E0130 13:43:02.150835 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:03.253084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787165638.mount: Deactivated successfully. 
Jan 30 13:43:04.019813 containerd[1479]: time="2025-01-30T13:43:04.019734070Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:04.020458 containerd[1479]: time="2025-01-30T13:43:04.020402132Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jan 30 13:43:04.021592 containerd[1479]: time="2025-01-30T13:43:04.021545794Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:04.024354 containerd[1479]: time="2025-01-30T13:43:04.024302244Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:43:04.025474 containerd[1479]: time="2025-01-30T13:43:04.025424225Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.875542543s" Jan 30 13:43:04.025474 containerd[1479]: time="2025-01-30T13:43:04.025470462Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jan 30 13:43:04.027864 containerd[1479]: time="2025-01-30T13:43:04.027821777Z" level=info msg="CreateContainer within sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:43:04.041450 containerd[1479]: time="2025-01-30T13:43:04.041407004Z" level=info msg="CreateContainer within 
sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009\"" Jan 30 13:43:04.041547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684381822.mount: Deactivated successfully. Jan 30 13:43:04.041911 containerd[1479]: time="2025-01-30T13:43:04.041883103Z" level=info msg="StartContainer for \"7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009\"" Jan 30 13:43:04.068381 systemd[1]: Started cri-containerd-7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009.scope - libcontainer container 7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009. Jan 30 13:43:04.096088 systemd[1]: cri-containerd-7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009.scope: Deactivated successfully. Jan 30 13:43:04.104329 containerd[1479]: time="2025-01-30T13:43:04.104288235Z" level=info msg="StartContainer for \"7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009\" returns successfully" Jan 30 13:43:04.142056 containerd[1479]: time="2025-01-30T13:43:04.141971880Z" level=info msg="shim disconnected" id=7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009 namespace=k8s.io Jan 30 13:43:04.142056 containerd[1479]: time="2025-01-30T13:43:04.142021763Z" level=warning msg="cleaning up after shim disconnected" id=7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009 namespace=k8s.io Jan 30 13:43:04.142056 containerd[1479]: time="2025-01-30T13:43:04.142031152Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:43:04.161649 kubelet[2450]: E0130 13:43:04.161615 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:04.164437 containerd[1479]: time="2025-01-30T13:43:04.164400933Z" level=info 
msg="CreateContainer within sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jan 30 13:43:04.175780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7775463557eb6b45f35ba2d78eb282da49f172c0f4db026e2825ef75256dd009-rootfs.mount: Deactivated successfully. Jan 30 13:43:04.178684 containerd[1479]: time="2025-01-30T13:43:04.178628965Z" level=info msg="CreateContainer within sandbox \"f87b3998c4d492ced923412970f60ab841ff38d887afc79b3291c66903af93b3\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"36ad89d825211ea736f87749af751cc489c14a38e98ae2afbe2c51346da6efb3\"" Jan 30 13:43:04.179159 containerd[1479]: time="2025-01-30T13:43:04.179131885Z" level=info msg="StartContainer for \"36ad89d825211ea736f87749af751cc489c14a38e98ae2afbe2c51346da6efb3\"" Jan 30 13:43:04.187252 kubelet[2450]: I0130 13:43:04.187174 2450 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:43:04.208443 systemd[1]: Started cri-containerd-36ad89d825211ea736f87749af751cc489c14a38e98ae2afbe2c51346da6efb3.scope - libcontainer container 36ad89d825211ea736f87749af751cc489c14a38e98ae2afbe2c51346da6efb3. 
Jan 30 13:43:04.210204 kubelet[2450]: W0130 13:43:04.210111 2450 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Jan 30 13:43:04.210204 kubelet[2450]: E0130 13:43:04.210160 2450 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jan 30 13:43:04.219046 systemd[1]: Created slice kubepods-burstable-podade7dee8_cac7_465e_9e07_d856927cad78.slice - libcontainer container kubepods-burstable-podade7dee8_cac7_465e_9e07_d856927cad78.slice. Jan 30 13:43:04.227634 systemd[1]: Created slice kubepods-burstable-pod349ca47b_02bf_4de5_91eb_acf5200624c3.slice - libcontainer container kubepods-burstable-pod349ca47b_02bf_4de5_91eb_acf5200624c3.slice. 
Jan 30 13:43:04.240518 containerd[1479]: time="2025-01-30T13:43:04.240475270Z" level=info msg="StartContainer for \"36ad89d825211ea736f87749af751cc489c14a38e98ae2afbe2c51346da6efb3\" returns successfully" Jan 30 13:43:04.393757 kubelet[2450]: I0130 13:43:04.393584 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/349ca47b-02bf-4de5-91eb-acf5200624c3-config-volume\") pod \"coredns-6f6b679f8f-rhx7t\" (UID: \"349ca47b-02bf-4de5-91eb-acf5200624c3\") " pod="kube-system/coredns-6f6b679f8f-rhx7t" Jan 30 13:43:04.393757 kubelet[2450]: I0130 13:43:04.393647 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5dfd\" (UniqueName: \"kubernetes.io/projected/349ca47b-02bf-4de5-91eb-acf5200624c3-kube-api-access-t5dfd\") pod \"coredns-6f6b679f8f-rhx7t\" (UID: \"349ca47b-02bf-4de5-91eb-acf5200624c3\") " pod="kube-system/coredns-6f6b679f8f-rhx7t" Jan 30 13:43:04.393757 kubelet[2450]: I0130 13:43:04.393675 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ade7dee8-cac7-465e-9e07-d856927cad78-config-volume\") pod \"coredns-6f6b679f8f-4zcfm\" (UID: \"ade7dee8-cac7-465e-9e07-d856927cad78\") " pod="kube-system/coredns-6f6b679f8f-4zcfm" Jan 30 13:43:04.393757 kubelet[2450]: I0130 13:43:04.393701 2450 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qv5\" (UniqueName: \"kubernetes.io/projected/ade7dee8-cac7-465e-9e07-d856927cad78-kube-api-access-p7qv5\") pod \"coredns-6f6b679f8f-4zcfm\" (UID: \"ade7dee8-cac7-465e-9e07-d856927cad78\") " pod="kube-system/coredns-6f6b679f8f-4zcfm" Jan 30 13:43:05.167737 kubelet[2450]: E0130 13:43:05.167709 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.177712 kubelet[2450]: I0130 13:43:05.177647 2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-cgzst" podStartSLOduration=2.144460052 podStartE2EDuration="9.177626202s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="2025-01-30 13:42:56.993295857 +0000 UTC m=+5.954085116" lastFinishedPulling="2025-01-30 13:43:04.026462007 +0000 UTC m=+12.987251266" observedRunningTime="2025-01-30 13:43:05.1774878 +0000 UTC m=+14.138277049" watchObservedRunningTime="2025-01-30 13:43:05.177626202 +0000 UTC m=+14.138415461" Jan 30 13:43:05.286024 systemd-networkd[1390]: flannel.1: Link UP Jan 30 13:43:05.286037 systemd-networkd[1390]: flannel.1: Gained carrier Jan 30 13:43:05.340711 kubelet[2450]: E0130 13:43:05.340583 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.423329 kubelet[2450]: E0130 13:43:05.423162 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.423950 containerd[1479]: time="2025-01-30T13:43:05.423887449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zcfm,Uid:ade7dee8-cac7-465e-9e07-d856927cad78,Namespace:kube-system,Attempt:0,}" Jan 30 13:43:05.430374 kubelet[2450]: E0130 13:43:05.430349 2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jan 30 13:43:05.430978 containerd[1479]: time="2025-01-30T13:43:05.430870411Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-rhx7t,Uid:349ca47b-02bf-4de5-91eb-acf5200624c3,Namespace:kube-system,Attempt:0,}" Jan 30 13:43:05.459635 systemd-networkd[1390]: cni0: Link UP Jan 30 13:43:05.459847 systemd-networkd[1390]: cni0: Gained carrier Jan 30 13:43:05.460399 systemd-networkd[1390]: cni0: Lost carrier Jan 30 13:43:05.471352 kernel: cni0: port 1(veth56e2f6ca) entered blocking state Jan 30 13:43:05.471439 kernel: cni0: port 1(veth56e2f6ca) entered disabled state Jan 30 13:43:05.471587 systemd-networkd[1390]: veth28d65c92: Link UP Jan 30 13:43:05.472088 systemd-networkd[1390]: veth56e2f6ca: Link UP Jan 30 13:43:05.472419 kernel: veth56e2f6ca: entered allmulticast mode Jan 30 13:43:05.473385 kernel: veth56e2f6ca: entered promiscuous mode Jan 30 13:43:05.474634 kernel: cni0: port 1(veth56e2f6ca) entered blocking state Jan 30 13:43:05.474669 kernel: cni0: port 1(veth56e2f6ca) entered forwarding state Jan 30 13:43:05.476297 kernel: cni0: port 1(veth56e2f6ca) entered disabled state Jan 30 13:43:05.477324 kernel: cni0: port 2(veth28d65c92) entered blocking state Jan 30 13:43:05.479267 kernel: cni0: port 2(veth28d65c92) entered disabled state Jan 30 13:43:05.479321 kernel: veth28d65c92: entered allmulticast mode Jan 30 13:43:05.480298 kernel: veth28d65c92: entered promiscuous mode Jan 30 13:43:05.487458 kernel: cni0: port 2(veth28d65c92) entered blocking state Jan 30 13:43:05.487639 kernel: cni0: port 2(veth28d65c92) entered forwarding state Jan 30 13:43:05.489935 systemd-networkd[1390]: veth28d65c92: Gained carrier Jan 30 13:43:05.490478 systemd-networkd[1390]: cni0: Gained carrier Jan 30 13:43:05.492560 containerd[1479]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 
0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001c938), "name":"cbr0", "type":"bridge"} Jan 30 13:43:05.492560 containerd[1479]: delegateAdd: netconf sent to delegate plugin: Jan 30 13:43:05.497134 kernel: cni0: port 1(veth56e2f6ca) entered blocking state Jan 30 13:43:05.497221 kernel: cni0: port 1(veth56e2f6ca) entered forwarding state Jan 30 13:43:05.497177 systemd-networkd[1390]: veth56e2f6ca: Gained carrier Jan 30 13:43:05.499033 containerd[1479]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Jan 30 13:43:05.499033 containerd[1479]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000018938), "name":"cbr0", "type":"bridge"} Jan 30 13:43:05.499033 containerd[1479]: delegateAdd: netconf sent to delegate plugin: Jan 30 13:43:05.515049 containerd[1479]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-01-30T13:43:05.514924773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:05.515049 containerd[1479]: time="2025-01-30T13:43:05.514989205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:05.515341 containerd[1479]: time="2025-01-30T13:43:05.515013942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:05.515773 containerd[1479]: time="2025-01-30T13:43:05.515719114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:05.519804 containerd[1479]: time="2025-01-30T13:43:05.519490078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:43:05.519804 containerd[1479]: time="2025-01-30T13:43:05.519564909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:43:05.519804 containerd[1479]: time="2025-01-30T13:43:05.519584266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:05.519804 containerd[1479]: time="2025-01-30T13:43:05.519703842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:43:05.539384 systemd[1]: Started cri-containerd-3745d27c8548ae8bf22f6ddad6a7a87505c9061e9f618609b12a6df6482923fe.scope - libcontainer container 3745d27c8548ae8bf22f6ddad6a7a87505c9061e9f618609b12a6df6482923fe. Jan 30 13:43:05.544044 systemd[1]: Started cri-containerd-b2ce8fa4dce321e446f484f7cb71adb4e4cc6b5cac9f996957e904cd5fac648a.scope - libcontainer container b2ce8fa4dce321e446f484f7cb71adb4e4cc6b5cac9f996957e904cd5fac648a. 
Jan 30 13:43:05.552589 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:43:05.560748 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 30 13:43:05.578188 containerd[1479]: time="2025-01-30T13:43:05.578136250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4zcfm,Uid:ade7dee8-cac7-465e-9e07-d856927cad78,Namespace:kube-system,Attempt:0,} returns sandbox id \"3745d27c8548ae8bf22f6ddad6a7a87505c9061e9f618609b12a6df6482923fe\""
Jan 30 13:43:05.579121 kubelet[2450]: E0130 13:43:05.579095    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:05.583396 containerd[1479]: time="2025-01-30T13:43:05.583315545Z" level=info msg="CreateContainer within sandbox \"3745d27c8548ae8bf22f6ddad6a7a87505c9061e9f618609b12a6df6482923fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:43:05.591594 containerd[1479]: time="2025-01-30T13:43:05.591549891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-rhx7t,Uid:349ca47b-02bf-4de5-91eb-acf5200624c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2ce8fa4dce321e446f484f7cb71adb4e4cc6b5cac9f996957e904cd5fac648a\""
Jan 30 13:43:05.592431 kubelet[2450]: E0130 13:43:05.592384    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:05.594467 containerd[1479]: time="2025-01-30T13:43:05.594425403Z" level=info msg="CreateContainer within sandbox \"b2ce8fa4dce321e446f484f7cb71adb4e4cc6b5cac9f996957e904cd5fac648a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 30 13:43:06.059897 containerd[1479]: time="2025-01-30T13:43:06.059819729Z" level=info msg="CreateContainer within sandbox \"3745d27c8548ae8bf22f6ddad6a7a87505c9061e9f618609b12a6df6482923fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3d81e55e36ba28803b1e4701b09bf5e9401659a172179fb0adc4a9b130e57de0\""
Jan 30 13:43:06.060436 containerd[1479]: time="2025-01-30T13:43:06.060398112Z" level=info msg="StartContainer for \"3d81e55e36ba28803b1e4701b09bf5e9401659a172179fb0adc4a9b130e57de0\""
Jan 30 13:43:06.074390 containerd[1479]: time="2025-01-30T13:43:06.074335069Z" level=info msg="CreateContainer within sandbox \"b2ce8fa4dce321e446f484f7cb71adb4e4cc6b5cac9f996957e904cd5fac648a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7d96a07b0f7144e0e12621dcdac9617e52dadde9d4e0524a648d258350a85637\""
Jan 30 13:43:06.075006 containerd[1479]: time="2025-01-30T13:43:06.074982531Z" level=info msg="StartContainer for \"7d96a07b0f7144e0e12621dcdac9617e52dadde9d4e0524a648d258350a85637\""
Jan 30 13:43:06.088463 systemd[1]: Started cri-containerd-3d81e55e36ba28803b1e4701b09bf5e9401659a172179fb0adc4a9b130e57de0.scope - libcontainer container 3d81e55e36ba28803b1e4701b09bf5e9401659a172179fb0adc4a9b130e57de0.
Jan 30 13:43:06.106413 systemd[1]: Started cri-containerd-7d96a07b0f7144e0e12621dcdac9617e52dadde9d4e0524a648d258350a85637.scope - libcontainer container 7d96a07b0f7144e0e12621dcdac9617e52dadde9d4e0524a648d258350a85637.
Jan 30 13:43:06.120956 containerd[1479]: time="2025-01-30T13:43:06.120865823Z" level=info msg="StartContainer for \"3d81e55e36ba28803b1e4701b09bf5e9401659a172179fb0adc4a9b130e57de0\" returns successfully"
Jan 30 13:43:06.140952 containerd[1479]: time="2025-01-30T13:43:06.140837345Z" level=info msg="StartContainer for \"7d96a07b0f7144e0e12621dcdac9617e52dadde9d4e0524a648d258350a85637\" returns successfully"
Jan 30 13:43:06.171098 kubelet[2450]: E0130 13:43:06.171042    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:06.174196 kubelet[2450]: E0130 13:43:06.174165    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:06.174402 kubelet[2450]: E0130 13:43:06.174331    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:06.203940 kubelet[2450]: I0130 13:43:06.202906    2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4zcfm" podStartSLOduration=10.202888356 podStartE2EDuration="10.202888356s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:43:06.202031098 +0000 UTC m=+15.162820347" watchObservedRunningTime="2025-01-30 13:43:06.202888356 +0000 UTC m=+15.163677615"
Jan 30 13:43:06.203940 kubelet[2450]: I0130 13:43:06.203034    2450 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-rhx7t" podStartSLOduration=10.203031126 podStartE2EDuration="10.203031126s" podCreationTimestamp="2025-01-30 13:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:43:06.188419995 +0000 UTC m=+15.149209254" watchObservedRunningTime="2025-01-30 13:43:06.203031126 +0000 UTC m=+15.163820385"
Jan 30 13:43:06.447481 systemd-networkd[1390]: flannel.1: Gained IPv6LL
Jan 30 13:43:06.703396 systemd-networkd[1390]: veth28d65c92: Gained IPv6LL
Jan 30 13:43:07.176061 kubelet[2450]: E0130 13:43:07.176034    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:07.176496 kubelet[2450]: E0130 13:43:07.176146    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:07.215403 systemd-networkd[1390]: cni0: Gained IPv6LL
Jan 30 13:43:07.535421 systemd-networkd[1390]: veth56e2f6ca: Gained IPv6LL
Jan 30 13:43:08.177645 kubelet[2450]: E0130 13:43:08.177619    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:08.178049 kubelet[2450]: E0130 13:43:08.177674    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:43:19.649908 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:53706.service - OpenSSH per-connection server daemon (10.0.0.1:53706).
Jan 30 13:43:19.688676 sshd[3331]: Accepted publickey for core from 10.0.0.1 port 53706 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:19.690507 sshd[3331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:19.694447 systemd-logind[1452]: New session 6 of user core.
Jan 30 13:43:19.713468 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 30 13:43:19.824486 sshd[3331]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:19.828332 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:53706.service: Deactivated successfully.
Jan 30 13:43:19.830355 systemd[1]: session-6.scope: Deactivated successfully.
Jan 30 13:43:19.831069 systemd-logind[1452]: Session 6 logged out. Waiting for processes to exit.
Jan 30 13:43:19.832038 systemd-logind[1452]: Removed session 6.
Jan 30 13:43:24.836182 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:53722.service - OpenSSH per-connection server daemon (10.0.0.1:53722).
Jan 30 13:43:24.874757 sshd[3368]: Accepted publickey for core from 10.0.0.1 port 53722 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:24.876663 sshd[3368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:24.882181 systemd-logind[1452]: New session 7 of user core.
Jan 30 13:43:24.888459 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 30 13:43:25.009207 sshd[3368]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:25.014923 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:53722.service: Deactivated successfully.
Jan 30 13:43:25.017482 systemd[1]: session-7.scope: Deactivated successfully.
Jan 30 13:43:25.018309 systemd-logind[1452]: Session 7 logged out. Waiting for processes to exit.
Jan 30 13:43:25.019327 systemd-logind[1452]: Removed session 7.
Jan 30 13:43:30.020382 systemd[1]: Started sshd@7-10.0.0.30:22-10.0.0.1:56382.service - OpenSSH per-connection server daemon (10.0.0.1:56382).
Jan 30 13:43:30.062982 sshd[3407]: Accepted publickey for core from 10.0.0.1 port 56382 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:30.064576 sshd[3407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:30.068591 systemd-logind[1452]: New session 8 of user core.
Jan 30 13:43:30.079376 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 30 13:43:30.188637 sshd[3407]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:30.193964 systemd[1]: sshd@7-10.0.0.30:22-10.0.0.1:56382.service: Deactivated successfully.
Jan 30 13:43:30.196467 systemd[1]: session-8.scope: Deactivated successfully.
Jan 30 13:43:30.197393 systemd-logind[1452]: Session 8 logged out. Waiting for processes to exit.
Jan 30 13:43:30.198307 systemd-logind[1452]: Removed session 8.
Jan 30 13:43:35.202988 systemd[1]: Started sshd@8-10.0.0.30:22-10.0.0.1:56398.service - OpenSSH per-connection server daemon (10.0.0.1:56398).
Jan 30 13:43:35.238409 sshd[3444]: Accepted publickey for core from 10.0.0.1 port 56398 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:35.239831 sshd[3444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:35.244543 systemd-logind[1452]: New session 9 of user core.
Jan 30 13:43:35.259523 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 30 13:43:35.374465 sshd[3444]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:35.385136 systemd[1]: sshd@8-10.0.0.30:22-10.0.0.1:56398.service: Deactivated successfully.
Jan 30 13:43:35.387137 systemd[1]: session-9.scope: Deactivated successfully.
Jan 30 13:43:35.388780 systemd-logind[1452]: Session 9 logged out. Waiting for processes to exit.
Jan 30 13:43:35.396579 systemd[1]: Started sshd@9-10.0.0.30:22-10.0.0.1:56404.service - OpenSSH per-connection server daemon (10.0.0.1:56404).
Jan 30 13:43:35.397524 systemd-logind[1452]: Removed session 9.
Jan 30 13:43:35.427765 sshd[3462]: Accepted publickey for core from 10.0.0.1 port 56404 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:35.429655 sshd[3462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:35.433812 systemd-logind[1452]: New session 10 of user core.
Jan 30 13:43:35.441351 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 30 13:43:35.591900 sshd[3462]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:35.604420 systemd[1]: sshd@9-10.0.0.30:22-10.0.0.1:56404.service: Deactivated successfully.
Jan 30 13:43:35.606432 systemd[1]: session-10.scope: Deactivated successfully.
Jan 30 13:43:35.608632 systemd-logind[1452]: Session 10 logged out. Waiting for processes to exit.
Jan 30 13:43:35.610479 systemd[1]: Started sshd@10-10.0.0.30:22-10.0.0.1:56418.service - OpenSSH per-connection server daemon (10.0.0.1:56418).
Jan 30 13:43:35.611359 systemd-logind[1452]: Removed session 10.
Jan 30 13:43:35.645792 sshd[3493]: Accepted publickey for core from 10.0.0.1 port 56418 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:35.647520 sshd[3493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:35.651744 systemd-logind[1452]: New session 11 of user core.
Jan 30 13:43:35.662377 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 30 13:43:35.828394 sshd[3493]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:35.833710 systemd[1]: sshd@10-10.0.0.30:22-10.0.0.1:56418.service: Deactivated successfully.
Jan 30 13:43:35.836936 systemd[1]: session-11.scope: Deactivated successfully.
Jan 30 13:43:35.837888 systemd-logind[1452]: Session 11 logged out. Waiting for processes to exit.
Jan 30 13:43:35.838863 systemd-logind[1452]: Removed session 11.
Jan 30 13:43:40.838882 systemd[1]: Started sshd@11-10.0.0.30:22-10.0.0.1:33186.service - OpenSSH per-connection server daemon (10.0.0.1:33186).
Jan 30 13:43:40.875415 sshd[3528]: Accepted publickey for core from 10.0.0.1 port 33186 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:40.876823 sshd[3528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:40.881030 systemd-logind[1452]: New session 12 of user core.
Jan 30 13:43:40.890366 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 30 13:43:40.995042 sshd[3528]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:40.998852 systemd[1]: sshd@11-10.0.0.30:22-10.0.0.1:33186.service: Deactivated successfully.
Jan 30 13:43:41.000628 systemd[1]: session-12.scope: Deactivated successfully.
Jan 30 13:43:41.001224 systemd-logind[1452]: Session 12 logged out. Waiting for processes to exit.
Jan 30 13:43:41.002124 systemd-logind[1452]: Removed session 12.
Jan 30 13:43:46.006445 systemd[1]: Started sshd@12-10.0.0.30:22-10.0.0.1:33188.service - OpenSSH per-connection server daemon (10.0.0.1:33188).
Jan 30 13:43:46.041998 sshd[3564]: Accepted publickey for core from 10.0.0.1 port 33188 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:46.043669 sshd[3564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:46.047922 systemd-logind[1452]: New session 13 of user core.
Jan 30 13:43:46.065421 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 30 13:43:46.174198 sshd[3564]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:46.185952 systemd[1]: sshd@12-10.0.0.30:22-10.0.0.1:33188.service: Deactivated successfully.
Jan 30 13:43:46.188058 systemd[1]: session-13.scope: Deactivated successfully.
Jan 30 13:43:46.190125 systemd-logind[1452]: Session 13 logged out. Waiting for processes to exit.
Jan 30 13:43:46.201819 systemd[1]: Started sshd@13-10.0.0.30:22-10.0.0.1:33190.service - OpenSSH per-connection server daemon (10.0.0.1:33190).
Jan 30 13:43:46.202942 systemd-logind[1452]: Removed session 13.
Jan 30 13:43:46.232595 sshd[3579]: Accepted publickey for core from 10.0.0.1 port 33190 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:46.234032 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:46.238153 systemd-logind[1452]: New session 14 of user core.
Jan 30 13:43:46.244397 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 30 13:43:46.414348 sshd[3579]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:46.421999 systemd[1]: sshd@13-10.0.0.30:22-10.0.0.1:33190.service: Deactivated successfully.
Jan 30 13:43:46.423756 systemd[1]: session-14.scope: Deactivated successfully.
Jan 30 13:43:46.425547 systemd-logind[1452]: Session 14 logged out. Waiting for processes to exit.
Jan 30 13:43:46.434482 systemd[1]: Started sshd@14-10.0.0.30:22-10.0.0.1:33206.service - OpenSSH per-connection server daemon (10.0.0.1:33206).
Jan 30 13:43:46.435458 systemd-logind[1452]: Removed session 14.
Jan 30 13:43:46.467269 sshd[3591]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:46.468917 sshd[3591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:46.472942 systemd-logind[1452]: New session 15 of user core.
Jan 30 13:43:46.482389 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 30 13:43:47.877021 sshd[3591]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:47.886320 systemd[1]: sshd@14-10.0.0.30:22-10.0.0.1:33206.service: Deactivated successfully.
Jan 30 13:43:47.889179 systemd[1]: session-15.scope: Deactivated successfully.
Jan 30 13:43:47.891084 systemd-logind[1452]: Session 15 logged out. Waiting for processes to exit.
Jan 30 13:43:47.899558 systemd[1]: Started sshd@15-10.0.0.30:22-10.0.0.1:59908.service - OpenSSH per-connection server daemon (10.0.0.1:59908).
Jan 30 13:43:47.900535 systemd-logind[1452]: Removed session 15.
Jan 30 13:43:47.931403 sshd[3613]: Accepted publickey for core from 10.0.0.1 port 59908 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:47.933040 sshd[3613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:47.936865 systemd-logind[1452]: New session 16 of user core.
Jan 30 13:43:47.944391 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 30 13:43:48.131763 sshd[3613]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:48.150235 systemd[1]: sshd@15-10.0.0.30:22-10.0.0.1:59908.service: Deactivated successfully.
Jan 30 13:43:48.152137 systemd[1]: session-16.scope: Deactivated successfully.
Jan 30 13:43:48.153699 systemd-logind[1452]: Session 16 logged out. Waiting for processes to exit.
Jan 30 13:43:48.155127 systemd[1]: Started sshd@16-10.0.0.30:22-10.0.0.1:59910.service - OpenSSH per-connection server daemon (10.0.0.1:59910).
Jan 30 13:43:48.156082 systemd-logind[1452]: Removed session 16.
Jan 30 13:43:48.191000 sshd[3625]: Accepted publickey for core from 10.0.0.1 port 59910 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:48.192700 sshd[3625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:48.197348 systemd-logind[1452]: New session 17 of user core.
Jan 30 13:43:48.206399 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 30 13:43:48.327870 sshd[3625]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:48.332311 systemd[1]: sshd@16-10.0.0.30:22-10.0.0.1:59910.service: Deactivated successfully.
Jan 30 13:43:48.334411 systemd[1]: session-17.scope: Deactivated successfully.
Jan 30 13:43:48.335133 systemd-logind[1452]: Session 17 logged out. Waiting for processes to exit.
Jan 30 13:43:48.336137 systemd-logind[1452]: Removed session 17.
Jan 30 13:43:53.376386 systemd[1]: Started sshd@17-10.0.0.30:22-10.0.0.1:59926.service - OpenSSH per-connection server daemon (10.0.0.1:59926).
Jan 30 13:43:53.442065 sshd[3663]: Accepted publickey for core from 10.0.0.1 port 59926 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:53.445114 sshd[3663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:53.461947 systemd-logind[1452]: New session 18 of user core.
Jan 30 13:43:53.475704 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 30 13:43:53.706615 sshd[3663]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:53.720420 systemd[1]: sshd@17-10.0.0.30:22-10.0.0.1:59926.service: Deactivated successfully.
Jan 30 13:43:53.723671 systemd[1]: session-18.scope: Deactivated successfully.
Jan 30 13:43:53.728373 systemd-logind[1452]: Session 18 logged out. Waiting for processes to exit.
Jan 30 13:43:53.733000 systemd-logind[1452]: Removed session 18.
Jan 30 13:43:58.715770 systemd[1]: Started sshd@18-10.0.0.30:22-10.0.0.1:54024.service - OpenSSH per-connection server daemon (10.0.0.1:54024).
Jan 30 13:43:58.750131 sshd[3703]: Accepted publickey for core from 10.0.0.1 port 54024 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:43:58.751470 sshd[3703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:43:58.754891 systemd-logind[1452]: New session 19 of user core.
Jan 30 13:43:58.768368 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 30 13:43:58.868189 sshd[3703]: pam_unix(sshd:session): session closed for user core
Jan 30 13:43:58.871697 systemd[1]: sshd@18-10.0.0.30:22-10.0.0.1:54024.service: Deactivated successfully.
Jan 30 13:43:58.873512 systemd[1]: session-19.scope: Deactivated successfully.
Jan 30 13:43:58.874153 systemd-logind[1452]: Session 19 logged out. Waiting for processes to exit.
Jan 30 13:43:58.874994 systemd-logind[1452]: Removed session 19.
Jan 30 13:44:03.879940 systemd[1]: Started sshd@19-10.0.0.30:22-10.0.0.1:54032.service - OpenSSH per-connection server daemon (10.0.0.1:54032).
Jan 30 13:44:03.915471 sshd[3738]: Accepted publickey for core from 10.0.0.1 port 54032 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:44:03.917339 sshd[3738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:03.921602 systemd-logind[1452]: New session 20 of user core.
Jan 30 13:44:03.934520 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 30 13:44:04.038068 sshd[3738]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:04.041869 systemd[1]: sshd@19-10.0.0.30:22-10.0.0.1:54032.service: Deactivated successfully.
Jan 30 13:44:04.043916 systemd[1]: session-20.scope: Deactivated successfully.
Jan 30 13:44:04.044711 systemd-logind[1452]: Session 20 logged out. Waiting for processes to exit.
Jan 30 13:44:04.045801 systemd-logind[1452]: Removed session 20.
Jan 30 13:44:04.121070 kubelet[2450]: E0130 13:44:04.121040    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:07.120573 kubelet[2450]: E0130 13:44:07.120533    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:09.052784 systemd[1]: Started sshd@20-10.0.0.30:22-10.0.0.1:34110.service - OpenSSH per-connection server daemon (10.0.0.1:34110).
Jan 30 13:44:09.086989 sshd[3774]: Accepted publickey for core from 10.0.0.1 port 34110 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:44:09.088374 sshd[3774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:09.091840 systemd-logind[1452]: New session 21 of user core.
Jan 30 13:44:09.102351 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 30 13:44:09.209808 sshd[3774]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:09.213549 systemd[1]: sshd@20-10.0.0.30:22-10.0.0.1:34110.service: Deactivated successfully.
Jan 30 13:44:09.215615 systemd[1]: session-21.scope: Deactivated successfully.
Jan 30 13:44:09.216510 systemd-logind[1452]: Session 21 logged out. Waiting for processes to exit.
Jan 30 13:44:09.217460 systemd-logind[1452]: Removed session 21.
Jan 30 13:44:14.120818 kubelet[2450]: E0130 13:44:14.120774    2450 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 30 13:44:14.221176 systemd[1]: Started sshd@21-10.0.0.30:22-10.0.0.1:34120.service - OpenSSH per-connection server daemon (10.0.0.1:34120).
Jan 30 13:44:14.258748 sshd[3809]: Accepted publickey for core from 10.0.0.1 port 34120 ssh2: RSA SHA256:5CVmNz7KcUi5XiFI6hIHcAt9PUhPYHR+qHQIWL4Xluc
Jan 30 13:44:14.260410 sshd[3809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:44:14.265204 systemd-logind[1452]: New session 22 of user core.
Jan 30 13:44:14.278426 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 30 13:44:14.379815 sshd[3809]: pam_unix(sshd:session): session closed for user core
Jan 30 13:44:14.383813 systemd[1]: sshd@21-10.0.0.30:22-10.0.0.1:34120.service: Deactivated successfully.
Jan 30 13:44:14.385866 systemd[1]: session-22.scope: Deactivated successfully.
Jan 30 13:44:14.386653 systemd-logind[1452]: Session 22 logged out. Waiting for processes to exit.
Jan 30 13:44:14.387595 systemd-logind[1452]: Removed session 22.