Jul 6 23:55:11.028127 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:55:11.028149 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:11.028160 kernel: BIOS-provided physical RAM map: Jul 6 23:55:11.028167 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 6 23:55:11.028173 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 6 23:55:11.028179 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 6 23:55:11.028187 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Jul 6 23:55:11.028193 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Jul 6 23:55:11.028199 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 6 23:55:11.028208 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 6 23:55:11.028214 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 6 23:55:11.028221 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 6 23:55:11.028231 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 6 23:55:11.028238 kernel: NX (Execute Disable) protection: active Jul 6 23:55:11.028246 kernel: APIC: Static calls initialized Jul 6 23:55:11.028267 kernel: SMBIOS 2.8 present. 
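The command line recorded above carries the full Flatcar /usr verification setup (verity.usr pointing at the USR-A partition by PARTUUID, verity.usrhash with the dm-verity root hash) plus flatcar.first_boot=detected, which is what later triggers the Ignition run seen further down. A minimal sketch of splitting such a command line into bare flags and key=value pairs for inspection; the function name and the naive whitespace split (no quoted values) are assumptions, while the command line itself is the one from the log:

```python
# Minimal sketch: parse a kernel command line (as logged above, or read from
# /proc/cmdline on a live system) into bare flags and key=value pairs.
# Assumption: a plain whitespace split is enough (no quoted values with spaces).
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True   # bare flags become True
    return params

if __name__ == "__main__":
    cmdline = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
               "verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 "
               "rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT "
               "console=ttyS0,115200 flatcar.first_boot=detected "
               "verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876")
    params = parse_cmdline(cmdline)
    print(params["root"], params["verity.usr"], params["flatcar.first_boot"])
```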
Jul 6 23:55:11.028274 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 6 23:55:11.028281 kernel: Hypervisor detected: KVM Jul 6 23:55:11.028300 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 6 23:55:11.028320 kernel: kvm-clock: using sched offset of 3156477582 cycles Jul 6 23:55:11.028327 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 6 23:55:11.028335 kernel: tsc: Detected 2794.748 MHz processor Jul 6 23:55:11.028342 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:55:11.028349 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:55:11.028356 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Jul 6 23:55:11.028367 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 6 23:55:11.028374 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:55:11.028381 kernel: Using GB pages for direct mapping Jul 6 23:55:11.028388 kernel: ACPI: Early table checksum verification disabled Jul 6 23:55:11.028395 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Jul 6 23:55:11.028402 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028409 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028416 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028426 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 6 23:55:11.028433 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028440 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028447 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028454 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:55:11.028461 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Jul 6 23:55:11.028468 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Jul 6 23:55:11.028479 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 6 23:55:11.028489 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Jul 6 23:55:11.028496 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Jul 6 23:55:11.028503 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Jul 6 23:55:11.028510 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Jul 6 23:55:11.028518 kernel: No NUMA configuration found Jul 6 23:55:11.028525 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Jul 6 23:55:11.028532 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Jul 6 23:55:11.028542 kernel: Zone ranges: Jul 6 23:55:11.028549 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:55:11.028556 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Jul 6 23:55:11.028564 kernel: Normal empty Jul 6 23:55:11.028571 kernel: Movable zone start for each node Jul 6 23:55:11.028578 kernel: Early memory node ranges Jul 6 23:55:11.028585 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 6 23:55:11.028592 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Jul 6 23:55:11.028600 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Jul 6 23:55:11.028610 kernel: On 
node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:55:11.028621 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 6 23:55:11.028628 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Jul 6 23:55:11.028636 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 6 23:55:11.028643 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 6 23:55:11.028650 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:55:11.028657 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 6 23:55:11.028665 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 6 23:55:11.028672 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:55:11.028682 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 6 23:55:11.028689 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 6 23:55:11.028696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 6 23:55:11.028704 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 6 23:55:11.028711 kernel: TSC deadline timer available Jul 6 23:55:11.028718 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 6 23:55:11.028726 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 6 23:55:11.028741 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 6 23:55:11.028754 kernel: kvm-guest: setup PV sched yield Jul 6 23:55:11.028765 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 6 23:55:11.028780 kernel: Booting paravirtualized kernel on KVM Jul 6 23:55:11.028796 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:55:11.028813 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 6 23:55:11.028828 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 6 23:55:11.028845 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 6 23:55:11.028867 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 6 23:55:11.028883 kernel: kvm-guest: PV spinlocks enabled Jul 6 23:55:11.028892 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 6 23:55:11.028920 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:11.028930 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 6 23:55:11.028937 kernel: random: crng init done Jul 6 23:55:11.028944 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:55:11.028952 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:55:11.028959 kernel: Fallback order for Node 0: 0 Jul 6 23:55:11.028966 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Jul 6 23:55:11.028973 kernel: Policy zone: DMA32 Jul 6 23:55:11.028984 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:55:11.028992 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 136900K reserved, 0K cma-reserved) Jul 6 23:55:11.029000 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 6 23:55:11.029007 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:55:11.029014 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:55:11.029021 kernel: Dynamic Preempt: voluntary Jul 6 23:55:11.029029 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:55:11.029037 kernel: rcu: RCU event tracing is enabled. Jul 6 23:55:11.029044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 6 23:55:11.029055 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:55:11.029062 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:55:11.029069 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:55:11.029077 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:55:11.029087 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 6 23:55:11.029095 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 6 23:55:11.029102 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 6 23:55:11.029110 kernel: Console: colour VGA+ 80x25 Jul 6 23:55:11.029117 kernel: printk: console [ttyS0] enabled Jul 6 23:55:11.029124 kernel: ACPI: Core revision 20230628 Jul 6 23:55:11.029134 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 6 23:55:11.029142 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:55:11.029149 kernel: x2apic enabled Jul 6 23:55:11.029156 kernel: APIC: Switched APIC routing to: physical x2apic Jul 6 23:55:11.029164 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 6 23:55:11.029171 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 6 23:55:11.029178 kernel: kvm-guest: setup PV IPIs Jul 6 23:55:11.029196 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 6 23:55:11.029204 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 6 23:55:11.029211 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748) Jul 6 23:55:11.029219 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 6 23:55:11.029229 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 6 23:55:11.029236 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 6 23:55:11.029244 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:55:11.029259 kernel: Spectre V2 : Mitigation: Retpolines Jul 6 23:55:11.029267 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 6 23:55:11.029279 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 6 23:55:11.029310 kernel: RETBleed: Mitigation: untrained return thunk Jul 6 23:55:11.029322 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:55:11.029330 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:55:11.029338 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! 
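The mitigation lines here (Spectre V1/V2, RETBleed, Speculative Store Bypass, and the SRSO warning that continues just below) reflect the same state the kernel exposes under /sys/devices/system/cpu/vulnerabilities/ on the running system. A small sketch for dumping that view; which entries exist depends on the kernel version, and the output formatting is an assumption:

```python
# Minimal sketch: print the kernel's own mitigation status, which mirrors the
# Spectre/RETBleed/SRSO lines seen in the boot log above.
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

def mitigation_status() -> dict:
    # Each file holds a one-line status such as "Mitigation: Retpolines".
    return {f.name: f.read_text().strip() for f in sorted(VULN_DIR.iterdir())}

if __name__ == "__main__":
    for name, status in mitigation_status().items():
        print(f"{name:24s} {status}")
```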
Jul 6 23:55:11.029346 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 6 23:55:11.029354 kernel: x86/bugs: return thunk changed Jul 6 23:55:11.029362 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 6 23:55:11.029373 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:55:11.029381 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:55:11.029389 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:55:11.029396 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:55:11.029404 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 6 23:55:11.029412 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:55:11.029420 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:55:11.029427 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:55:11.029435 kernel: landlock: Up and running. Jul 6 23:55:11.029445 kernel: SELinux: Initializing. Jul 6 23:55:11.029453 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:55:11.029461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:55:11.029468 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 6 23:55:11.029476 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:55:11.029484 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:55:11.029492 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:55:11.029500 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 6 23:55:11.029510 kernel: ... version: 0 Jul 6 23:55:11.029520 kernel: ... bit width: 48 Jul 6 23:55:11.029528 kernel: ... generic registers: 6 Jul 6 23:55:11.029536 kernel: ... value mask: 0000ffffffffffff Jul 6 23:55:11.029544 kernel: ... max period: 00007fffffffffff Jul 6 23:55:11.029551 kernel: ... fixed-purpose events: 0 Jul 6 23:55:11.029559 kernel: ... event mask: 000000000000003f Jul 6 23:55:11.029566 kernel: signal: max sigframe size: 1776 Jul 6 23:55:11.029574 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:55:11.029581 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:55:11.029591 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:55:11.029599 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:55:11.029607 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 6 23:55:11.029614 kernel: smp: Brought up 1 node, 4 CPUs Jul 6 23:55:11.029622 kernel: smpboot: Max logical packages: 1 Jul 6 23:55:11.029629 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 6 23:55:11.029637 kernel: devtmpfs: initialized Jul 6 23:55:11.029645 kernel: x86/mm: Memory block size: 128MB Jul 6 23:55:11.029652 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:55:11.029662 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 6 23:55:11.029670 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:55:11.029678 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:55:11.029685 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:55:11.029693 kernel: audit: type=2000 audit(1751846110.157:1): state=initialized audit_enabled=0 res=1 Jul 6 23:55:11.029701 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:55:11.029708 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:55:11.029716 kernel: cpuidle: using governor menu Jul 6 23:55:11.029724 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:55:11.029734 kernel: dca service started, version 1.12.1 Jul 6 23:55:11.029742 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 6 23:55:11.029750 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 6 23:55:11.029758 kernel: PCI: Using configuration type 1 for base access Jul 6 23:55:11.029765 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 6 23:55:11.029773 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:55:11.029781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:55:11.029788 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:55:11.029796 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:55:11.029806 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:55:11.029814 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:55:11.029821 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:55:11.029829 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:55:11.029837 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:55:11.029845 kernel: ACPI: Interpreter enabled Jul 6 23:55:11.029852 kernel: ACPI: PM: (supports S0 S3 S5) Jul 6 23:55:11.029860 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:55:11.029868 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:55:11.029878 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:55:11.029886 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 6 23:55:11.029893 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 6 23:55:11.030129 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:55:11.030360 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 6 23:55:11.030558 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 6 23:55:11.030570 kernel: PCI host bridge to bus 0000:00 Jul 6 23:55:11.030740 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 6 23:55:11.030867 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 6 
23:55:11.030985 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:55:11.031109 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 6 23:55:11.031232 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:11.031406 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Jul 6 23:55:11.031530 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:55:11.031697 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 6 23:55:11.031935 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 6 23:55:11.032072 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 6 23:55:11.032216 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 6 23:55:11.032402 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 6 23:55:11.032560 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:55:11.032744 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 6 23:55:11.032893 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 6 23:55:11.033023 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 6 23:55:11.033151 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 6 23:55:11.033330 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 6 23:55:11.033463 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Jul 6 23:55:11.033589 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 6 23:55:11.033717 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 6 23:55:11.033872 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 6 23:55:11.034004 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Jul 6 23:55:11.034134 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 6 23:55:11.034274 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 6 23:55:11.034420 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 6 23:55:11.034563 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 6 23:55:11.034697 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 6 23:55:11.034854 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 6 23:55:11.034987 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Jul 6 23:55:11.035113 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Jul 6 23:55:11.035274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 6 23:55:11.035426 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 6 23:55:11.035441 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 6 23:55:11.035455 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 6 23:55:11.035464 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:55:11.035471 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 6 23:55:11.035479 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 6 23:55:11.035487 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 6 23:55:11.035495 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 6 23:55:11.035503 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 6 23:55:11.035511 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 6 23:55:11.035518 kernel: ACPI: PCI: Interrupt link GSIB configured for 
IRQ 17 Jul 6 23:55:11.035529 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 6 23:55:11.035537 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 6 23:55:11.035545 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 6 23:55:11.035553 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 6 23:55:11.035560 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 6 23:55:11.035570 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 6 23:55:11.035586 kernel: iommu: Default domain type: Translated Jul 6 23:55:11.035597 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:55:11.035608 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:55:11.035623 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:55:11.035634 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 6 23:55:11.035645 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Jul 6 23:55:11.035795 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 6 23:55:11.035923 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 6 23:55:11.036049 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:55:11.036059 kernel: vgaarb: loaded Jul 6 23:55:11.036067 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 6 23:55:11.036080 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 6 23:55:11.036087 kernel: clocksource: Switched to clocksource kvm-clock Jul 6 23:55:11.036095 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:55:11.036103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:55:11.036111 kernel: pnp: PnP ACPI init Jul 6 23:55:11.036279 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 6 23:55:11.036364 kernel: pnp: PnP ACPI: found 6 devices Jul 6 23:55:11.036372 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:55:11.036385 kernel: NET: Registered PF_INET protocol family Jul 6 23:55:11.036393 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:55:11.036401 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:55:11.036409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:55:11.036416 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:55:11.036424 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:55:11.036434 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:55:11.036445 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:55:11.036456 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:55:11.036471 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:55:11.036479 kernel: NET: Registered PF_XDP protocol family Jul 6 23:55:11.036608 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 6 23:55:11.036724 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 6 23:55:11.036841 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 6 23:55:11.036956 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 6 23:55:11.037071 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 6 23:55:11.037186 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Jul 6 
23:55:11.037198 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:55:11.037210 kernel: Initialise system trusted keyrings Jul 6 23:55:11.037218 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:55:11.037226 kernel: Key type asymmetric registered Jul 6 23:55:11.037234 kernel: Asymmetric key parser 'x509' registered Jul 6 23:55:11.037242 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:55:11.037258 kernel: io scheduler mq-deadline registered Jul 6 23:55:11.037266 kernel: io scheduler kyber registered Jul 6 23:55:11.037274 kernel: io scheduler bfq registered Jul 6 23:55:11.037282 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:55:11.037309 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 6 23:55:11.037317 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 6 23:55:11.037325 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 6 23:55:11.037333 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:55:11.037341 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:55:11.037349 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 6 23:55:11.037356 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:55:11.037366 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:55:11.037382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:55:11.037560 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 6 23:55:11.037686 kernel: rtc_cmos 00:04: registered as rtc0 Jul 6 23:55:11.037806 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:55:10 UTC (1751846110) Jul 6 23:55:11.037925 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 6 23:55:11.037935 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 6 23:55:11.037943 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:55:11.037951 kernel: Segment Routing with IPv6 Jul 6 23:55:11.037959 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:55:11.037971 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:55:11.037979 kernel: Key type dns_resolver registered Jul 6 23:55:11.037986 kernel: IPI shorthand broadcast: enabled Jul 6 23:55:11.037994 kernel: sched_clock: Marking stable (982002864, 104841367)->(1108041243, -21197012) Jul 6 23:55:11.038002 kernel: registered taskstats version 1 Jul 6 23:55:11.038009 kernel: Loading compiled-in X.509 certificates Jul 6 23:55:11.038017 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:55:11.038025 kernel: Key type .fscrypt registered Jul 6 23:55:11.038032 kernel: Key type fscrypt-provisioning registered Jul 6 23:55:11.038043 kernel: ima: No TPM chip found, activating TPM-bypass! 
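The rtc_cmos entry above reports the boot-time clock both as an ISO timestamp and as a Unix epoch (1751846110), and the earlier audit record audit(1751846110.157:1) carries the same epoch. A quick check that the two representations agree:

```python
# Verify that epoch 1751846110 is indeed 2025-07-06T23:55:10 UTC,
# matching the rtc_cmos and audit entries in the log above.
from datetime import datetime, timezone

epoch = 1751846110
stamp = datetime.fromtimestamp(epoch, tz=timezone.utc)
assert stamp == datetime(2025, 7, 6, 23, 55, 10, tzinfo=timezone.utc)
print(stamp.isoformat())  # 2025-07-06T23:55:10+00:00
```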
Jul 6 23:55:11.038050 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:55:11.038058 kernel: ima: No architecture policies found Jul 6 23:55:11.038065 kernel: clk: Disabling unused clocks Jul 6 23:55:11.038073 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:55:11.038080 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:55:11.038088 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:55:11.038096 kernel: Run /init as init process Jul 6 23:55:11.038106 kernel: with arguments: Jul 6 23:55:11.038114 kernel: /init Jul 6 23:55:11.038121 kernel: with environment: Jul 6 23:55:11.038128 kernel: HOME=/ Jul 6 23:55:11.038136 kernel: TERM=linux Jul 6 23:55:11.038143 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:55:11.038153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:11.038163 systemd[1]: Detected virtualization kvm. Jul 6 23:55:11.038174 systemd[1]: Detected architecture x86-64. Jul 6 23:55:11.038182 systemd[1]: Running in initrd. Jul 6 23:55:11.038190 systemd[1]: No hostname configured, using default hostname. Jul 6 23:55:11.038198 systemd[1]: Hostname set to . Jul 6 23:55:11.038206 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:55:11.038215 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:55:11.038223 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:11.038231 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:11.038243 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 6 23:55:11.038260 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:11.038282 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 6 23:55:11.038332 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 6 23:55:11.038342 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 6 23:55:11.038354 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 6 23:55:11.038363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:11.038371 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:11.038380 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:11.038389 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:11.038397 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:11.038405 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:55:11.038414 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:11.038425 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:11.038434 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 6 23:55:11.038443 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Jul 6 23:55:11.038452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:11.038460 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:11.038469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:11.038478 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:11.038486 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 6 23:55:11.038495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:11.038506 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 6 23:55:11.038514 systemd[1]: Starting systemd-fsck-usr.service... Jul 6 23:55:11.038523 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:11.038531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:11.038540 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:11.038548 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:11.038557 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:11.038565 systemd[1]: Finished systemd-fsck-usr.service. Jul 6 23:55:11.038597 systemd-journald[193]: Collecting audit messages is disabled. Jul 6 23:55:11.038619 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:11.038628 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:11.038636 systemd-journald[193]: Journal started Jul 6 23:55:11.038659 systemd-journald[193]: Runtime Journal (/run/log/journal/6b35572b67fe45999c383bf7532d4569) is 6.0M, max 48.4M, 42.3M free. Jul 6 23:55:11.030647 systemd-modules-load[194]: Inserted module 'overlay' Jul 6 23:55:11.067672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 6 23:55:11.067689 kernel: Bridge firewalling registered Jul 6 23:55:11.058043 systemd-modules-load[194]: Inserted module 'br_netfilter' Jul 6 23:55:11.070598 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:11.072419 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:11.073119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:11.089505 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:11.090644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:55:11.091572 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:11.096163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:11.106965 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:11.111191 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:11.111736 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:11.112807 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:11.120428 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 6 23:55:11.122180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:11.135909 dracut-cmdline[226]: dracut-dracut-053 Jul 6 23:55:11.139308 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:55:11.160260 systemd-resolved[230]: Positive Trust Anchors: Jul 6 23:55:11.160282 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:11.160327 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:11.162975 systemd-resolved[230]: Defaulting to hostname 'linux'. Jul 6 23:55:11.164152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:11.169185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:11.241330 kernel: SCSI subsystem initialized Jul 6 23:55:11.251341 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:55:11.263314 kernel: iscsi: registered transport (tcp) Jul 6 23:55:11.284310 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:55:11.284349 kernel: QLogic iSCSI HBA Driver Jul 6 23:55:11.344352 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:11.352564 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:55:11.377926 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:55:11.377984 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:55:11.377997 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 6 23:55:11.419348 kernel: raid6: avx2x4 gen() 30317 MB/s Jul 6 23:55:11.436320 kernel: raid6: avx2x2 gen() 30615 MB/s Jul 6 23:55:11.453406 kernel: raid6: avx2x1 gen() 25712 MB/s Jul 6 23:55:11.453494 kernel: raid6: using algorithm avx2x2 gen() 30615 MB/s Jul 6 23:55:11.471394 kernel: raid6: .... xor() 19790 MB/s, rmw enabled Jul 6 23:55:11.471460 kernel: raid6: using avx2x2 recovery algorithm Jul 6 23:55:11.492342 kernel: xor: automatically using best checksumming function avx Jul 6 23:55:11.650346 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:55:11.665130 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:11.677468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:11.694621 systemd-udevd[414]: Using default interface naming scheme 'v255'. Jul 6 23:55:11.700615 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:11.716497 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
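The raid6 lines above show the kernel benchmarking each gen() implementation and keeping the fastest one (avx2x2 at 30615 MB/s). The same selection, using the throughput figures measured in this boot, reduces to a max over the benchmark results:

```python
# The kernel benchmarks each RAID6 gen() implementation and picks the fastest;
# the rates below (MB/s) are the ones measured in the log above.
rates = {"avx2x4": 30317, "avx2x2": 30615, "avx2x1": 25712}
best = max(rates, key=rates.get)
print(best, rates[best])  # avx2x2 30615, matching "using algorithm avx2x2 gen()"
```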
Jul 6 23:55:11.734552 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Jul 6 23:55:11.773171 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:11.799582 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:11.871271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:11.879461 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:55:11.894773 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:11.896986 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:11.898314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:11.898798 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:11.906862 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:55:11.915310 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 6 23:55:11.917556 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:11.920376 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:55:11.925025 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:55:11.925101 kernel: GPT:9289727 != 19775487 Jul 6 23:55:11.925115 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 6 23:55:11.925126 kernel: GPT:9289727 != 19775487 Jul 6 23:55:11.925135 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:55:11.925145 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:55:11.929334 kernel: cryptd: max_cpu_qlen set to 1000 Jul 6 23:55:11.945805 kernel: AVX2 version of gcm_enc/dec engaged. Jul 6 23:55:11.945832 kernel: AES CTR mode by8 optimization enabled Jul 6 23:55:11.958312 kernel: libata version 3.00 loaded. Jul 6 23:55:11.963465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:55:11.964591 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (463) Jul 6 23:55:11.968312 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470) Jul 6 23:55:11.968341 kernel: ahci 0000:00:1f.2: version 3.0 Jul 6 23:55:11.968528 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 6 23:55:11.970752 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 6 23:55:11.970952 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 6 23:55:11.979319 kernel: scsi host0: ahci Jul 6 23:55:11.980995 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
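The GPT warnings above are the usual sign of an image built for a smaller disk and then grown: the primary header still places the backup header at LBA 9289727, while on a 19775488-sector vda the backup belongs in the last LBA. disk-uuid.service rewrites both headers a little further down ("Primary Header is updated ... Secondary Header is updated"). A worked check of the numbers from the log:

```python
# The GPT warning above: the primary header records the backup header's LBA,
# which should be the disk's last sector. All numbers are taken from the log.
logical_blocks = 19775488          # vda: 19775488 512-byte logical blocks
recorded_backup_lba = 9289727      # where the primary header says the backup is
expected_backup_lba = logical_blocks - 1

print(expected_backup_lba)                          # 19775487, as in "9289727 != 19775487"
print(recorded_backup_lba == expected_backup_lba)   # False -> the image was grown
print(f"{logical_blocks * 512 / 2**30:.2f} GiB")    # ~9.43 GiB, matching the log
```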
Jul 6 23:55:11.985328 kernel: scsi host1: ahci Jul 6 23:55:11.985547 kernel: scsi host2: ahci Jul 6 23:55:11.985703 kernel: scsi host3: ahci Jul 6 23:55:11.985857 kernel: scsi host4: ahci Jul 6 23:55:11.987693 kernel: scsi host5: ahci Jul 6 23:55:11.987879 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Jul 6 23:55:11.987891 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Jul 6 23:55:11.988931 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Jul 6 23:55:11.989993 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Jul 6 23:55:11.991021 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Jul 6 23:55:11.991057 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Jul 6 23:55:11.992770 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:55:11.994915 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:55:12.003974 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:55:12.020480 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:55:12.021670 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:12.021731 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:55:12.023499 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:12.024768 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:12.024844 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:12.025112 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:12.026458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:12.045183 disk-uuid[562]: Primary Header is updated. Jul 6 23:55:12.045183 disk-uuid[562]: Secondary Entries is updated. Jul 6 23:55:12.045183 disk-uuid[562]: Secondary Header is updated. Jul 6 23:55:12.049319 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:55:12.054311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:55:12.180139 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:12.196760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 6 23:55:12.219180 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:55:12.303607 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:12.303710 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:12.303742 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:12.305331 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:12.305426 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jul 6 23:55:12.306338 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 6 23:55:12.307339 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 6 23:55:12.308664 kernel: ata3.00: applying bridge limits Jul 6 23:55:12.308696 kernel: ata3.00: configured for UDMA/100 Jul 6 23:55:12.309325 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 6 23:55:12.355318 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 6 23:55:12.355560 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 6 23:55:12.369315 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 6 23:55:13.136342 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:55:13.136649 disk-uuid[565]: The operation has completed successfully. Jul 6 23:55:13.174704 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:55:13.175721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:55:13.191482 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:55:13.197798 sh[591]: Success Jul 6 23:55:13.211313 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 6 23:55:13.251025 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:55:13.266021 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:55:13.269407 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 6 23:55:13.282379 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 6 23:55:13.282410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:13.282421 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 6 23:55:13.284323 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 6 23:55:13.284341 kernel: BTRFS info (device dm-0): using free space tree Jul 6 23:55:13.290576 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:55:13.292404 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:55:13.320452 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:55:13.322946 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:55:13.332875 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:13.332931 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:13.332946 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:55:13.336309 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:55:13.346838 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 6 23:55:13.349355 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:13.362237 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Jul 6 23:55:13.373492 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:55:13.523087 ignition[687]: Ignition 2.19.0 Jul 6 23:55:13.523098 ignition[687]: Stage: fetch-offline Jul 6 23:55:13.523140 ignition[687]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:13.523150 ignition[687]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:13.523271 ignition[687]: parsed url from cmdline: "" Jul 6 23:55:13.523276 ignition[687]: no config URL provided Jul 6 23:55:13.523281 ignition[687]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:55:13.523305 ignition[687]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:55:13.523336 ignition[687]: op(1): [started] loading QEMU firmware config module Jul 6 23:55:13.523341 ignition[687]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:55:13.531981 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:13.533336 ignition[687]: op(1): [finished] loading QEMU firmware config module Jul 6 23:55:13.544501 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:13.550738 ignition[687]: parsing config with SHA512: e0e9efa9190a27a90cdc73a4762923d274c13a59c433293c78490a3aa2cdf490d6826fda69f54d35e14dd2db52d20e7d4094af533007c108dd9eb49a43abafdf Jul 6 23:55:13.554053 unknown[687]: fetched base config from "system" Jul 6 23:55:13.554065 unknown[687]: fetched user config from "qemu" Jul 6 23:55:13.554418 ignition[687]: fetch-offline: fetch-offline passed Jul 6 23:55:13.557016 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:13.554483 ignition[687]: Ignition finished successfully Jul 6 23:55:13.572839 systemd-networkd[779]: lo: Link UP Jul 6 23:55:13.572850 systemd-networkd[779]: lo: Gained carrier Jul 6 23:55:13.575006 systemd-networkd[779]: Enumeration completed Jul 6 23:55:13.575125 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:13.575564 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:13.575570 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:13.576781 systemd-networkd[779]: eth0: Link UP Jul 6 23:55:13.576786 systemd-networkd[779]: eth0: Gained carrier Jul 6 23:55:13.576795 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:13.577669 systemd[1]: Reached target network.target - Network. Jul 6 23:55:13.579674 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:55:13.591418 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 6 23:55:13.598345 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:55:13.606871 ignition[782]: Ignition 2.19.0 Jul 6 23:55:13.606884 ignition[782]: Stage: kargs Jul 6 23:55:13.607066 ignition[782]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:13.607077 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:13.611337 ignition[782]: kargs: kargs passed Jul 6 23:55:13.612044 ignition[782]: Ignition finished successfully Jul 6 23:55:13.616275 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
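During fetch-offline, Ignition logs the SHA-512 of the raw config it parsed (e0e9efa9... above) after merging the base config from "system" with the user config served over qemu_fw_cfg. The served config itself is not part of this log, so the sketch below only shows how such a digest is formed over config bytes; the placeholder JSON and spec version are assumptions, and its digest will not match the logged value:

```python
# Minimal sketch: Ignition logs a SHA-512 over the raw config it parsed
# ("parsing config with SHA512: e0e9..." above). The real config served via
# qemu_fw_cfg is not included in this log, so the JSON below is a placeholder
# and its digest will NOT reproduce the logged value.
import hashlib
import json

placeholder_config = json.dumps({"ignition": {"version": "3.4.0"}}).encode()
print(hashlib.sha512(placeholder_config).hexdigest())
```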
Jul 6 23:55:13.630417 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:55:13.734764 ignition[791]: Ignition 2.19.0 Jul 6 23:55:13.734777 ignition[791]: Stage: disks Jul 6 23:55:13.734953 ignition[791]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:13.734965 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:13.737974 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:55:13.735784 ignition[791]: disks: disks passed Jul 6 23:55:13.740635 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:13.735836 ignition[791]: Ignition finished successfully Jul 6 23:55:13.742833 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:55:13.744302 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:13.746078 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:13.747340 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:13.759649 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:55:13.772012 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 6 23:55:13.904602 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:55:13.921414 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 6 23:55:14.012330 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 6 23:55:14.012397 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:55:14.013277 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:14.022367 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:14.023553 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:55:14.024843 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:55:14.024878 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:55:14.024899 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:14.036312 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Jul 6 23:55:14.036341 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:55:14.038982 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:14.039004 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:14.039022 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:55:14.043560 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:55:14.047425 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:55:14.049525 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:55:14.082720 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:55:14.088942 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:55:14.095144 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:55:14.100669 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:55:14.208448 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:14.217403 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:55:14.219050 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:55:14.227312 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:14.244609 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:55:14.256188 ignition[924]: INFO : Ignition 2.19.0 Jul 6 23:55:14.256188 ignition[924]: INFO : Stage: mount Jul 6 23:55:14.257883 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:14.257883 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:14.260709 ignition[924]: INFO : mount: mount passed Jul 6 23:55:14.261486 ignition[924]: INFO : Ignition finished successfully Jul 6 23:55:14.264667 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:55:14.277460 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:55:14.281517 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:55:14.284828 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:55:14.302318 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (936) Jul 6 23:55:14.302368 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 6 23:55:14.304502 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 6 23:55:14.304534 kernel: BTRFS info (device vda6): using free space tree Jul 6 23:55:14.308335 kernel: BTRFS info (device vda6): auto enabling async discard Jul 6 23:55:14.309862 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:55:14.343377 ignition[953]: INFO : Ignition 2.19.0 Jul 6 23:55:14.343377 ignition[953]: INFO : Stage: files Jul 6 23:55:14.345568 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:14.345568 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:14.345568 ignition[953]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:55:14.345568 ignition[953]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:55:14.345568 ignition[953]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:55:14.353235 ignition[953]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:55:14.353235 ignition[953]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:55:14.353235 ignition[953]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:55:14.353235 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:55:14.353235 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jul 6 23:55:14.349177 unknown[953]: wrote ssh authorized keys file for user: core Jul 6 23:55:14.392733 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 23:55:14.566359 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jul 6 23:55:14.566359 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:55:14.570735 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jul 6 23:55:14.720661 systemd-networkd[779]: eth0: Gained IPv6LL Jul 6 23:55:15.268609 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 6 23:55:16.066997 ignition[953]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jul 6 23:55:16.066997 ignition[953]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 6 23:55:16.070885 ignition[953]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:55:16.118162 ignition[953]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:55:16.125658 ignition[953]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:55:16.127336 ignition[953]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:55:16.127336 ignition[953]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:16.127336 ignition[953]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:55:16.127336 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:16.127336 ignition[953]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:55:16.127336 ignition[953]: INFO : files: files passed Jul 6 23:55:16.127336 ignition[953]: INFO : Ignition finished successfully Jul 6 23:55:16.138928 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:55:16.150435 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 6 23:55:16.152616 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:55:16.160866 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 6 23:55:16.162484 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:55:16.166704 initrd-setup-root-after-ignition[981]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:55:16.170012 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:16.170012 initrd-setup-root-after-ignition[983]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:16.173375 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:55:16.177855 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:16.180825 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:55:16.193569 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:55:16.227974 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:55:16.229033 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:55:16.232537 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:55:16.234760 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:55:16.236970 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:55:16.250510 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:55:16.268008 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:16.280505 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 6 23:55:16.290827 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:16.293195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:16.295608 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:55:16.297508 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:55:16.298555 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:55:16.301145 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:55:16.303180 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:55:16.304983 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:55:16.307108 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:55:16.309404 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:55:16.311634 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:55:16.313666 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:55:16.316051 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:55:16.318234 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:55:16.320360 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:55:16.322009 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:55:16.323019 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:55:16.325327 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:16.327519 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Jul 6 23:55:16.329961 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 6 23:55:16.330960 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:16.333573 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:55:16.334629 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:55:16.336937 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:55:16.338022 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:55:16.340405 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:55:16.342134 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:55:16.343187 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:16.345934 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:55:16.347811 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:55:16.349604 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:55:16.350478 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:55:16.352481 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:55:16.353422 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:55:16.355429 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:55:16.356646 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:55:16.359190 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:55:16.360188 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:55:16.373476 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:55:16.376474 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:55:16.378393 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 6 23:55:16.379626 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:16.382170 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:55:16.382951 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:55:16.389464 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:55:16.389586 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:55:16.393329 ignition[1008]: INFO : Ignition 2.19.0 Jul 6 23:55:16.393329 ignition[1008]: INFO : Stage: umount Jul 6 23:55:16.395020 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:55:16.395020 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:55:16.395020 ignition[1008]: INFO : umount: umount passed Jul 6 23:55:16.395020 ignition[1008]: INFO : Ignition finished successfully Jul 6 23:55:16.396369 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:55:16.396520 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:55:16.397716 systemd[1]: Stopped target network.target - Network. Jul 6 23:55:16.400826 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:55:16.400909 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:55:16.401721 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:55:16.401832 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Jul 6 23:55:16.404227 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:55:16.404348 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:55:16.404634 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:55:16.404698 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 6 23:55:16.407417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:55:16.407850 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:55:16.420435 systemd-networkd[779]: eth0: DHCPv6 lease lost Jul 6 23:55:16.422718 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:55:16.424975 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:55:16.426368 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:55:16.429049 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:55:16.430101 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:55:16.435038 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:55:16.435134 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:16.446410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:55:16.446878 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:55:16.446956 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:55:16.447323 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:55:16.447394 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:16.451341 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:55:16.451408 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:16.452280 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:55:16.452365 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:16.452873 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:16.467546 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:55:16.467708 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:55:16.473646 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:55:16.473841 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:16.476377 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:55:16.476454 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:16.478559 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:55:16.478642 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:16.480737 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:55:16.480807 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:55:16.483242 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:55:16.483349 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:55:16.485429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:55:16.485496 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 6 23:55:16.497454 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:55:16.499728 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:55:16.499804 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:16.501197 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:55:16.501266 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:16.503633 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:55:16.503701 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:16.506267 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:55:16.506361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:16.509097 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:55:16.509251 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:55:16.580492 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:55:16.580647 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:55:16.581974 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 6 23:55:16.583797 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:55:16.583910 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 6 23:55:16.593620 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:55:16.604345 systemd[1]: Switching root. Jul 6 23:55:16.631726 systemd-journald[193]: Journal stopped Jul 6 23:55:17.794190 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Jul 6 23:55:17.794316 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:55:17.794363 kernel: SELinux: policy capability open_perms=1 Jul 6 23:55:17.794379 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:55:17.794393 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:55:17.794406 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:55:17.794420 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:55:17.794436 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:55:17.794450 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:55:17.794466 kernel: audit: type=1403 audit(1751846117.011:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:55:17.794482 systemd[1]: Successfully loaded SELinux policy in 43.049ms. Jul 6 23:55:17.794517 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.299ms. Jul 6 23:55:17.794534 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:55:17.794556 systemd[1]: Detected virtualization kvm. Jul 6 23:55:17.794572 systemd[1]: Detected architecture x86-64. Jul 6 23:55:17.794588 systemd[1]: Detected first boot. Jul 6 23:55:17.794610 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:55:17.794626 zram_generator::config[1052]: No configuration found. Jul 6 23:55:17.794643 systemd[1]: Populated /etc with preset unit settings. 
Jul 6 23:55:17.794665 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:55:17.794682 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:55:17.794700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:17.794717 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:55:17.794734 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:55:17.794750 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:55:17.794766 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:55:17.794781 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:55:17.794797 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:55:17.794820 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:55:17.794836 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:55:17.794853 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:55:17.794870 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:55:17.794885 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:55:17.794901 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:55:17.794917 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 6 23:55:17.794933 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:55:17.794948 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 6 23:55:17.794972 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:55:17.794988 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:55:17.795004 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:55:17.795022 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:55:17.795035 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:55:17.795049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:55:17.795062 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:55:17.795080 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:55:17.795101 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:55:17.795113 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:55:17.795125 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:55:17.795139 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:55:17.795152 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:55:17.795164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:55:17.795176 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:55:17.795188 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 6 23:55:17.795200 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Jul 6 23:55:17.795218 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:55:17.795234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:17.795250 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:55:17.795274 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:55:17.795307 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:55:17.795321 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:55:17.795333 systemd[1]: Reached target machines.target - Containers. Jul 6 23:55:17.795347 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:55:17.795381 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:17.795417 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:55:17.795437 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:55:17.795450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:17.795461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:17.795473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:17.795488 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:55:17.795505 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:17.795522 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:55:17.795551 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 6 23:55:17.795570 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:55:17.795587 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:55:17.795608 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:55:17.795624 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:55:17.795639 kernel: fuse: init (API version 7.39) Jul 6 23:55:17.795653 kernel: loop: module loaded Jul 6 23:55:17.795670 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:55:17.795683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:55:17.795702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:55:17.795715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:55:17.795727 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:55:17.795740 systemd[1]: Stopped verity-setup.service. Jul 6 23:55:17.795754 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:17.795790 systemd-journald[1122]: Collecting audit messages is disabled. Jul 6 23:55:17.795814 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:55:17.795833 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Jul 6 23:55:17.795847 systemd-journald[1122]: Journal started Jul 6 23:55:17.795876 systemd-journald[1122]: Runtime Journal (/run/log/journal/6b35572b67fe45999c383bf7532d4569) is 6.0M, max 48.4M, 42.3M free. Jul 6 23:55:17.554078 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:55:17.572368 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:55:17.572947 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:55:17.798471 systemd[1]: Started systemd-journald.service - Journal Service. Jul 6 23:55:17.799328 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:55:17.800323 kernel: ACPI: bus type drm_connector registered Jul 6 23:55:17.801006 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:55:17.802480 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:55:17.803726 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:55:17.804994 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:55:17.806643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:55:17.808218 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:55:17.808423 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:55:17.809946 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:17.810132 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:17.811689 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:17.811866 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:17.813417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:17.813598 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:17.815099 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:55:17.815275 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:55:17.816746 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:17.816923 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:17.818351 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:55:17.819883 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:55:17.821483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:55:17.837186 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:55:17.846404 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:55:17.849173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:55:17.850379 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:55:17.850414 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:55:17.852495 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 6 23:55:17.855055 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 6 23:55:17.859463 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jul 6 23:55:17.861704 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:17.863722 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:55:17.867801 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:55:17.869155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:17.877706 systemd-journald[1122]: Time spent on flushing to /var/log/journal/6b35572b67fe45999c383bf7532d4569 is 18.073ms for 946 entries. Jul 6 23:55:17.877706 systemd-journald[1122]: System Journal (/var/log/journal/6b35572b67fe45999c383bf7532d4569) is 8.0M, max 195.6M, 187.6M free. Jul 6 23:55:17.980846 systemd-journald[1122]: Received client request to flush runtime journal. Jul 6 23:55:17.875406 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 6 23:55:17.876540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:17.883020 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:55:17.956629 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:55:17.963204 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:55:17.972537 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:55:17.974795 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:55:17.976754 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:55:17.979743 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 6 23:55:17.981441 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:55:17.983186 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:55:17.991942 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:55:18.035406 kernel: loop0: detected capacity change from 0 to 229808 Jul 6 23:55:18.040631 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 6 23:55:18.052143 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 6 23:55:18.069551 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:55:18.071095 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 6 23:55:18.103979 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jul 6 23:55:18.104397 systemd-tmpfiles[1167]: ACLs are not supported, ignoring. Jul 6 23:55:18.109404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:55:18.147129 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:55:18.150699 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:55:18.151388 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. 
Jul 6 23:55:18.158316 kernel: loop1: detected capacity change from 0 to 142488 Jul 6 23:55:18.162806 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:55:18.236328 kernel: loop2: detected capacity change from 0 to 140768 Jul 6 23:55:18.237888 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:55:18.293907 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:55:18.325655 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jul 6 23:55:18.326309 kernel: loop3: detected capacity change from 0 to 229808 Jul 6 23:55:18.325680 systemd-tmpfiles[1189]: ACLs are not supported, ignoring. Jul 6 23:55:18.353250 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:55:18.358353 kernel: loop4: detected capacity change from 0 to 142488 Jul 6 23:55:18.376320 kernel: loop5: detected capacity change from 0 to 140768 Jul 6 23:55:18.389701 (sd-merge)[1192]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:55:18.390556 (sd-merge)[1192]: Merged extensions into '/usr'. Jul 6 23:55:18.396787 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:55:18.396806 systemd[1]: Reloading... Jul 6 23:55:18.527437 zram_generator::config[1218]: No configuration found. Jul 6 23:55:18.618553 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 6 23:55:18.706823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:18.757232 systemd[1]: Reloading finished in 359 ms. Jul 6 23:55:18.792372 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:55:18.794003 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:55:18.810653 systemd[1]: Starting ensure-sysext.service... Jul 6 23:55:18.813121 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:55:18.855598 systemd[1]: Reloading requested from client PID 1256 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:55:18.855619 systemd[1]: Reloading... Jul 6 23:55:18.885660 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:55:18.886188 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:55:18.888119 systemd-tmpfiles[1257]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:55:18.888582 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jul 6 23:55:18.888694 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Jul 6 23:55:18.900271 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:18.900299 systemd-tmpfiles[1257]: Skipping /boot Jul 6 23:55:18.918733 systemd-tmpfiles[1257]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:55:18.918869 systemd-tmpfiles[1257]: Skipping /boot Jul 6 23:55:18.929340 zram_generator::config[1287]: No configuration found. 
Jul 6 23:55:19.051774 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:19.102383 systemd[1]: Reloading finished in 246 ms. Jul 6 23:55:19.119265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:55:19.134037 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:55:19.143580 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 6 23:55:19.146463 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:55:19.148944 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:55:19.154188 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:55:19.161528 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:55:19.166157 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:55:19.170031 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.170225 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:19.173766 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:19.177646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:19.180853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:19.182778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:19.184803 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:55:19.186206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.190648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:19.190884 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:19.192921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:19.193095 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:19.196062 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:19.196274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:19.208428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.209624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:19.221083 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jul 6 23:55:19.221624 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:19.246383 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:19.251593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jul 6 23:55:19.254197 augenrules[1354]: No rules Jul 6 23:55:19.254709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:19.254883 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.256259 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 6 23:55:19.258207 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:55:19.260419 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:55:19.262359 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:19.262586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:19.265084 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:19.265347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:55:19.267966 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:19.268161 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:19.279132 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:55:19.281884 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.283387 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:55:19.290482 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:55:19.296523 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:55:19.306617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:55:19.312730 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:55:19.314360 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:55:19.317924 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:55:19.322529 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:55:19.323628 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:55:19.324091 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:55:19.326021 systemd[1]: Finished ensure-sysext.service. Jul 6 23:55:19.326654 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:55:19.327410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:55:19.329404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:55:19.330963 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:55:19.331628 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:55:19.333091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:55:19.334339 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jul 6 23:55:19.355699 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:55:19.365468 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:55:19.366680 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:55:19.367138 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:55:19.367386 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:55:19.368884 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:55:19.378915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:55:19.414439 systemd-resolved[1327]: Positive Trust Anchors: Jul 6 23:55:19.414886 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:55:19.414985 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:55:19.427312 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1366) Jul 6 23:55:19.450163 systemd-resolved[1327]: Defaulting to hostname 'linux'. Jul 6 23:55:19.457266 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:55:19.458746 systemd-networkd[1391]: lo: Link UP Jul 6 23:55:19.458905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:55:19.460960 systemd-networkd[1391]: lo: Gained carrier Jul 6 23:55:19.462793 systemd-networkd[1391]: Enumeration completed Jul 6 23:55:19.463422 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:55:19.464993 systemd[1]: Reached target network.target - Network. Jul 6 23:55:19.468954 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:55:19.471455 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:55:19.504370 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 6 23:55:19.511378 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:55:19.518742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:55:19.519626 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:55:19.519637 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:55:19.522458 systemd-networkd[1391]: eth0: Link UP Jul 6 23:55:19.522472 systemd-networkd[1391]: eth0: Gained carrier Jul 6 23:55:19.522484 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 6 23:55:19.528512 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 6 23:55:19.531835 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:55:19.533508 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:55:19.538489 systemd-networkd[1391]: eth0: DHCPv4 address 10.0.0.104/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:55:19.540135 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection. Jul 6 23:55:19.542007 systemd-timesyncd[1401]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:55:19.542068 systemd-timesyncd[1401]: Initial clock synchronization to Sun 2025-07-06 23:55:19.925311 UTC. Jul 6 23:55:19.543352 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 6 23:55:19.553460 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:55:19.591314 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 6 23:55:19.591635 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 6 23:55:19.592785 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 6 23:55:19.597304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:55:19.601328 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:55:19.757319 kernel: kvm_amd: TSC scaling supported Jul 6 23:55:19.757391 kernel: kvm_amd: Nested Virtualization enabled Jul 6 23:55:19.757436 kernel: kvm_amd: Nested Paging enabled Jul 6 23:55:19.757463 kernel: kvm_amd: LBR virtualization supported Jul 6 23:55:19.757477 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 6 23:55:19.757489 kernel: kvm_amd: Virtual GIF supported Jul 6 23:55:19.766370 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:55:19.781320 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:55:19.821362 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 6 23:55:19.836515 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:55:19.847718 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:19.886578 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:55:19.888392 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:55:19.889645 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:55:19.890945 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:55:19.892337 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:55:19.894006 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:55:19.895280 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:55:19.896628 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:55:19.897970 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:55:19.898011 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:55:19.899003 systemd[1]: Reached target timers.target - Timer Units. 
Jul 6 23:55:19.901230 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:55:19.904785 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:55:19.915677 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:55:19.918423 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:55:19.920076 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:55:19.921258 systemd[1]: Reached target sockets.target - Socket Units. Jul 6 23:55:19.922316 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:55:19.923319 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:19.923351 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:55:19.924417 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:55:19.926650 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:55:19.929392 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:55:19.931398 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:55:19.935481 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:55:19.938900 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:55:19.941953 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:55:19.944460 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 6 23:55:19.944908 jq[1436]: false Jul 6 23:55:19.947460 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:55:19.950442 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:55:19.956866 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:55:19.958493 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:55:19.958931 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:55:19.964713 dbus-daemon[1435]: [system] SELinux support is enabled Jul 6 23:55:19.965496 systemd[1]: Starting update-engine.service - Update Engine... Jul 6 23:55:19.967834 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:55:19.971263 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:55:19.973465 extend-filesystems[1437]: Found loop3 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found loop4 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found loop5 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found sr0 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda1 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda2 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda3 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found usr Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda4 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda6 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda7 Jul 6 23:55:19.974981 extend-filesystems[1437]: Found vda9 Jul 6 23:55:19.974981 extend-filesystems[1437]: Checking size of /dev/vda9 Jul 6 23:55:20.018497 extend-filesystems[1437]: Resized partition /dev/vda9 Jul 6 23:55:20.026647 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:55:19.976257 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:55:20.026734 update_engine[1444]: I20250706 23:55:19.998831 1444 main.cc:92] Flatcar Update Engine starting Jul 6 23:55:20.026734 update_engine[1444]: I20250706 23:55:20.000907 1444 update_check_scheduler.cc:74] Next update check in 8m26s Jul 6 23:55:20.027070 jq[1446]: true Jul 6 23:55:20.027269 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:55:19.986849 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:55:19.987098 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:55:19.987507 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:55:20.029198 jq[1460]: true Jul 6 23:55:19.987754 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:55:19.991455 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:55:19.992158 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:55:20.019916 (ntainerd)[1461]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:55:20.059007 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:55:20.085065 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:55:20.085065 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:55:20.085065 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:55:20.100256 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1374) Jul 6 23:55:20.084351 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:55:20.101302 extend-filesystems[1437]: Resized filesystem in /dev/vda9 Jul 6 23:55:20.084384 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:55:20.106913 tar[1457]: linux-amd64/LICENSE Jul 6 23:55:20.089278 systemd-logind[1443]: New seat seat0. Jul 6 23:55:20.091514 systemd[1]: Started systemd-logind.service - User Login Management. Jul 6 23:55:20.093990 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:55:20.094250 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 6 23:55:20.105596 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:55:20.108422 tar[1457]: linux-amd64/helm Jul 6 23:55:20.113555 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:55:20.113746 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:55:20.115514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:55:20.115636 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:55:20.161019 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:55:20.177910 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:55:20.264058 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:55:20.283805 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:55:20.294610 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:55:20.303079 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:55:20.303373 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:55:20.306586 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:55:20.386554 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:55:20.442153 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:55:20.445287 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:55:20.451277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:55:20.461802 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:55:20.465807 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:55:20.467613 systemd[1]: Reached target getty.target - Login Prompts. Jul 6 23:55:20.673637 systemd-networkd[1391]: eth0: Gained IPv6LL Jul 6 23:55:20.677838 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 6 23:55:20.691929 systemd[1]: Reached target network-online.target - Network is Online. Jul 6 23:55:20.701019 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 6 23:55:20.707822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:20.714423 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 6 23:55:20.739619 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 6 23:55:20.739914 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 6 23:55:20.742933 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 6 23:55:20.749815 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 6 23:55:20.754878 containerd[1461]: time="2025-07-06T23:55:20.754750751Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:55:20.815536 containerd[1461]: time="2025-07-06T23:55:20.815407738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:55:20.818184 containerd[1461]: time="2025-07-06T23:55:20.818126564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818246766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818277645Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818536832Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818557862Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818645319Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818659923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818923731Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818940361Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818955459Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.818968172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.819078024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819492 containerd[1461]: time="2025-07-06T23:55:20.819366357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819871 containerd[1461]: time="2025-07-06T23:55:20.819513479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:55:20.819871 containerd[1461]: time="2025-07-06T23:55:20.819529459Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 6 23:55:20.819871 containerd[1461]: time="2025-07-06T23:55:20.819651373Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jul 6 23:55:20.819871 containerd[1461]: time="2025-07-06T23:55:20.819726304Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:55:20.843734 tar[1457]: linux-amd64/README.md Jul 6 23:55:20.862560 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 6 23:55:20.951275 containerd[1461]: time="2025-07-06T23:55:20.951065721Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:55:20.951275 containerd[1461]: time="2025-07-06T23:55:20.951258671Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:55:20.951496 containerd[1461]: time="2025-07-06T23:55:20.951285790Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:55:20.951496 containerd[1461]: time="2025-07-06T23:55:20.951358318Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:55:20.951638 containerd[1461]: time="2025-07-06T23:55:20.951536327Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:55:20.951938 containerd[1461]: time="2025-07-06T23:55:20.951904254Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:55:20.952548 containerd[1461]: time="2025-07-06T23:55:20.952450936Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:55:20.952935 containerd[1461]: time="2025-07-06T23:55:20.952873414Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:55:20.952935 containerd[1461]: time="2025-07-06T23:55:20.952920019Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 6 23:55:20.953075 containerd[1461]: time="2025-07-06T23:55:20.952959265Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:55:20.953075 containerd[1461]: time="2025-07-06T23:55:20.953006133Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953075 containerd[1461]: time="2025-07-06T23:55:20.953032086Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953075 containerd[1461]: time="2025-07-06T23:55:20.953046490Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953075 containerd[1461]: time="2025-07-06T23:55:20.953063971Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953243 containerd[1461]: time="2025-07-06T23:55:20.953096791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953243 containerd[1461]: time="2025-07-06T23:55:20.953121873Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953556258Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953675893Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953793356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953826302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953846302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953869485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.953899 containerd[1461]: time="2025-07-06T23:55:20.953893013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.953919754Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.953948794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.953994286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954023021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954049101Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954068282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954085774Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954103759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954129 containerd[1461]: time="2025-07-06T23:55:20.954131004Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:55:20.954391 containerd[1461]: time="2025-07-06T23:55:20.954184968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954391 containerd[1461]: time="2025-07-06T23:55:20.954218681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.954391 containerd[1461]: time="2025-07-06T23:55:20.954255847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.954448 containerd[1461]: time="2025-07-06T23:55:20.954424335Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 6 23:55:20.954549 containerd[1461]: time="2025-07-06T23:55:20.954465879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:55:20.954549 containerd[1461]: time="2025-07-06T23:55:20.954489942Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:55:20.954549 containerd[1461]: time="2025-07-06T23:55:20.954534941Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:55:20.954614 containerd[1461]: time="2025-07-06T23:55:20.954553734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.955054 containerd[1461]: time="2025-07-06T23:55:20.954678420Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:55:20.955054 containerd[1461]: time="2025-07-06T23:55:20.954788334Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:55:20.955054 containerd[1461]: time="2025-07-06T23:55:20.954902825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:55:20.955959 containerd[1461]: time="2025-07-06T23:55:20.955858495Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 6 23:55:20.956323 containerd[1461]: time="2025-07-06T23:55:20.955981018Z" level=info msg="Connect containerd service" Jul 6 23:55:20.956323 containerd[1461]: time="2025-07-06T23:55:20.956103636Z" level=info msg="using legacy CRI server" Jul 6 23:55:20.956323 containerd[1461]: time="2025-07-06T23:55:20.956134461Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 6 23:55:20.956500 containerd[1461]: time="2025-07-06T23:55:20.956456947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 6 23:55:20.957957 containerd[1461]: time="2025-07-06T23:55:20.957875853Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:55:20.958277 containerd[1461]: time="2025-07-06T23:55:20.958174969Z" level=info msg="Start subscribing containerd event" Jul 6 23:55:20.958354 containerd[1461]: time="2025-07-06T23:55:20.958316810Z" level=info msg="Start recovering state" Jul 6 23:55:20.958485 containerd[1461]: time="2025-07-06T23:55:20.958445066Z" level=info msg="Start event monitor" Jul 6 23:55:20.958485 containerd[1461]: time="2025-07-06T23:55:20.958483649Z" level=info msg="Start snapshots syncer" Jul 6 23:55:20.958543 containerd[1461]: time="2025-07-06T23:55:20.958499052Z" level=info msg="Start cni network conf syncer for default" Jul 6 23:55:20.958543 containerd[1461]: time="2025-07-06T23:55:20.958507577Z" level=info msg="Start streaming server" Jul 6 23:55:20.958619 containerd[1461]: time="2025-07-06T23:55:20.958556250Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 6 23:55:20.958619 containerd[1461]: time="2025-07-06T23:55:20.958612272Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 6 23:55:20.958873 systemd[1]: Started containerd.service - containerd container runtime. Jul 6 23:55:20.959334 containerd[1461]: time="2025-07-06T23:55:20.959283220Z" level=info msg="containerd successfully booted in 0.206800s" Jul 6 23:55:22.112063 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:22.113856 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 6 23:55:22.115235 systemd[1]: Startup finished in 1.119s (kernel) + 6.160s (initrd) + 5.144s (userspace) = 12.424s. 
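containerd comes up with its CRI plugin configured for NetworkPluginConfDir /etc/cni/net.d and immediately logs "failed to load cni during init ... no network config found in /etc/cni/net.d", which is expected before a CNI plugin (or kubeadm) drops a config there. A small sketch of the same check follows; the file extensions are an assumption based on what libcni commonly accepts.

import glob
import os

CNI_CONF_DIR = "/etc/cni/net.d"   # NetworkPluginConfDir from the CRI config above

def cni_configs(conf_dir: str = CNI_CONF_DIR) -> list[str]:
    """List CNI network config files in the directory the CRI plugin watches."""
    patterns = ("*.conf", "*.conflist", "*.json")   # assumed libcni extensions
    found: list[str] = []
    for pat in patterns:
        found.extend(sorted(glob.glob(os.path.join(conf_dir, pat))))
    return found

if __name__ == "__main__":
    confs = cni_configs()
    if not confs:
        # Mirrors the "failed to load cni during init" message in the log: pod
        # networking stays unready until a config file appears here.
        print(f"no CNI network config found in {CNI_CONF_DIR}")
    else:
        print("CNI configs:", confs)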
Jul 6 23:55:22.130130 (kubelet)[1548]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:22.678162 kubelet[1548]: E0706 23:55:22.678083 1548 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:22.682898 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:22.683155 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:22.683579 systemd[1]: kubelet.service: Consumed 1.629s CPU time. Jul 6 23:55:24.731597 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 6 23:55:24.733119 systemd[1]: Started sshd@0-10.0.0.104:22-10.0.0.1:32966.service - OpenSSH per-connection server daemon (10.0.0.1:32966). Jul 6 23:55:24.790827 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 32966 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:24.793125 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:24.803399 systemd-logind[1443]: New session 1 of user core. Jul 6 23:55:24.804810 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 6 23:55:24.816560 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 6 23:55:24.831642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:55:24.840743 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:55:24.847119 (systemd)[1565]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:55:24.969305 systemd[1565]: Queued start job for default target default.target. Jul 6 23:55:24.983896 systemd[1565]: Created slice app.slice - User Application Slice. Jul 6 23:55:24.983929 systemd[1565]: Reached target paths.target - Paths. Jul 6 23:55:24.983943 systemd[1565]: Reached target timers.target - Timers. Jul 6 23:55:24.985880 systemd[1565]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:55:24.998741 systemd[1565]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:55:24.998925 systemd[1565]: Reached target sockets.target - Sockets. Jul 6 23:55:24.998947 systemd[1565]: Reached target basic.target - Basic System. Jul 6 23:55:24.998998 systemd[1565]: Reached target default.target - Main User Target. Jul 6 23:55:24.999047 systemd[1565]: Startup finished in 141ms. Jul 6 23:55:24.999517 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:55:25.001356 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:55:25.065442 systemd[1]: Started sshd@1-10.0.0.104:22-10.0.0.1:32974.service - OpenSSH per-connection server daemon (10.0.0.1:32974). Jul 6 23:55:25.105041 sshd[1576]: Accepted publickey for core from 10.0.0.1 port 32974 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:25.107342 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:25.112518 systemd-logind[1443]: New session 2 of user core. Jul 6 23:55:25.122590 systemd[1]: Started session-2.scope - Session 2 of User core. 
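The first kubelet start above exits with status 1 because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is written later by kubeadm init/join. The sketch below only illustrates the shape of a minimal KubeletConfiguration at that path (field names assumed from the kubelet.config.k8s.io/v1beta1 schema, with cgroupDriver and staticPodPath matching the values the kubelet later reports); it is not what kubeadm generates.

import pathlib

# Illustrative minimum only; on this node the real file comes from kubeadm.
MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""

def write_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    p = pathlib.Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)
    print(f"wrote {p} ({p.stat().st_size} bytes)")

if __name__ == "__main__":
    write_kubelet_config()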
Jul 6 23:55:25.180179 sshd[1576]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:25.197520 systemd[1]: sshd@1-10.0.0.104:22-10.0.0.1:32974.service: Deactivated successfully. Jul 6 23:55:25.199345 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:55:25.200859 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:55:25.202438 systemd[1]: Started sshd@2-10.0.0.104:22-10.0.0.1:32988.service - OpenSSH per-connection server daemon (10.0.0.1:32988). Jul 6 23:55:25.203530 systemd-logind[1443]: Removed session 2. Jul 6 23:55:25.235779 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 32988 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:25.237272 sshd[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:25.241469 systemd-logind[1443]: New session 3 of user core. Jul 6 23:55:25.253564 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:55:25.320197 systemd[1]: Started sshd@3-10.0.0.104:22-10.0.0.1:33000.service - OpenSSH per-connection server daemon (10.0.0.1:33000). Jul 6 23:55:25.353505 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 33000 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:25.355242 sshd[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:25.359593 systemd-logind[1443]: New session 4 of user core. Jul 6 23:55:25.370438 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:55:25.447005 systemd[1]: Started sshd@4-10.0.0.104:22-10.0.0.1:33008.service - OpenSSH per-connection server daemon (10.0.0.1:33008). Jul 6 23:55:25.479286 sshd[1593]: Accepted publickey for core from 10.0.0.1 port 33008 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:55:25.480785 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:55:25.484817 systemd-logind[1443]: New session 5 of user core. Jul 6 23:55:25.498496 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 6 23:55:25.522226 sshd[1583]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:25.526495 systemd[1]: sshd@2-10.0.0.104:22-10.0.0.1:32988.service: Deactivated successfully. Jul 6 23:55:25.528745 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:55:25.529367 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:55:25.530237 systemd-logind[1443]: Removed session 3. Jul 6 23:55:25.558859 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:55:25.559224 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:55:25.645286 sshd[1588]: pam_unix(sshd:session): session closed for user core Jul 6 23:55:25.649623 systemd[1]: sshd@3-10.0.0.104:22-10.0.0.1:33000.service: Deactivated successfully. Jul 6 23:55:25.651820 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:55:25.652395 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:55:25.653351 systemd-logind[1443]: Removed session 4. Jul 6 23:55:26.233743 systemd[1]: Starting docker.service - Docker Application Container Engine... 
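The sshd entries above record each inbound session with the accepted key's type and fingerprint. If those fields ever need to be pulled out of a journal dump like this one, a simple (hypothetical, regex-based) extractor looks like:

import re

# Pattern for the sshd "Accepted publickey" lines above (user, source, port, key).
ACCEPTED = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<addr>\S+) port (?P<port>\d+) "
    r"ssh2: (?P<keytype>\S+) (?P<fingerprint>\S+)"
)

def parse_accept(line: str) -> dict | None:
    """Return the fields of an 'Accepted publickey' line, or None if it is not one."""
    m = ACCEPTED.search(line)
    return m.groupdict() if m else None

if __name__ == "__main__":
    sample = ("sshd[1583]: Accepted publickey for core from 10.0.0.1 port 32988 "
              "ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4")
    print(parse_accept(sample))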
Jul 6 23:55:26.233943 (dockerd)[1618]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:55:27.293499 dockerd[1618]: time="2025-07-06T23:55:27.293403532Z" level=info msg="Starting up" Jul 6 23:55:27.902748 dockerd[1618]: time="2025-07-06T23:55:27.902656303Z" level=info msg="Loading containers: start." Jul 6 23:55:28.039346 kernel: Initializing XFRM netlink socket Jul 6 23:55:28.179235 systemd-networkd[1391]: docker0: Link UP Jul 6 23:55:28.207141 dockerd[1618]: time="2025-07-06T23:55:28.207085611Z" level=info msg="Loading containers: done." Jul 6 23:55:28.227715 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck130226577-merged.mount: Deactivated successfully. Jul 6 23:55:28.231016 dockerd[1618]: time="2025-07-06T23:55:28.230947049Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:55:28.231121 dockerd[1618]: time="2025-07-06T23:55:28.231087799Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 6 23:55:28.231274 dockerd[1618]: time="2025-07-06T23:55:28.231239272Z" level=info msg="Daemon has completed initialization" Jul 6 23:55:28.277423 dockerd[1618]: time="2025-07-06T23:55:28.277298422Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:55:28.278062 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:55:29.153855 containerd[1461]: time="2025-07-06T23:55:29.153765782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 6 23:55:30.126528 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3113677506.mount: Deactivated successfully. 
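Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that unix socket, and GET /_ping answers with an "OK" body. A dependency-free probe is sketched below, assuming the caller has permission to open the socket (root or the docker group):

import socket

def docker_ping(sock_path: str = "/run/docker.sock") -> str:
    """Send a minimal HTTP GET /_ping to the Docker Engine API over its unix socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

if __name__ == "__main__":
    # A healthy daemon answers with "OK" in the response body.
    print(docker_ping())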
Jul 6 23:55:32.238406 containerd[1461]: time="2025-07-06T23:55:32.238343540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.239179 containerd[1461]: time="2025-07-06T23:55:32.239116020Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=30079099" Jul 6 23:55:32.240307 containerd[1461]: time="2025-07-06T23:55:32.240260135Z" level=info msg="ImageCreate event name:\"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.243311 containerd[1461]: time="2025-07-06T23:55:32.243255518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:32.244840 containerd[1461]: time="2025-07-06T23:55:32.244786299Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"30075899\" in 3.090936121s" Jul 6 23:55:32.244918 containerd[1461]: time="2025-07-06T23:55:32.244853557Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:ee794efa53d856b7e291320be3cd6390fa2e113c3f258a21290bc27fc214233e\"" Jul 6 23:55:32.245754 containerd[1461]: time="2025-07-06T23:55:32.245712424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 6 23:55:32.908412 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:55:32.922653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:33.185942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:33.192751 (kubelet)[1829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:33.762375 kubelet[1829]: E0706 23:55:33.762276 1829 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:33.769813 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:33.770075 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
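The pull record above reports both a byte count ("bytes read=30079099") and a wall-clock duration ("in 3.090936121s"), so an approximate transfer rate can be read straight off the log. It is only approximate: "bytes read" is the registry traffic counter, not the stored image size.

def pull_throughput(bytes_read: int, seconds: float) -> float:
    """MiB/s for an image pull, from the byte count and duration containerd reports."""
    return bytes_read / seconds / (1024 * 1024)

if __name__ == "__main__":
    # Figures taken from the kube-apiserver pull above:
    # "bytes read=30079099" finished "in 3.090936121s".
    rate = pull_throughput(30079099, 3.090936121)
    print(f"kube-apiserver:v1.33.2 pulled at about {rate:.1f} MiB/s")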
Jul 6 23:55:34.898570 containerd[1461]: time="2025-07-06T23:55:34.898478495Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:34.899643 containerd[1461]: time="2025-07-06T23:55:34.899562516Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=26018946" Jul 6 23:55:34.900682 containerd[1461]: time="2025-07-06T23:55:34.900619761Z" level=info msg="ImageCreate event name:\"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:34.904164 containerd[1461]: time="2025-07-06T23:55:34.904113892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:34.905410 containerd[1461]: time="2025-07-06T23:55:34.905373580Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"27646507\" in 2.659610634s" Jul 6 23:55:34.905472 containerd[1461]: time="2025-07-06T23:55:34.905415908Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:ff4f56c76b82d6cda0555115a0fe479d5dd612264b85efb9cc14b1b4b937bdf2\"" Jul 6 23:55:34.905994 containerd[1461]: time="2025-07-06T23:55:34.905967442Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 6 23:55:36.613684 containerd[1461]: time="2025-07-06T23:55:36.613625074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.614194 containerd[1461]: time="2025-07-06T23:55:36.614151285Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=20155055" Jul 6 23:55:36.615327 containerd[1461]: time="2025-07-06T23:55:36.615301613Z" level=info msg="ImageCreate event name:\"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.617891 containerd[1461]: time="2025-07-06T23:55:36.617868665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:36.618943 containerd[1461]: time="2025-07-06T23:55:36.618918662Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"21782634\" in 1.712917041s" Jul 6 23:55:36.619072 containerd[1461]: time="2025-07-06T23:55:36.618948869Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:cfed1ff7489289d4e8d796b0d95fd251990403510563cf843912f42ab9718a7b\"" Jul 6 23:55:36.619555 containerd[1461]: 
time="2025-07-06T23:55:36.619518939Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 6 23:55:38.765701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969748304.mount: Deactivated successfully. Jul 6 23:55:40.082046 containerd[1461]: time="2025-07-06T23:55:40.081945419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:40.083136 containerd[1461]: time="2025-07-06T23:55:40.083054780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=31892746" Jul 6 23:55:40.084850 containerd[1461]: time="2025-07-06T23:55:40.084811109Z" level=info msg="ImageCreate event name:\"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:40.086968 containerd[1461]: time="2025-07-06T23:55:40.086919028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:40.087644 containerd[1461]: time="2025-07-06T23:55:40.087574913Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"31891765\" in 3.468025337s" Jul 6 23:55:40.087644 containerd[1461]: time="2025-07-06T23:55:40.087629565Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:661d404f36f01cd854403fd3540f18dcf0342d22bd9c6516bb9de234ac183b19\"" Jul 6 23:55:40.088387 containerd[1461]: time="2025-07-06T23:55:40.088344627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 6 23:55:40.601993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092973961.mount: Deactivated successfully. 
Jul 6 23:55:42.207954 containerd[1461]: time="2025-07-06T23:55:42.207860167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.208668 containerd[1461]: time="2025-07-06T23:55:42.208609437Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Jul 6 23:55:42.212079 containerd[1461]: time="2025-07-06T23:55:42.212012915Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.222385 containerd[1461]: time="2025-07-06T23:55:42.222333344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.224169 containerd[1461]: time="2025-07-06T23:55:42.224096479Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.135689425s" Jul 6 23:55:42.224218 containerd[1461]: time="2025-07-06T23:55:42.224173317Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jul 6 23:55:42.224796 containerd[1461]: time="2025-07-06T23:55:42.224774344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:55:42.720386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2008684234.mount: Deactivated successfully. 
Jul 6 23:55:42.726227 containerd[1461]: time="2025-07-06T23:55:42.726179405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.727011 containerd[1461]: time="2025-07-06T23:55:42.726940165Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 6 23:55:42.728085 containerd[1461]: time="2025-07-06T23:55:42.728043424Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.730473 containerd[1461]: time="2025-07-06T23:55:42.730429281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:42.731162 containerd[1461]: time="2025-07-06T23:55:42.731113876Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 506.311186ms" Jul 6 23:55:42.731162 containerd[1461]: time="2025-07-06T23:55:42.731145837Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 6 23:55:42.731723 containerd[1461]: time="2025-07-06T23:55:42.731687452Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 6 23:55:43.908276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 6 23:55:43.919476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:44.091474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:44.096689 (kubelet)[1917]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:55:44.279228 kubelet[1917]: E0706 23:55:44.279047 1917 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:55:44.283909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:55:44.284142 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:55:45.511402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2955246145.mount: Deactivated successfully. 
Jul 6 23:55:49.596869 containerd[1461]: time="2025-07-06T23:55:49.596767298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:49.597725 containerd[1461]: time="2025-07-06T23:55:49.597660011Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Jul 6 23:55:49.600467 containerd[1461]: time="2025-07-06T23:55:49.600354544Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:49.603868 containerd[1461]: time="2025-07-06T23:55:49.603803411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:55:49.604942 containerd[1461]: time="2025-07-06T23:55:49.604866302Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 6.873147747s" Jul 6 23:55:49.604942 containerd[1461]: time="2025-07-06T23:55:49.604920818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jul 6 23:55:52.401960 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:52.413556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:52.441763 systemd[1]: Reloading requested from client PID 2011 ('systemctl') (unit session-5.scope)... Jul 6 23:55:52.441781 systemd[1]: Reloading... Jul 6 23:55:52.540333 zram_generator::config[2053]: No configuration found. Jul 6 23:55:52.838999 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:55:52.918916 systemd[1]: Reloading finished in 476 ms. Jul 6 23:55:52.975105 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:52.978971 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:55:52.979247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:52.981163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:55:53.154946 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:55:53.161547 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:55:53.219175 kubelet[2100]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:53.219824 kubelet[2100]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
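After the reload, the kubelet starts with a config file but still warns that --container-runtime-endpoint should move into that file. A hypothetical migration step is sketched below; containerRuntimeEndpoint is assumed to be the matching KubeletConfiguration field in this kubelet release, and the socket path is the one containerd announced earlier, so verify both before relying on this.

import pathlib

CONFIG = "/var/lib/kubelet/config.yaml"

# Assumption: containerRuntimeEndpoint is the config-file counterpart of the
# deprecated --container-runtime-endpoint flag for the deployed kubelet version.
EXTRA = 'containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"\n'

def append_runtime_endpoint(path: str = CONFIG) -> None:
    """Append the runtime endpoint to the kubelet config if it is not already set."""
    p = pathlib.Path(path)
    text = p.read_text()
    if "containerRuntimeEndpoint" not in text:
        p.write_text(text.rstrip("\n") + "\n" + EXTRA)

if __name__ == "__main__":
    append_runtime_endpoint()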
Jul 6 23:55:53.219824 kubelet[2100]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:55:53.220212 kubelet[2100]: I0706 23:55:53.219882 2100 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:55:53.933170 kubelet[2100]: I0706 23:55:53.933095 2100 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:55:53.933170 kubelet[2100]: I0706 23:55:53.933143 2100 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:55:53.935100 kubelet[2100]: I0706 23:55:53.935049 2100 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:55:53.963908 kubelet[2100]: I0706 23:55:53.963832 2100 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:55:53.965576 kubelet[2100]: E0706 23:55:53.965544 2100 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:55:53.971275 kubelet[2100]: E0706 23:55:53.971095 2100 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:55:53.971275 kubelet[2100]: I0706 23:55:53.971143 2100 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:55:53.978734 kubelet[2100]: I0706 23:55:53.978670 2100 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:55:53.978992 kubelet[2100]: I0706 23:55:53.978954 2100 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:55:53.979759 kubelet[2100]: I0706 23:55:53.978989 2100 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:55:53.979759 kubelet[2100]: I0706 23:55:53.979199 2100 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:55:53.979759 kubelet[2100]: I0706 23:55:53.979210 2100 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:55:53.980685 kubelet[2100]: I0706 23:55:53.980637 2100 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:53.982932 kubelet[2100]: I0706 23:55:53.982875 2100 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:55:53.982932 kubelet[2100]: I0706 23:55:53.982905 2100 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:55:53.983050 kubelet[2100]: I0706 23:55:53.982945 2100 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:55:53.984431 kubelet[2100]: I0706 23:55:53.984382 2100 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:55:53.988604 kubelet[2100]: E0706 23:55:53.988525 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:55:53.990447 kubelet[2100]: E0706 23:55:53.989897 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:55:53.993038 
kubelet[2100]: I0706 23:55:53.993016 2100 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:55:53.993568 kubelet[2100]: I0706 23:55:53.993532 2100 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:55:53.994200 kubelet[2100]: W0706 23:55:53.994168 2100 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 6 23:55:53.997117 kubelet[2100]: I0706 23:55:53.997088 2100 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:55:53.997168 kubelet[2100]: I0706 23:55:53.997142 2100 server.go:1289] "Started kubelet" Jul 6 23:55:53.998238 kubelet[2100]: I0706 23:55:53.998161 2100 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:55:53.998939 kubelet[2100]: I0706 23:55:53.998855 2100 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:55:53.998939 kubelet[2100]: I0706 23:55:53.998915 2100 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:55:53.998939 kubelet[2100]: I0706 23:55:53.998930 2100 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:55:53.999955 kubelet[2100]: I0706 23:55:53.999928 2100 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:55:54.002245 kubelet[2100]: I0706 23:55:54.001205 2100 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:55:54.003236 kubelet[2100]: E0706 23:55:54.002042 2100 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.104:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fcecb9e21d03f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:55:53.997111359 +0000 UTC m=+0.827572771,LastTimestamp:2025-07-06 23:55:53.997111359 +0000 UTC m=+0.827572771,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:55:54.003953 kubelet[2100]: E0706 23:55:54.003927 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.004054 kubelet[2100]: E0706 23:55:54.003940 2100 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:55:54.004156 kubelet[2100]: I0706 23:55:54.004141 2100 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:55:54.004544 kubelet[2100]: I0706 23:55:54.004525 2100 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:55:54.004733 kubelet[2100]: I0706 23:55:54.004718 2100 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:55:54.005090 kubelet[2100]: I0706 23:55:54.005030 2100 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:55:54.005185 kubelet[2100]: I0706 23:55:54.005153 2100 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:55:54.005425 kubelet[2100]: E0706 23:55:54.005379 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="200ms" Jul 6 23:55:54.005597 kubelet[2100]: E0706 23:55:54.005535 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:55:54.006615 kubelet[2100]: I0706 23:55:54.006587 2100 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:55:54.024279 kubelet[2100]: I0706 23:55:54.024225 2100 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:55:54.026008 kubelet[2100]: I0706 23:55:54.025848 2100 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:55:54.026008 kubelet[2100]: I0706 23:55:54.025866 2100 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:55:54.026008 kubelet[2100]: I0706 23:55:54.025888 2100 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
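Every "connection refused" against https://10.0.0.104:6443 above simply means nothing is listening on the API server port yet (the control-plane static pods have not been created). A plain TCP probe reproduces that distinction without any Kubernetes client; no TLS handshake or authentication is attempted:

import socket

API_HOST, API_PORT = "10.0.0.104", 6443   # endpoint from the errors above

def apiserver_reachable(host: str = API_HOST, port: int = API_PORT,
                        timeout: float = 2.0) -> bool:
    """True if something accepts TCP on the API server port; prints the OS error
    (e.g. 'Connection refused', as in the log) otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError as exc:
        print(f"{host}:{port} not reachable: {exc}")
        return False

if __name__ == "__main__":
    apiserver_reachable()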
Jul 6 23:55:54.026008 kubelet[2100]: I0706 23:55:54.025895 2100 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:55:54.026008 kubelet[2100]: E0706 23:55:54.025954 2100 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:55:54.029853 kubelet[2100]: E0706 23:55:54.029815 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:55:54.030176 kubelet[2100]: I0706 23:55:54.030145 2100 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:55:54.030176 kubelet[2100]: I0706 23:55:54.030160 2100 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:55:54.030176 kubelet[2100]: I0706 23:55:54.030178 2100 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:55:54.104436 kubelet[2100]: E0706 23:55:54.104348 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.127135 kubelet[2100]: E0706 23:55:54.127070 2100 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:54.204620 kubelet[2100]: E0706 23:55:54.204476 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.206172 kubelet[2100]: E0706 23:55:54.206121 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="400ms" Jul 6 23:55:54.305455 kubelet[2100]: E0706 23:55:54.305354 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.327848 kubelet[2100]: E0706 23:55:54.327779 2100 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:54.406384 kubelet[2100]: E0706 23:55:54.406268 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.506547 kubelet[2100]: E0706 23:55:54.506363 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.607182 kubelet[2100]: E0706 23:55:54.606994 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.607639 kubelet[2100]: E0706 23:55:54.607585 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="800ms" Jul 6 23:55:54.708093 kubelet[2100]: E0706 23:55:54.708025 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.728355 kubelet[2100]: E0706 23:55:54.728277 2100 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 6 23:55:54.808889 kubelet[2100]: E0706 23:55:54.808762 2100 
kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:54.909457 kubelet[2100]: E0706 23:55:54.909396 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.010142 kubelet[2100]: E0706 23:55:55.010090 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.084533 kubelet[2100]: I0706 23:55:55.084404 2100 policy_none.go:49] "None policy: Start" Jul 6 23:55:55.084533 kubelet[2100]: I0706 23:55:55.084433 2100 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:55:55.084533 kubelet[2100]: I0706 23:55:55.084448 2100 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:55:55.109374 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:55:55.110914 kubelet[2100]: E0706 23:55:55.110877 2100 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:55:55.124447 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:55:55.127879 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 6 23:55:55.146797 kubelet[2100]: E0706 23:55:55.146751 2100 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:55:55.147118 kubelet[2100]: I0706 23:55:55.147078 2100 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:55:55.147118 kubelet[2100]: I0706 23:55:55.147098 2100 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:55:55.147658 kubelet[2100]: I0706 23:55:55.147388 2100 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:55:55.148157 kubelet[2100]: E0706 23:55:55.148134 2100 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:55:55.148272 kubelet[2100]: E0706 23:55:55.148245 2100 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:55:55.211023 kubelet[2100]: E0706 23:55:55.210954 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 6 23:55:55.249678 kubelet[2100]: I0706 23:55:55.249615 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:55.250092 kubelet[2100]: E0706 23:55:55.250042 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jul 6 23:55:55.270955 kubelet[2100]: E0706 23:55:55.270892 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 6 23:55:55.353642 kubelet[2100]: E0706 23:55:55.353485 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 6 23:55:55.408824 kubelet[2100]: E0706 23:55:55.408742 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="1.6s" Jul 6 23:55:55.452239 kubelet[2100]: I0706 23:55:55.452193 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:55.452557 kubelet[2100]: E0706 23:55:55.452510 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jul 6 23:55:55.542465 systemd[1]: Created slice kubepods-burstable-pod7221148fad4839e5e8e190ec26719d4e.slice - libcontainer container kubepods-burstable-pod7221148fad4839e5e8e190ec26719d4e.slice. Jul 6 23:55:55.545377 kubelet[2100]: E0706 23:55:55.545337 2100 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 6 23:55:55.552143 kubelet[2100]: E0706 23:55:55.552100 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:55.555568 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. 
Jul 6 23:55:55.557246 kubelet[2100]: E0706 23:55:55.557205 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:55.558948 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 6 23:55:55.560662 kubelet[2100]: E0706 23:55:55.560627 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:55.614249 kubelet[2100]: I0706 23:55:55.614128 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:55.614249 kubelet[2100]: I0706 23:55:55.614165 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:55.614249 kubelet[2100]: I0706 23:55:55.614189 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.614249 kubelet[2100]: I0706 23:55:55.614213 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.614455 kubelet[2100]: I0706 23:55:55.614376 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:55.614455 kubelet[2100]: I0706 23:55:55.614445 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.614528 kubelet[2100]: I0706 23:55:55.614479 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.614528 kubelet[2100]: I0706 23:55:55.614515 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:55.614602 kubelet[2100]: I0706 23:55:55.614538 2100 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:55.853812 kubelet[2100]: E0706 23:55:55.853760 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:55.854804 containerd[1461]: time="2025-07-06T23:55:55.854759858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7221148fad4839e5e8e190ec26719d4e,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:55.855355 kubelet[2100]: I0706 23:55:55.854850 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:55.855355 kubelet[2100]: E0706 23:55:55.855313 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: connect: connection refused" node="localhost" Jul 6 23:55:55.858751 kubelet[2100]: E0706 23:55:55.858634 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:55.859317 containerd[1461]: time="2025-07-06T23:55:55.859233131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:55.861621 kubelet[2100]: E0706 23:55:55.861538 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:55.862179 containerd[1461]: time="2025-07-06T23:55:55.862134073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 6 23:55:56.106258 kubelet[2100]: E0706 23:55:56.106193 2100 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.104:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 6 23:55:56.616170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469616975.mount: Deactivated successfully. 
Jul 6 23:55:56.624894 containerd[1461]: time="2025-07-06T23:55:56.624804360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.626021 containerd[1461]: time="2025-07-06T23:55:56.625989096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.626583 containerd[1461]: time="2025-07-06T23:55:56.626531313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:55:56.627578 containerd[1461]: time="2025-07-06T23:55:56.627535960Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.628561 containerd[1461]: time="2025-07-06T23:55:56.628518386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:56.629630 containerd[1461]: time="2025-07-06T23:55:56.629585804Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.630382 containerd[1461]: time="2025-07-06T23:55:56.630312563Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:55:56.633399 containerd[1461]: time="2025-07-06T23:55:56.633358108Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:55:56.635682 containerd[1461]: time="2025-07-06T23:55:56.635634299Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 776.263106ms" Jul 6 23:55:56.636178 containerd[1461]: time="2025-07-06T23:55:56.636133459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 781.269322ms" Jul 6 23:55:56.636777 containerd[1461]: time="2025-07-06T23:55:56.636735037Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 774.514565ms" Jul 6 23:55:56.658091 kubelet[2100]: I0706 23:55:56.657653 2100 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:56.658091 kubelet[2100]: E0706 23:55:56.658041 2100 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.104:6443/api/v1/nodes\": dial tcp 10.0.0.104:6443: 
connect: connection refused" node="localhost" Jul 6 23:55:56.859499 containerd[1461]: time="2025-07-06T23:55:56.859366637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:56.859499 containerd[1461]: time="2025-07-06T23:55:56.859421597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:56.859499 containerd[1461]: time="2025-07-06T23:55:56.859433920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.860089 containerd[1461]: time="2025-07-06T23:55:56.859529059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.862890 containerd[1461]: time="2025-07-06T23:55:56.862590258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:56.862890 containerd[1461]: time="2025-07-06T23:55:56.862645609Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:56.862890 containerd[1461]: time="2025-07-06T23:55:56.862657591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.862890 containerd[1461]: time="2025-07-06T23:55:56.862734570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.863470 containerd[1461]: time="2025-07-06T23:55:56.863369821Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:55:56.863470 containerd[1461]: time="2025-07-06T23:55:56.863426616Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:55:56.863470 containerd[1461]: time="2025-07-06T23:55:56.863437405Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.863855 containerd[1461]: time="2025-07-06T23:55:56.863770282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:55:56.967559 systemd[1]: Started cri-containerd-ef06d242b741d629a3e17960bd2a10a894f1fb59d4f07c7a6d485bcf2fda2b1f.scope - libcontainer container ef06d242b741d629a3e17960bd2a10a894f1fb59d4f07c7a6d485bcf2fda2b1f. Jul 6 23:55:56.974830 systemd[1]: Started cri-containerd-46316a11c60cbc54855e88280a4565a88e743f1d84229c5d0e47959d6b6a0c73.scope - libcontainer container 46316a11c60cbc54855e88280a4565a88e743f1d84229c5d0e47959d6b6a0c73. Jul 6 23:55:56.981406 systemd[1]: Started cri-containerd-d8f5fadde087dc74647322b1ab779b277da893c3a61957dc45fd63d70a50f3ec.scope - libcontainer container d8f5fadde087dc74647322b1ab779b277da893c3a61957dc45fd63d70a50f3ec. 
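Editor's note: every failure so far bottoms out in the same "dial tcp 10.0.0.104:6443: connect: connection refused" error, which is expected while containerd is still bringing up the kube-apiserver's sandbox. The standalone sketch below simply polls that port from the node until it accepts connections; it is a reproduction aid, not part of any component shown in this log.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForAPIServer polls the API server's TCP port until it accepts a
// connection, printing the same kind of "connection refused" error the
// kubelet logs while the static kube-apiserver pod is still coming up.
func waitForAPIServer(addr string, every time.Duration) {
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err != nil {
			fmt.Printf("dial tcp %s: %v\n", addr, err)
			time.Sleep(every)
			continue
		}
		conn.Close()
		fmt.Printf("%s is accepting connections\n", addr)
		return
	}
}

func main() {
	waitForAPIServer("10.0.0.104:6443", 2*time.Second)
}
```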
Jul 6 23:55:57.011297 kubelet[2100]: E0706 23:55:57.010524 2100 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.104:6443: connect: connection refused" interval="3.2s" Jul 6 23:55:57.028336 containerd[1461]: time="2025-07-06T23:55:57.028280720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ef06d242b741d629a3e17960bd2a10a894f1fb59d4f07c7a6d485bcf2fda2b1f\"" Jul 6 23:55:57.030843 kubelet[2100]: E0706 23:55:57.030818 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.031243 containerd[1461]: time="2025-07-06T23:55:57.031192790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7221148fad4839e5e8e190ec26719d4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8f5fadde087dc74647322b1ab779b277da893c3a61957dc45fd63d70a50f3ec\"" Jul 6 23:55:57.032474 kubelet[2100]: E0706 23:55:57.032424 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.034234 containerd[1461]: time="2025-07-06T23:55:57.034034531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"46316a11c60cbc54855e88280a4565a88e743f1d84229c5d0e47959d6b6a0c73\"" Jul 6 23:55:57.034628 kubelet[2100]: E0706 23:55:57.034601 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:57.038259 containerd[1461]: time="2025-07-06T23:55:57.038222537Z" level=info msg="CreateContainer within sandbox \"d8f5fadde087dc74647322b1ab779b277da893c3a61957dc45fd63d70a50f3ec\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:55:57.040230 containerd[1461]: time="2025-07-06T23:55:57.040192372Z" level=info msg="CreateContainer within sandbox \"ef06d242b741d629a3e17960bd2a10a894f1fb59d4f07c7a6d485bcf2fda2b1f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:55:57.042633 containerd[1461]: time="2025-07-06T23:55:57.042585894Z" level=info msg="CreateContainer within sandbox \"46316a11c60cbc54855e88280a4565a88e743f1d84229c5d0e47959d6b6a0c73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:55:57.057895 containerd[1461]: time="2025-07-06T23:55:57.057774693Z" level=info msg="CreateContainer within sandbox \"d8f5fadde087dc74647322b1ab779b277da893c3a61957dc45fd63d70a50f3ec\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"895b0f39543f8824aa225d8a5fe5652c19ab1c0c9d26e11862c8816cba600153\"" Jul 6 23:55:57.058391 containerd[1461]: time="2025-07-06T23:55:57.058362296Z" level=info msg="StartContainer for \"895b0f39543f8824aa225d8a5fe5652c19ab1c0c9d26e11862c8816cba600153\"" Jul 6 23:55:57.061692 containerd[1461]: time="2025-07-06T23:55:57.061652048Z" level=info msg="CreateContainer within sandbox \"ef06d242b741d629a3e17960bd2a10a894f1fb59d4f07c7a6d485bcf2fda2b1f\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ebad37763f94a70770b65714bfef4ec1e65c9988fa8d0f3e3864076321f78d6\"" Jul 6 23:55:57.062317 containerd[1461]: time="2025-07-06T23:55:57.061938675Z" level=info msg="StartContainer for \"9ebad37763f94a70770b65714bfef4ec1e65c9988fa8d0f3e3864076321f78d6\"" Jul 6 23:55:57.065490 containerd[1461]: time="2025-07-06T23:55:57.065451072Z" level=info msg="CreateContainer within sandbox \"46316a11c60cbc54855e88280a4565a88e743f1d84229c5d0e47959d6b6a0c73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fb9f54ce4da363f1d6f3db765268c7af44f9d8832b827196e541405634c04c65\"" Jul 6 23:55:57.065810 containerd[1461]: time="2025-07-06T23:55:57.065788195Z" level=info msg="StartContainer for \"fb9f54ce4da363f1d6f3db765268c7af44f9d8832b827196e541405634c04c65\"" Jul 6 23:55:57.087458 systemd[1]: Started cri-containerd-895b0f39543f8824aa225d8a5fe5652c19ab1c0c9d26e11862c8816cba600153.scope - libcontainer container 895b0f39543f8824aa225d8a5fe5652c19ab1c0c9d26e11862c8816cba600153. Jul 6 23:55:57.090716 systemd[1]: Started cri-containerd-9ebad37763f94a70770b65714bfef4ec1e65c9988fa8d0f3e3864076321f78d6.scope - libcontainer container 9ebad37763f94a70770b65714bfef4ec1e65c9988fa8d0f3e3864076321f78d6. Jul 6 23:55:57.099984 systemd[1]: Started cri-containerd-fb9f54ce4da363f1d6f3db765268c7af44f9d8832b827196e541405634c04c65.scope - libcontainer container fb9f54ce4da363f1d6f3db765268c7af44f9d8832b827196e541405634c04c65. Jul 6 23:55:57.135697 containerd[1461]: time="2025-07-06T23:55:57.135639365Z" level=info msg="StartContainer for \"895b0f39543f8824aa225d8a5fe5652c19ab1c0c9d26e11862c8816cba600153\" returns successfully" Jul 6 23:55:57.149902 containerd[1461]: time="2025-07-06T23:55:57.149615426Z" level=info msg="StartContainer for \"fb9f54ce4da363f1d6f3db765268c7af44f9d8832b827196e541405634c04c65\" returns successfully" Jul 6 23:55:57.149902 containerd[1461]: time="2025-07-06T23:55:57.149693827Z" level=info msg="StartContainer for \"9ebad37763f94a70770b65714bfef4ec1e65c9988fa8d0f3e3864076321f78d6\" returns successfully" Jul 6 23:55:58.042045 kubelet[2100]: E0706 23:55:58.041830 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:58.042045 kubelet[2100]: E0706 23:55:58.041971 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:58.044728 kubelet[2100]: E0706 23:55:58.044156 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:58.044728 kubelet[2100]: E0706 23:55:58.044233 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:58.053503 kubelet[2100]: E0706 23:55:58.053456 2100 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 6 23:55:58.053653 kubelet[2100]: E0706 23:55:58.053633 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:58.262836 kubelet[2100]: I0706 23:55:58.262789 2100 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:55:58.525426 kubelet[2100]: I0706 23:55:58.525017 2100 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:55:58.525426 kubelet[2100]: E0706 23:55:58.525104 2100 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:55:58.605499 kubelet[2100]: I0706 23:55:58.605403 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:58.609937 kubelet[2100]: E0706 23:55:58.609899 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:58.609937 kubelet[2100]: I0706 23:55:58.609931 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:58.611451 kubelet[2100]: E0706 23:55:58.611419 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:58.611451 kubelet[2100]: I0706 23:55:58.611453 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:58.612877 kubelet[2100]: E0706 23:55:58.612857 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:58.989475 kubelet[2100]: I0706 23:55:58.989419 2100 apiserver.go:52] "Watching apiserver" Jul 6 23:55:59.005509 kubelet[2100]: I0706 23:55:59.005461 2100 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:55:59.047141 kubelet[2100]: I0706 23:55:59.047063 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:59.048216 kubelet[2100]: I0706 23:55:59.047563 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:59.048662 kubelet[2100]: I0706 23:55:59.048628 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:59.049679 kubelet[2100]: E0706 23:55:59.049631 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:55:59.049809 kubelet[2100]: E0706 23:55:59.049790 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:59.050023 kubelet[2100]: E0706 23:55:59.049993 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 6 23:55:59.050200 kubelet[2100]: E0706 23:55:59.050167 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:55:59.050390 kubelet[2100]: 
E0706 23:55:59.050326 2100 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:55:59.050583 kubelet[2100]: E0706 23:55:59.050430 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:00.048461 kubelet[2100]: I0706 23:56:00.048422 2100 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:00.053813 kubelet[2100]: E0706 23:56:00.053788 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:00.815721 systemd[1]: Reloading requested from client PID 2388 ('systemctl') (unit session-5.scope)... Jul 6 23:56:00.815740 systemd[1]: Reloading... Jul 6 23:56:00.895553 zram_generator::config[2433]: No configuration found. Jul 6 23:56:01.049500 kubelet[2100]: E0706 23:56:01.049469 2100 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:01.130534 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:56:01.232177 systemd[1]: Reloading finished in 415 ms. Jul 6 23:56:01.277794 kubelet[2100]: I0706 23:56:01.277704 2100 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:01.277802 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:01.302747 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:56:01.303060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:01.303120 systemd[1]: kubelet.service: Consumed 1.491s CPU time, 133.4M memory peak, 0B memory swap peak. Jul 6 23:56:01.315754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:56:01.505717 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:56:01.511151 (kubelet)[2472]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:56:01.549888 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:56:01.549888 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 6 23:56:01.549888 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
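Editor's note: the restarted kubelet (PID 2472) keeps logging in the same klog-style header format as before: a severity letter, MMDD date, wall-clock time, the emitting PID, and the source file:line, followed by the structured message. When grepping these journals it can help to split that header out; the regular expression below is a hand-written approximation for these lines, not an official klog API, and is shown only to make the entries easier to read.

```go
package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches headers like "I0706 23:56:01.549918 2472 server.go:212]".
// The pattern is an approximation derived from this journal, not klog itself.
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+([\w./-]+:\d+)\] (.*)$`)

func parse(line string) {
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line:", line)
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}

func main() {
	// Example taken from the restarted kubelet below.
	parse(`I0706 23:56:01.579020 2472 server.go:1289] "Started kubelet"`)
}
```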
Jul 6 23:56:01.550282 kubelet[2472]: I0706 23:56:01.549918 2472 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:56:01.557354 kubelet[2472]: I0706 23:56:01.557311 2472 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 6 23:56:01.558181 kubelet[2472]: I0706 23:56:01.557547 2472 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:56:01.558181 kubelet[2472]: I0706 23:56:01.557809 2472 server.go:956] "Client rotation is on, will bootstrap in background" Jul 6 23:56:01.559071 kubelet[2472]: I0706 23:56:01.559053 2472 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 6 23:56:01.561143 kubelet[2472]: I0706 23:56:01.561100 2472 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:56:01.564134 kubelet[2472]: E0706 23:56:01.564096 2472 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 6 23:56:01.564134 kubelet[2472]: I0706 23:56:01.564130 2472 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 6 23:56:01.570310 kubelet[2472]: I0706 23:56:01.570268 2472 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:56:01.570593 kubelet[2472]: I0706 23:56:01.570547 2472 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:56:01.570777 kubelet[2472]: I0706 23:56:01.570577 2472 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:56:01.570871 kubelet[2472]: I0706 23:56:01.570782 2472 topology_manager.go:138] "Creating topology manager with 
none policy" Jul 6 23:56:01.570871 kubelet[2472]: I0706 23:56:01.570793 2472 container_manager_linux.go:303] "Creating device plugin manager" Jul 6 23:56:01.571460 kubelet[2472]: I0706 23:56:01.571433 2472 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:01.571614 kubelet[2472]: I0706 23:56:01.571587 2472 kubelet.go:480] "Attempting to sync node with API server" Jul 6 23:56:01.571614 kubelet[2472]: I0706 23:56:01.571603 2472 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:56:01.571661 kubelet[2472]: I0706 23:56:01.571623 2472 kubelet.go:386] "Adding apiserver pod source" Jul 6 23:56:01.571661 kubelet[2472]: I0706 23:56:01.571639 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:56:01.576162 kubelet[2472]: I0706 23:56:01.575161 2472 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 6 23:56:01.576162 kubelet[2472]: I0706 23:56:01.575612 2472 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 6 23:56:01.579456 kubelet[2472]: I0706 23:56:01.578978 2472 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 6 23:56:01.579456 kubelet[2472]: I0706 23:56:01.579020 2472 server.go:1289] "Started kubelet" Jul 6 23:56:01.579533 kubelet[2472]: I0706 23:56:01.579454 2472 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:56:01.579902 kubelet[2472]: I0706 23:56:01.579877 2472 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:56:01.581222 kubelet[2472]: I0706 23:56:01.581161 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:56:01.583027 kubelet[2472]: I0706 23:56:01.582993 2472 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:56:01.585452 kubelet[2472]: I0706 23:56:01.583890 2472 server.go:317] "Adding debug handlers to kubelet server" Jul 6 23:56:01.585452 kubelet[2472]: E0706 23:56:01.583980 2472 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:56:01.585452 kubelet[2472]: I0706 23:56:01.584199 2472 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 6 23:56:01.585452 kubelet[2472]: I0706 23:56:01.584221 2472 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:56:01.585452 kubelet[2472]: I0706 23:56:01.584351 2472 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 6 23:56:01.585452 kubelet[2472]: I0706 23:56:01.584503 2472 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:56:01.586126 kubelet[2472]: I0706 23:56:01.586096 2472 factory.go:223] Registration of the systemd container factory successfully Jul 6 23:56:01.586211 kubelet[2472]: I0706 23:56:01.586191 2472 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:56:01.587446 kubelet[2472]: I0706 23:56:01.587404 2472 factory.go:223] Registration of the containerd container factory successfully Jul 6 23:56:01.595732 kubelet[2472]: I0706 23:56:01.595683 2472 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 6 23:56:01.597145 kubelet[2472]: I0706 23:56:01.597108 2472 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 6 23:56:01.597145 kubelet[2472]: I0706 23:56:01.597132 2472 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 6 23:56:01.597224 kubelet[2472]: I0706 23:56:01.597154 2472 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
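Editor's note: the NodeConfig dump a few entries back lists the hard eviction thresholds this kubelet runs with: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The rough sketch below shows how such a threshold can be evaluated against an observed signal; the types and helper are invented for illustration and are not the kubelet's eviction API.

```go
package main

import "fmt"

// Threshold is a simplified stand-in for the hard eviction thresholds listed
// in the container manager config above: either an absolute quantity in bytes
// or a percentage of capacity.
type Threshold struct {
	Signal     string
	Percentage float64 // e.g. 0.10 for "nodefs.available < 10%"
	Quantity   int64   // e.g. 100 * 1024 * 1024 for "memory.available < 100Mi"
}

// crossed reports whether the observed available amount has fallen below the
// threshold, which is when an eviction manager would start evicting pods.
func (t Threshold) crossed(available, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return available < limit
}

func main() {
	memory := Threshold{Signal: "memory.available", Quantity: 100 * 1024 * 1024}
	nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}

	// Hypothetical observations, only to exercise the check.
	fmt.Println(memory.Signal, "crossed:", memory.crossed(80<<20, 8<<30))   // true: 80Mi available < 100Mi
	fmt.Println(nodefs.Signal, "crossed:", nodefs.crossed(20<<30, 100<<30)) // false: 20% of disk still free
}
```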
Jul 6 23:56:01.597224 kubelet[2472]: I0706 23:56:01.597165 2472 kubelet.go:2436] "Starting kubelet main sync loop" Jul 6 23:56:01.597224 kubelet[2472]: E0706 23:56:01.597213 2472 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:56:01.629635 kubelet[2472]: I0706 23:56:01.629592 2472 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 6 23:56:01.629635 kubelet[2472]: I0706 23:56:01.629613 2472 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 6 23:56:01.629635 kubelet[2472]: I0706 23:56:01.629633 2472 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:56:01.629801 kubelet[2472]: I0706 23:56:01.629769 2472 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:56:01.629801 kubelet[2472]: I0706 23:56:01.629779 2472 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:56:01.629801 kubelet[2472]: I0706 23:56:01.629796 2472 policy_none.go:49] "None policy: Start" Jul 6 23:56:01.629876 kubelet[2472]: I0706 23:56:01.629806 2472 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 6 23:56:01.629876 kubelet[2472]: I0706 23:56:01.629817 2472 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:56:01.629923 kubelet[2472]: I0706 23:56:01.629912 2472 state_mem.go:75] "Updated machine memory state" Jul 6 23:56:01.634068 kubelet[2472]: E0706 23:56:01.633925 2472 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 6 23:56:01.634169 kubelet[2472]: I0706 23:56:01.634149 2472 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:56:01.634199 kubelet[2472]: I0706 23:56:01.634169 2472 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:56:01.634478 kubelet[2472]: I0706 23:56:01.634446 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:56:01.635017 kubelet[2472]: E0706 23:56:01.634989 2472 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 6 23:56:01.698510 kubelet[2472]: I0706 23:56:01.698468 2472 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 6 23:56:01.698689 kubelet[2472]: I0706 23:56:01.698558 2472 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:01.698689 kubelet[2472]: I0706 23:56:01.698468 2472 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:01.739076 kubelet[2472]: E0706 23:56:01.738922 2472 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:01.741428 kubelet[2472]: I0706 23:56:01.741393 2472 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 6 23:56:01.787943 kubelet[2472]: I0706 23:56:01.787803 2472 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 6 23:56:01.787943 kubelet[2472]: I0706 23:56:01.787915 2472 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 6 23:56:01.886204 kubelet[2472]: I0706 23:56:01.886147 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:56:01.886204 kubelet[2472]: I0706 23:56:01.886187 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:01.886420 kubelet[2472]: I0706 23:56:01.886319 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:01.886420 kubelet[2472]: I0706 23:56:01.886378 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:01.886420 kubelet[2472]: I0706 23:56:01.886401 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7221148fad4839e5e8e190ec26719d4e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7221148fad4839e5e8e190ec26719d4e\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:01.886420 kubelet[2472]: I0706 23:56:01.886418 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:01.886553 
kubelet[2472]: I0706 23:56:01.886435 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:01.886553 kubelet[2472]: I0706 23:56:01.886452 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:01.886553 kubelet[2472]: I0706 23:56:01.886473 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:56:02.039181 kubelet[2472]: E0706 23:56:02.039045 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.039337 kubelet[2472]: E0706 23:56:02.039205 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.039440 kubelet[2472]: E0706 23:56:02.039429 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.573314 kubelet[2472]: I0706 23:56:02.573218 2472 apiserver.go:52] "Watching apiserver" Jul 6 23:56:02.585522 kubelet[2472]: I0706 23:56:02.585461 2472 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 6 23:56:02.610819 kubelet[2472]: I0706 23:56:02.610774 2472 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:02.611139 kubelet[2472]: E0706 23:56:02.611119 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.611203 kubelet[2472]: E0706 23:56:02.611174 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.618498 kubelet[2472]: E0706 23:56:02.618447 2472 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:56:02.619427 kubelet[2472]: E0706 23:56:02.618587 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:02.634265 kubelet[2472]: I0706 23:56:02.634202 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.634181796 podStartE2EDuration="1.634181796s" podCreationTimestamp="2025-07-06 
23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:02.633362928 +0000 UTC m=+1.117418366" watchObservedRunningTime="2025-07-06 23:56:02.634181796 +0000 UTC m=+1.118237214" Jul 6 23:56:02.634685 kubelet[2472]: I0706 23:56:02.634570 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.634564028 podStartE2EDuration="1.634564028s" podCreationTimestamp="2025-07-06 23:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:02.626393718 +0000 UTC m=+1.110449136" watchObservedRunningTime="2025-07-06 23:56:02.634564028 +0000 UTC m=+1.118619446" Jul 6 23:56:02.652404 kubelet[2472]: I0706 23:56:02.652328 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.652309024 podStartE2EDuration="2.652309024s" podCreationTimestamp="2025-07-06 23:56:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:02.642346082 +0000 UTC m=+1.126401510" watchObservedRunningTime="2025-07-06 23:56:02.652309024 +0000 UTC m=+1.136364453" Jul 6 23:56:02.931446 sudo[1598]: pam_unix(sudo:session): session closed for user root Jul 6 23:56:02.934279 sshd[1593]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:02.939274 systemd[1]: sshd@4-10.0.0.104:22-10.0.0.1:33008.service: Deactivated successfully. Jul 6 23:56:02.941455 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:56:02.941641 systemd[1]: session-5.scope: Consumed 4.864s CPU time, 160.2M memory peak, 0B memory swap peak. Jul 6 23:56:02.942045 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:56:02.942988 systemd-logind[1443]: Removed session 5. Jul 6 23:56:03.611847 kubelet[2472]: E0706 23:56:03.611809 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:03.611847 kubelet[2472]: E0706 23:56:03.611824 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:04.613724 kubelet[2472]: E0706 23:56:04.613688 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:05.614578 kubelet[2472]: E0706 23:56:05.614535 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:05.693412 update_engine[1444]: I20250706 23:56:05.693341 1444 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:56:05.722995 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2550) Jul 6 23:56:05.761339 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2550) Jul 6 23:56:05.796578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2550) Jul 6 23:56:06.137586 kubelet[2472]: E0706 23:56:06.137538 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:07.745764 kubelet[2472]: I0706 23:56:07.745714 2472 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:56:07.746316 containerd[1461]: time="2025-07-06T23:56:07.746135399Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 6 23:56:07.746667 kubelet[2472]: I0706 23:56:07.746315 2472 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:56:08.626993 systemd[1]: Created slice kubepods-besteffort-pod830e43ac_69a8_4cbf_934d_932f83b0403a.slice - libcontainer container kubepods-besteffort-pod830e43ac_69a8_4cbf_934d_932f83b0403a.slice. Jul 6 23:56:08.629861 kubelet[2472]: I0706 23:56:08.629819 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6169b842-5917-4f10-bb8c-77c44c7f7564-cni-plugin\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.629861 kubelet[2472]: I0706 23:56:08.629851 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpwlb\" (UniqueName: \"kubernetes.io/projected/6169b842-5917-4f10-bb8c-77c44c7f7564-kube-api-access-jpwlb\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.630158 kubelet[2472]: I0706 23:56:08.629871 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6169b842-5917-4f10-bb8c-77c44c7f7564-run\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.630158 kubelet[2472]: I0706 23:56:08.629885 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6169b842-5917-4f10-bb8c-77c44c7f7564-flannel-cfg\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.630158 kubelet[2472]: I0706 23:56:08.629899 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6169b842-5917-4f10-bb8c-77c44c7f7564-xtables-lock\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.630158 kubelet[2472]: I0706 23:56:08.629916 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/830e43ac-69a8-4cbf-934d-932f83b0403a-kube-proxy\") pod \"kube-proxy-np2dp\" 
(UID: \"830e43ac-69a8-4cbf-934d-932f83b0403a\") " pod="kube-system/kube-proxy-np2dp" Jul 6 23:56:08.630158 kubelet[2472]: I0706 23:56:08.629959 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/830e43ac-69a8-4cbf-934d-932f83b0403a-lib-modules\") pod \"kube-proxy-np2dp\" (UID: \"830e43ac-69a8-4cbf-934d-932f83b0403a\") " pod="kube-system/kube-proxy-np2dp" Jul 6 23:56:08.630441 kubelet[2472]: I0706 23:56:08.630001 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6169b842-5917-4f10-bb8c-77c44c7f7564-cni\") pod \"kube-flannel-ds-rjflk\" (UID: \"6169b842-5917-4f10-bb8c-77c44c7f7564\") " pod="kube-flannel/kube-flannel-ds-rjflk" Jul 6 23:56:08.630441 kubelet[2472]: I0706 23:56:08.630063 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/830e43ac-69a8-4cbf-934d-932f83b0403a-xtables-lock\") pod \"kube-proxy-np2dp\" (UID: \"830e43ac-69a8-4cbf-934d-932f83b0403a\") " pod="kube-system/kube-proxy-np2dp" Jul 6 23:56:08.630441 kubelet[2472]: I0706 23:56:08.630092 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgbtw\" (UniqueName: \"kubernetes.io/projected/830e43ac-69a8-4cbf-934d-932f83b0403a-kube-api-access-qgbtw\") pod \"kube-proxy-np2dp\" (UID: \"830e43ac-69a8-4cbf-934d-932f83b0403a\") " pod="kube-system/kube-proxy-np2dp" Jul 6 23:56:08.639508 systemd[1]: Created slice kubepods-burstable-pod6169b842_5917_4f10_bb8c_77c44c7f7564.slice - libcontainer container kubepods-burstable-pod6169b842_5917_4f10_bb8c_77c44c7f7564.slice. Jul 6 23:56:08.936928 kubelet[2472]: E0706 23:56:08.936876 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:08.937688 containerd[1461]: time="2025-07-06T23:56:08.937641230Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np2dp,Uid:830e43ac-69a8-4cbf-934d-932f83b0403a,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:08.942102 kubelet[2472]: E0706 23:56:08.942067 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:08.943269 containerd[1461]: time="2025-07-06T23:56:08.943213565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rjflk,Uid:6169b842-5917-4f10-bb8c-77c44c7f7564,Namespace:kube-flannel,Attempt:0,}" Jul 6 23:56:08.968417 containerd[1461]: time="2025-07-06T23:56:08.967308515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:08.968417 containerd[1461]: time="2025-07-06T23:56:08.968027381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:08.968417 containerd[1461]: time="2025-07-06T23:56:08.968040581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.968417 containerd[1461]: time="2025-07-06T23:56:08.968263150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.982640 containerd[1461]: time="2025-07-06T23:56:08.981425413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:08.982640 containerd[1461]: time="2025-07-06T23:56:08.982512296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:08.982640 containerd[1461]: time="2025-07-06T23:56:08.982531551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.983431 containerd[1461]: time="2025-07-06T23:56:08.983157389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:08.991514 systemd[1]: Started cri-containerd-89c4a242c2eca22b9e9288c08bd2552f770af036d1761e5e83bfd381873daffd.scope - libcontainer container 89c4a242c2eca22b9e9288c08bd2552f770af036d1761e5e83bfd381873daffd. Jul 6 23:56:08.998743 systemd[1]: Started cri-containerd-7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627.scope - libcontainer container 7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627. Jul 6 23:56:09.019408 containerd[1461]: time="2025-07-06T23:56:09.019344498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np2dp,Uid:830e43ac-69a8-4cbf-934d-932f83b0403a,Namespace:kube-system,Attempt:0,} returns sandbox id \"89c4a242c2eca22b9e9288c08bd2552f770af036d1761e5e83bfd381873daffd\"" Jul 6 23:56:09.020666 kubelet[2472]: E0706 23:56:09.020629 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:09.026320 containerd[1461]: time="2025-07-06T23:56:09.026245394Z" level=info msg="CreateContainer within sandbox \"89c4a242c2eca22b9e9288c08bd2552f770af036d1761e5e83bfd381873daffd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:56:09.041247 containerd[1461]: time="2025-07-06T23:56:09.041169219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-rjflk,Uid:6169b842-5917-4f10-bb8c-77c44c7f7564,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\"" Jul 6 23:56:09.042009 kubelet[2472]: E0706 23:56:09.041973 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:09.042870 containerd[1461]: time="2025-07-06T23:56:09.042838434Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\"" Jul 6 23:56:09.052436 containerd[1461]: time="2025-07-06T23:56:09.052380940Z" level=info msg="CreateContainer within sandbox \"89c4a242c2eca22b9e9288c08bd2552f770af036d1761e5e83bfd381873daffd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e77f22c87ce480e6287516fd9d8ec68b1c1b4b3d52faf8600f23fbbc30984792\"" Jul 6 23:56:09.052950 containerd[1461]: time="2025-07-06T23:56:09.052908268Z" level=info msg="StartContainer for \"e77f22c87ce480e6287516fd9d8ec68b1c1b4b3d52faf8600f23fbbc30984792\"" Jul 6 23:56:09.079455 systemd[1]: Started cri-containerd-e77f22c87ce480e6287516fd9d8ec68b1c1b4b3d52faf8600f23fbbc30984792.scope - libcontainer container 
e77f22c87ce480e6287516fd9d8ec68b1c1b4b3d52faf8600f23fbbc30984792. Jul 6 23:56:09.111346 containerd[1461]: time="2025-07-06T23:56:09.111179576Z" level=info msg="StartContainer for \"e77f22c87ce480e6287516fd9d8ec68b1c1b4b3d52faf8600f23fbbc30984792\" returns successfully" Jul 6 23:56:09.623097 kubelet[2472]: E0706 23:56:09.623038 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:10.529917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191765549.mount: Deactivated successfully. Jul 6 23:56:10.569487 containerd[1461]: time="2025-07-06T23:56:10.569432390Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.570172 containerd[1461]: time="2025-07-06T23:56:10.570115403Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1: active requests=0, bytes read=4857008" Jul 6 23:56:10.571425 containerd[1461]: time="2025-07-06T23:56:10.571390392Z" level=info msg="ImageCreate event name:\"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.574422 containerd[1461]: time="2025-07-06T23:56:10.574385720Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:10.575967 containerd[1461]: time="2025-07-06T23:56:10.575913508Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" with image id \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\", repo tag \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\", repo digest \"ghcr.io/flannel-io/flannel-cni-plugin@sha256:f1812994f0edbcb5bb5ccb63be2147ba6ad10e1faaa7ca9fcdad4f441739d84f\", size \"4856838\" in 1.533039783s" Jul 6 23:56:10.575967 containerd[1461]: time="2025-07-06T23:56:10.575954944Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel-cni-plugin:v1.6.2-flannel1\" returns image reference \"sha256:55ce2385d9d8c6f720091c177fbe885a21c9dc07c9e480bfb4d94b3001f58182\"" Jul 6 23:56:10.580627 containerd[1461]: time="2025-07-06T23:56:10.580591970Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 6 23:56:10.591543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1519891663.mount: Deactivated successfully. Jul 6 23:56:10.592972 containerd[1461]: time="2025-07-06T23:56:10.592923044Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96\"" Jul 6 23:56:10.593553 containerd[1461]: time="2025-07-06T23:56:10.593516813Z" level=info msg="StartContainer for \"8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96\"" Jul 6 23:56:10.626480 systemd[1]: Started cri-containerd-8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96.scope - libcontainer container 8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96. 
Jul 6 23:56:10.656664 containerd[1461]: time="2025-07-06T23:56:10.656617087Z" level=info msg="StartContainer for \"8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96\" returns successfully" Jul 6 23:56:10.657476 systemd[1]: cri-containerd-8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96.scope: Deactivated successfully. Jul 6 23:56:10.727857 containerd[1461]: time="2025-07-06T23:56:10.725276019Z" level=info msg="shim disconnected" id=8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96 namespace=k8s.io Jul 6 23:56:10.727857 containerd[1461]: time="2025-07-06T23:56:10.727831540Z" level=warning msg="cleaning up after shim disconnected" id=8662ffbdc87f1309decb6db4d30c5e53c42d95dc02aa64f7051f8d8a51bcbe96 namespace=k8s.io Jul 6 23:56:10.727857 containerd[1461]: time="2025-07-06T23:56:10.727842445Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:11.629610 kubelet[2472]: E0706 23:56:11.629560 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:11.630326 containerd[1461]: time="2025-07-06T23:56:11.630086105Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\"" Jul 6 23:56:11.642525 kubelet[2472]: I0706 23:56:11.642420 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-np2dp" podStartSLOduration=3.642367447 podStartE2EDuration="3.642367447s" podCreationTimestamp="2025-07-06 23:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:09.63218213 +0000 UTC m=+8.116237558" watchObservedRunningTime="2025-07-06 23:56:11.642367447 +0000 UTC m=+10.126422865" Jul 6 23:56:11.987712 kubelet[2472]: E0706 23:56:11.987666 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:12.631746 kubelet[2472]: E0706 23:56:12.631703 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:13.818727 containerd[1461]: time="2025-07-06T23:56:13.818631302Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel:v0.26.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:13.819253 containerd[1461]: time="2025-07-06T23:56:13.819183599Z" level=info msg="stop pulling image ghcr.io/flannel-io/flannel:v0.26.7: active requests=0, bytes read=29354574" Jul 6 23:56:13.820562 containerd[1461]: time="2025-07-06T23:56:13.820496726Z" level=info msg="ImageCreate event name:\"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:13.824392 containerd[1461]: time="2025-07-06T23:56:13.824330665Z" level=info msg="ImageCreate event name:\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:56:13.826273 containerd[1461]: time="2025-07-06T23:56:13.826239316Z" level=info msg="Pulled image \"ghcr.io/flannel-io/flannel:v0.26.7\" with image id \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\", repo tag \"ghcr.io/flannel-io/flannel:v0.26.7\", repo digest 
\"ghcr.io/flannel-io/flannel@sha256:7f471907fa940f944867270de4ed78121b8b4c5d564e17f940dc787cb16dea82\", size \"32996046\" in 2.196114102s" Jul 6 23:56:13.826334 containerd[1461]: time="2025-07-06T23:56:13.826277792Z" level=info msg="PullImage \"ghcr.io/flannel-io/flannel:v0.26.7\" returns image reference \"sha256:965b9dd4aa4c1b6b68a4c54a166692b4645b6e6f8a5937d8dc17736cb63f515e\"" Jul 6 23:56:13.831519 containerd[1461]: time="2025-07-06T23:56:13.831469850Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 6 23:56:13.842847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982429988.mount: Deactivated successfully. Jul 6 23:56:13.843779 containerd[1461]: time="2025-07-06T23:56:13.843717012Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01\"" Jul 6 23:56:13.844468 containerd[1461]: time="2025-07-06T23:56:13.844420126Z" level=info msg="StartContainer for \"a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01\"" Jul 6 23:56:13.882429 systemd[1]: Started cri-containerd-a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01.scope - libcontainer container a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01. Jul 6 23:56:13.907744 systemd[1]: cri-containerd-a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01.scope: Deactivated successfully. Jul 6 23:56:13.910447 containerd[1461]: time="2025-07-06T23:56:13.910402682Z" level=info msg="StartContainer for \"a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01\" returns successfully" Jul 6 23:56:13.929052 kubelet[2472]: E0706 23:56:13.928979 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:14.008224 kubelet[2472]: I0706 23:56:14.008186 2472 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 6 23:56:14.419647 containerd[1461]: time="2025-07-06T23:56:14.419397009Z" level=info msg="shim disconnected" id=a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01 namespace=k8s.io Jul 6 23:56:14.419647 containerd[1461]: time="2025-07-06T23:56:14.419452802Z" level=warning msg="cleaning up after shim disconnected" id=a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01 namespace=k8s.io Jul 6 23:56:14.419647 containerd[1461]: time="2025-07-06T23:56:14.419461011Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:56:14.438677 systemd[1]: Created slice kubepods-burstable-poda703ca22_e3ed_458c_ab76_c3c195f22f27.slice - libcontainer container kubepods-burstable-poda703ca22_e3ed_458c_ab76_c3c195f22f27.slice. Jul 6 23:56:14.447549 systemd[1]: Created slice kubepods-burstable-pod3994ed64_0722_4eea_b3b5_d16c6af4438b.slice - libcontainer container kubepods-burstable-pod3994ed64_0722_4eea_b3b5_d16c6af4438b.slice. 
Jul 6 23:56:14.462461 kubelet[2472]: I0706 23:56:14.462407 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a703ca22-e3ed-458c-ab76-c3c195f22f27-config-volume\") pod \"coredns-674b8bbfcf-wchhw\" (UID: \"a703ca22-e3ed-458c-ab76-c3c195f22f27\") " pod="kube-system/coredns-674b8bbfcf-wchhw" Jul 6 23:56:14.462461 kubelet[2472]: I0706 23:56:14.462445 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x29zm\" (UniqueName: \"kubernetes.io/projected/a703ca22-e3ed-458c-ab76-c3c195f22f27-kube-api-access-x29zm\") pod \"coredns-674b8bbfcf-wchhw\" (UID: \"a703ca22-e3ed-458c-ab76-c3c195f22f27\") " pod="kube-system/coredns-674b8bbfcf-wchhw" Jul 6 23:56:14.462461 kubelet[2472]: I0706 23:56:14.462470 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2sz9\" (UniqueName: \"kubernetes.io/projected/3994ed64-0722-4eea-b3b5-d16c6af4438b-kube-api-access-d2sz9\") pod \"coredns-674b8bbfcf-fdwt7\" (UID: \"3994ed64-0722-4eea-b3b5-d16c6af4438b\") " pod="kube-system/coredns-674b8bbfcf-fdwt7" Jul 6 23:56:14.462713 kubelet[2472]: I0706 23:56:14.462491 2472 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3994ed64-0722-4eea-b3b5-d16c6af4438b-config-volume\") pod \"coredns-674b8bbfcf-fdwt7\" (UID: \"3994ed64-0722-4eea-b3b5-d16c6af4438b\") " pod="kube-system/coredns-674b8bbfcf-fdwt7" Jul 6 23:56:14.636797 kubelet[2472]: E0706 23:56:14.636737 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:14.643371 containerd[1461]: time="2025-07-06T23:56:14.643311377Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 6 23:56:14.659855 containerd[1461]: time="2025-07-06T23:56:14.659801432Z" level=info msg="CreateContainer within sandbox \"7ee7049d2ada313f9416f1751d80f82ab058c23efb218bf5363326e2230f5627\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"87dd60a70a8a2380705680801db70a67eac313bc1c59a6f07a934b1836e24f17\"" Jul 6 23:56:14.660483 containerd[1461]: time="2025-07-06T23:56:14.660341102Z" level=info msg="StartContainer for \"87dd60a70a8a2380705680801db70a67eac313bc1c59a6f07a934b1836e24f17\"" Jul 6 23:56:14.691457 systemd[1]: Started cri-containerd-87dd60a70a8a2380705680801db70a67eac313bc1c59a6f07a934b1836e24f17.scope - libcontainer container 87dd60a70a8a2380705680801db70a67eac313bc1c59a6f07a934b1836e24f17. 
Jul 6 23:56:14.719818 containerd[1461]: time="2025-07-06T23:56:14.719766963Z" level=info msg="StartContainer for \"87dd60a70a8a2380705680801db70a67eac313bc1c59a6f07a934b1836e24f17\" returns successfully" Jul 6 23:56:14.744598 kubelet[2472]: E0706 23:56:14.744532 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:14.745316 containerd[1461]: time="2025-07-06T23:56:14.745175107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wchhw,Uid:a703ca22-e3ed-458c-ab76-c3c195f22f27,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:14.751080 kubelet[2472]: E0706 23:56:14.751031 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:14.751655 containerd[1461]: time="2025-07-06T23:56:14.751611039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fdwt7,Uid:3994ed64-0722-4eea-b3b5-d16c6af4438b,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:14.792383 containerd[1461]: time="2025-07-06T23:56:14.792169882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wchhw,Uid:a703ca22-e3ed-458c-ab76-c3c195f22f27,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d47419635be4dfddaa770d0090c21cbc4d3aa595e09a6e5f5228b7268c207007\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 6 23:56:14.792623 kubelet[2472]: E0706 23:56:14.792522 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47419635be4dfddaa770d0090c21cbc4d3aa595e09a6e5f5228b7268c207007\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 6 23:56:14.792623 kubelet[2472]: E0706 23:56:14.792612 2472 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47419635be4dfddaa770d0090c21cbc4d3aa595e09a6e5f5228b7268c207007\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-wchhw" Jul 6 23:56:14.792727 kubelet[2472]: E0706 23:56:14.792640 2472 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d47419635be4dfddaa770d0090c21cbc4d3aa595e09a6e5f5228b7268c207007\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-wchhw" Jul 6 23:56:14.792727 kubelet[2472]: E0706 23:56:14.792705 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-wchhw_kube-system(a703ca22-e3ed-458c-ab76-c3c195f22f27)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-wchhw_kube-system(a703ca22-e3ed-458c-ab76-c3c195f22f27)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d47419635be4dfddaa770d0090c21cbc4d3aa595e09a6e5f5228b7268c207007\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or 
directory\"" pod="kube-system/coredns-674b8bbfcf-wchhw" podUID="a703ca22-e3ed-458c-ab76-c3c195f22f27" Jul 6 23:56:14.797545 containerd[1461]: time="2025-07-06T23:56:14.797451248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fdwt7,Uid:3994ed64-0722-4eea-b3b5-d16c6af4438b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d9da756cb1573aee8cdb80e83453c116cf1c8d70bb5abe609e6b7d75190c17ef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 6 23:56:14.797814 kubelet[2472]: E0706 23:56:14.797766 2472 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9da756cb1573aee8cdb80e83453c116cf1c8d70bb5abe609e6b7d75190c17ef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 6 23:56:14.797893 kubelet[2472]: E0706 23:56:14.797867 2472 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9da756cb1573aee8cdb80e83453c116cf1c8d70bb5abe609e6b7d75190c17ef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-fdwt7" Jul 6 23:56:14.797947 kubelet[2472]: E0706 23:56:14.797897 2472 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9da756cb1573aee8cdb80e83453c116cf1c8d70bb5abe609e6b7d75190c17ef\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-674b8bbfcf-fdwt7" Jul 6 23:56:14.798041 kubelet[2472]: E0706 23:56:14.798002 2472 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fdwt7_kube-system(3994ed64-0722-4eea-b3b5-d16c6af4438b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fdwt7_kube-system(3994ed64-0722-4eea-b3b5-d16c6af4438b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9da756cb1573aee8cdb80e83453c116cf1c8d70bb5abe609e6b7d75190c17ef\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-674b8bbfcf-fdwt7" podUID="3994ed64-0722-4eea-b3b5-d16c6af4438b" Jul 6 23:56:14.842763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a3394e315f8251461b69ca710ac3c08367e2fe798fa68ddb9ec66da98ad9ef01-rootfs.mount: Deactivated successfully. 
Jul 6 23:56:15.640441 kubelet[2472]: E0706 23:56:15.640363 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:15.650616 kubelet[2472]: I0706 23:56:15.650233 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-rjflk" podStartSLOduration=2.865338634 podStartE2EDuration="7.650214755s" podCreationTimestamp="2025-07-06 23:56:08 +0000 UTC" firstStartedPulling="2025-07-06 23:56:09.042592325 +0000 UTC m=+7.526647733" lastFinishedPulling="2025-07-06 23:56:13.827468436 +0000 UTC m=+12.311523854" observedRunningTime="2025-07-06 23:56:15.650179367 +0000 UTC m=+14.134234785" watchObservedRunningTime="2025-07-06 23:56:15.650214755 +0000 UTC m=+14.134270173" Jul 6 23:56:15.769751 systemd-networkd[1391]: flannel.1: Link UP Jul 6 23:56:15.769766 systemd-networkd[1391]: flannel.1: Gained carrier Jul 6 23:56:16.142373 kubelet[2472]: E0706 23:56:16.142337 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:16.642123 kubelet[2472]: E0706 23:56:16.642071 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:17.184631 systemd-networkd[1391]: flannel.1: Gained IPv6LL Jul 6 23:56:23.587770 systemd[1]: Started sshd@5-10.0.0.104:22-10.0.0.1:36650.service - OpenSSH per-connection server daemon (10.0.0.1:36650). Jul 6 23:56:23.638820 sshd[3149]: Accepted publickey for core from 10.0.0.1 port 36650 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:23.641108 sshd[3149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:23.647325 systemd-logind[1443]: New session 6 of user core. Jul 6 23:56:23.657543 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 6 23:56:23.788392 sshd[3149]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:23.793128 systemd[1]: sshd@5-10.0.0.104:22-10.0.0.1:36650.service: Deactivated successfully. Jul 6 23:56:23.795494 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:56:23.796170 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:56:23.797161 systemd-logind[1443]: Removed session 6. 
Jul 6 23:56:26.597868 kubelet[2472]: E0706 23:56:26.597818 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:26.598559 containerd[1461]: time="2025-07-06T23:56:26.598305254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fdwt7,Uid:3994ed64-0722-4eea-b3b5-d16c6af4438b,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:26.814913 systemd-networkd[1391]: cni0: Link UP Jul 6 23:56:26.814924 systemd-networkd[1391]: cni0: Gained carrier Jul 6 23:56:26.819766 systemd-networkd[1391]: cni0: Lost carrier Jul 6 23:56:26.824703 systemd-networkd[1391]: vethd8d3d502: Link UP Jul 6 23:56:26.826989 kernel: cni0: port 1(vethd8d3d502) entered blocking state Jul 6 23:56:26.827082 kernel: cni0: port 1(vethd8d3d502) entered disabled state Jul 6 23:56:26.827228 kernel: vethd8d3d502: entered allmulticast mode Jul 6 23:56:26.828331 kernel: vethd8d3d502: entered promiscuous mode Jul 6 23:56:26.830089 kernel: cni0: port 1(vethd8d3d502) entered blocking state Jul 6 23:56:26.830169 kernel: cni0: port 1(vethd8d3d502) entered forwarding state Jul 6 23:56:26.831284 kernel: cni0: port 1(vethd8d3d502) entered disabled state Jul 6 23:56:26.839170 kernel: cni0: port 1(vethd8d3d502) entered blocking state Jul 6 23:56:26.839268 kernel: cni0: port 1(vethd8d3d502) entered forwarding state Jul 6 23:56:26.839429 systemd-networkd[1391]: vethd8d3d502: Gained carrier Jul 6 23:56:26.839755 systemd-networkd[1391]: cni0: Gained carrier Jul 6 23:56:26.841805 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000129a0), "name":"cbr0", "type":"bridge"} Jul 6 23:56:26.841805 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Jul 6 23:56:26.868232 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-06T23:56:26.867909472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:26.868232 containerd[1461]: time="2025-07-06T23:56:26.867979670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:26.868232 containerd[1461]: time="2025-07-06T23:56:26.867992777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.868232 containerd[1461]: time="2025-07-06T23:56:26.868135367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:26.896495 systemd[1]: Started cri-containerd-59831195453f17ab0644c556d5e7641e9317062e7f9bf8d9a561c9863d92c7ec.scope - libcontainer container 59831195453f17ab0644c556d5e7641e9317062e7f9bf8d9a561c9863d92c7ec. Jul 6 23:56:26.908573 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:26.934457 containerd[1461]: time="2025-07-06T23:56:26.934410560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fdwt7,Uid:3994ed64-0722-4eea-b3b5-d16c6af4438b,Namespace:kube-system,Attempt:0,} returns sandbox id \"59831195453f17ab0644c556d5e7641e9317062e7f9bf8d9a561c9863d92c7ec\"" Jul 6 23:56:26.935373 kubelet[2472]: E0706 23:56:26.935346 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:27.095245 containerd[1461]: time="2025-07-06T23:56:27.095178808Z" level=info msg="CreateContainer within sandbox \"59831195453f17ab0644c556d5e7641e9317062e7f9bf8d9a561c9863d92c7ec\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:27.462399 containerd[1461]: time="2025-07-06T23:56:27.462327667Z" level=info msg="CreateContainer within sandbox \"59831195453f17ab0644c556d5e7641e9317062e7f9bf8d9a561c9863d92c7ec\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2127e13e5750e287a27be28a91812721ac5292383b2c5190330f591d35caf72c\"" Jul 6 23:56:27.462977 containerd[1461]: time="2025-07-06T23:56:27.462946673Z" level=info msg="StartContainer for \"2127e13e5750e287a27be28a91812721ac5292383b2c5190330f591d35caf72c\"" Jul 6 23:56:27.492444 systemd[1]: Started cri-containerd-2127e13e5750e287a27be28a91812721ac5292383b2c5190330f591d35caf72c.scope - libcontainer container 2127e13e5750e287a27be28a91812721ac5292383b2c5190330f591d35caf72c. Jul 6 23:56:27.562418 containerd[1461]: time="2025-07-06T23:56:27.562368888Z" level=info msg="StartContainer for \"2127e13e5750e287a27be28a91812721ac5292383b2c5190330f591d35caf72c\" returns successfully" Jul 6 23:56:27.664846 kubelet[2472]: E0706 23:56:27.664778 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:27.684546 kubelet[2472]: I0706 23:56:27.684450 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fdwt7" podStartSLOduration=19.684423221 podStartE2EDuration="19.684423221s" podCreationTimestamp="2025-07-06 23:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:27.674686948 +0000 UTC m=+26.158742366" watchObservedRunningTime="2025-07-06 23:56:27.684423221 +0000 UTC m=+26.168478639" Jul 6 23:56:27.747371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182530685.mount: Deactivated successfully. 
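
The netconf dump above shows flannel delegating pod networking to the bridge plugin (cbr0) with host-local IPAM for the node's 192.168.0.0/24 pod subnet. The extra route is printed twice: once as a Go struct with Mask net.IPMask{0xff, 0xff, 0x80, 0x0}, and once in the delegated JSON as "dst":"192.168.0.0/17". The short Go sketch below is only a sanity check that those two forms describe the same /17 prefix.

    // Sanity check that net.IPMask{0xff, 0xff, 0x80, 0x0} from the struct dump
    // above is the /17 prefix rendered as "dst":"192.168.0.0/17" in the JSON.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        dst := net.IPNet{
            IP:   net.IP{192, 168, 0, 0},             // 0xc0, 0xa8, 0x0, 0x0 in the dump
            Mask: net.IPMask{0xff, 0xff, 0x80, 0x00}, // two full octets plus one high bit = 17 bits
        }
        ones, bits := dst.Mask.Size()
        fmt.Printf("%s (/%d of %d bits)\n", dst.String(), ones, bits) // 192.168.0.0/17 (/17 of 32 bits)
    }
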
Jul 6 23:56:28.448470 systemd-networkd[1391]: cni0: Gained IPv6LL Jul 6 23:56:28.512411 systemd-networkd[1391]: vethd8d3d502: Gained IPv6LL Jul 6 23:56:28.597829 kubelet[2472]: E0706 23:56:28.597770 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.598299 containerd[1461]: time="2025-07-06T23:56:28.598245783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wchhw,Uid:a703ca22-e3ed-458c-ab76-c3c195f22f27,Namespace:kube-system,Attempt:0,}" Jul 6 23:56:28.622590 systemd-networkd[1391]: veth050252e7: Link UP Jul 6 23:56:28.624678 kernel: cni0: port 2(veth050252e7) entered blocking state Jul 6 23:56:28.624738 kernel: cni0: port 2(veth050252e7) entered disabled state Jul 6 23:56:28.625432 kernel: veth050252e7: entered allmulticast mode Jul 6 23:56:28.625554 kernel: veth050252e7: entered promiscuous mode Jul 6 23:56:28.633426 kernel: cni0: port 2(veth050252e7) entered blocking state Jul 6 23:56:28.633472 kernel: cni0: port 2(veth050252e7) entered forwarding state Jul 6 23:56:28.633574 systemd-networkd[1391]: veth050252e7: Gained carrier Jul 6 23:56:28.635639 containerd[1461]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil), MTU:0, AdvMSS:0, Priority:0, Table:(*int)(nil), Scope:(*int)(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a950), "name":"cbr0", "type":"bridge"} Jul 6 23:56:28.635639 containerd[1461]: delegateAdd: netconf sent to delegate plugin: Jul 6 23:56:28.661124 containerd[1461]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-07-06T23:56:28.660980425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:56:28.661124 containerd[1461]: time="2025-07-06T23:56:28.661078890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:56:28.661383 containerd[1461]: time="2025-07-06T23:56:28.661092638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:28.661383 containerd[1461]: time="2025-07-06T23:56:28.661213250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:56:28.667082 kubelet[2472]: E0706 23:56:28.666985 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.691455 systemd[1]: Started cri-containerd-53b6ebc483af5cfce653423bf6d9c67c027157b95bd5e805cad998e224b93d2e.scope - libcontainer container 53b6ebc483af5cfce653423bf6d9c67c027157b95bd5e805cad998e224b93d2e. 
Jul 6 23:56:28.704338 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:56:28.729180 containerd[1461]: time="2025-07-06T23:56:28.729119080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-wchhw,Uid:a703ca22-e3ed-458c-ab76-c3c195f22f27,Namespace:kube-system,Attempt:0,} returns sandbox id \"53b6ebc483af5cfce653423bf6d9c67c027157b95bd5e805cad998e224b93d2e\"" Jul 6 23:56:28.730125 kubelet[2472]: E0706 23:56:28.730090 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:28.738940 containerd[1461]: time="2025-07-06T23:56:28.738752949Z" level=info msg="CreateContainer within sandbox \"53b6ebc483af5cfce653423bf6d9c67c027157b95bd5e805cad998e224b93d2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 6 23:56:28.753591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount990105462.mount: Deactivated successfully. Jul 6 23:56:28.754962 containerd[1461]: time="2025-07-06T23:56:28.754900547Z" level=info msg="CreateContainer within sandbox \"53b6ebc483af5cfce653423bf6d9c67c027157b95bd5e805cad998e224b93d2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"264f5d8b60896b1dc64a86955f5cddda5d90c6b46ad835dda651bd8f08ec2cff\"" Jul 6 23:56:28.755467 containerd[1461]: time="2025-07-06T23:56:28.755437226Z" level=info msg="StartContainer for \"264f5d8b60896b1dc64a86955f5cddda5d90c6b46ad835dda651bd8f08ec2cff\"" Jul 6 23:56:28.787461 systemd[1]: Started cri-containerd-264f5d8b60896b1dc64a86955f5cddda5d90c6b46ad835dda651bd8f08ec2cff.scope - libcontainer container 264f5d8b60896b1dc64a86955f5cddda5d90c6b46ad835dda651bd8f08ec2cff. Jul 6 23:56:28.805204 systemd[1]: Started sshd@6-10.0.0.104:22-10.0.0.1:36664.service - OpenSSH per-connection server daemon (10.0.0.1:36664). Jul 6 23:56:28.825439 containerd[1461]: time="2025-07-06T23:56:28.825389426Z" level=info msg="StartContainer for \"264f5d8b60896b1dc64a86955f5cddda5d90c6b46ad835dda651bd8f08ec2cff\" returns successfully" Jul 6 23:56:28.841988 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 36664 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:28.844537 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:28.849574 systemd-logind[1443]: New session 7 of user core. Jul 6 23:56:28.859496 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:56:28.983573 sshd[3418]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:28.988683 systemd[1]: sshd@6-10.0.0.104:22-10.0.0.1:36664.service: Deactivated successfully. Jul 6 23:56:28.991213 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:56:28.991974 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:56:28.992974 systemd-logind[1443]: Removed session 7. 
Jul 6 23:56:29.669881 kubelet[2472]: E0706 23:56:29.669665 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:29.669881 kubelet[2472]: E0706 23:56:29.669793 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:29.677982 kubelet[2472]: I0706 23:56:29.677888 2472 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-wchhw" podStartSLOduration=21.677871106 podStartE2EDuration="21.677871106s" podCreationTimestamp="2025-07-06 23:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:56:29.677856606 +0000 UTC m=+28.161912034" watchObservedRunningTime="2025-07-06 23:56:29.677871106 +0000 UTC m=+28.161926544" Jul 6 23:56:30.176473 systemd-networkd[1391]: veth050252e7: Gained IPv6LL Jul 6 23:56:30.671979 kubelet[2472]: E0706 23:56:30.671941 2472 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:56:33.999171 systemd[1]: Started sshd@7-10.0.0.104:22-10.0.0.1:57290.service - OpenSSH per-connection server daemon (10.0.0.1:57290). Jul 6 23:56:34.033983 sshd[3475]: Accepted publickey for core from 10.0.0.1 port 57290 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:34.036096 sshd[3475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:34.040738 systemd-logind[1443]: New session 8 of user core. Jul 6 23:56:34.061619 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 6 23:56:34.184922 sshd[3475]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:34.196159 systemd[1]: sshd@7-10.0.0.104:22-10.0.0.1:57290.service: Deactivated successfully. Jul 6 23:56:34.197851 systemd[1]: session-8.scope: Deactivated successfully. Jul 6 23:56:34.199512 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jul 6 23:56:34.207530 systemd[1]: Started sshd@8-10.0.0.104:22-10.0.0.1:57292.service - OpenSSH per-connection server daemon (10.0.0.1:57292). Jul 6 23:56:34.208445 systemd-logind[1443]: Removed session 8. Jul 6 23:56:34.236640 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 57292 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:34.238253 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:34.242323 systemd-logind[1443]: New session 9 of user core. Jul 6 23:56:34.249400 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 6 23:56:34.396693 sshd[3490]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:34.408440 systemd[1]: sshd@8-10.0.0.104:22-10.0.0.1:57292.service: Deactivated successfully. Jul 6 23:56:34.410235 systemd[1]: session-9.scope: Deactivated successfully. Jul 6 23:56:34.412624 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jul 6 23:56:34.421666 systemd[1]: Started sshd@9-10.0.0.104:22-10.0.0.1:57296.service - OpenSSH per-connection server daemon (10.0.0.1:57296). Jul 6 23:56:34.422528 systemd-logind[1443]: Removed session 9. 
Jul 6 23:56:34.448541 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 57296 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:34.450160 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:34.453992 systemd-logind[1443]: New session 10 of user core. Jul 6 23:56:34.463430 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 6 23:56:34.569385 sshd[3503]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:34.573242 systemd[1]: sshd@9-10.0.0.104:22-10.0.0.1:57296.service: Deactivated successfully. Jul 6 23:56:34.575337 systemd[1]: session-10.scope: Deactivated successfully. Jul 6 23:56:34.575985 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jul 6 23:56:34.576863 systemd-logind[1443]: Removed session 10. Jul 6 23:56:39.584120 systemd[1]: Started sshd@10-10.0.0.104:22-10.0.0.1:49946.service - OpenSSH per-connection server daemon (10.0.0.1:49946). Jul 6 23:56:39.617476 sshd[3541]: Accepted publickey for core from 10.0.0.1 port 49946 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:39.619457 sshd[3541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:39.623978 systemd-logind[1443]: New session 11 of user core. Jul 6 23:56:39.633513 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 6 23:56:39.741214 sshd[3541]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:39.746152 systemd[1]: sshd@10-10.0.0.104:22-10.0.0.1:49946.service: Deactivated successfully. Jul 6 23:56:39.748342 systemd[1]: session-11.scope: Deactivated successfully. Jul 6 23:56:39.748944 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Jul 6 23:56:39.749884 systemd-logind[1443]: Removed session 11. Jul 6 23:56:44.758136 systemd[1]: Started sshd@11-10.0.0.104:22-10.0.0.1:49960.service - OpenSSH per-connection server daemon (10.0.0.1:49960). Jul 6 23:56:44.790583 sshd[3576]: Accepted publickey for core from 10.0.0.1 port 49960 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:44.792204 sshd[3576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:44.796351 systemd-logind[1443]: New session 12 of user core. Jul 6 23:56:44.806421 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 6 23:56:44.916125 sshd[3576]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:44.920380 systemd[1]: sshd@11-10.0.0.104:22-10.0.0.1:49960.service: Deactivated successfully. Jul 6 23:56:44.922646 systemd[1]: session-12.scope: Deactivated successfully. Jul 6 23:56:44.923279 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jul 6 23:56:44.924269 systemd-logind[1443]: Removed session 12. Jul 6 23:56:49.928708 systemd[1]: Started sshd@12-10.0.0.104:22-10.0.0.1:45508.service - OpenSSH per-connection server daemon (10.0.0.1:45508). Jul 6 23:56:49.961554 sshd[3611]: Accepted publickey for core from 10.0.0.1 port 45508 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:49.963423 sshd[3611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:49.968401 systemd-logind[1443]: New session 13 of user core. Jul 6 23:56:49.976459 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jul 6 23:56:50.101778 sshd[3611]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:50.106938 systemd[1]: sshd@12-10.0.0.104:22-10.0.0.1:45508.service: Deactivated successfully. Jul 6 23:56:50.109213 systemd[1]: session-13.scope: Deactivated successfully. Jul 6 23:56:50.110031 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jul 6 23:56:50.111396 systemd-logind[1443]: Removed session 13. Jul 6 23:56:55.115457 systemd[1]: Started sshd@13-10.0.0.104:22-10.0.0.1:45524.service - OpenSSH per-connection server daemon (10.0.0.1:45524). Jul 6 23:56:55.165905 sshd[3646]: Accepted publickey for core from 10.0.0.1 port 45524 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:55.167921 sshd[3646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:55.172346 systemd-logind[1443]: New session 14 of user core. Jul 6 23:56:55.182543 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 6 23:56:55.298810 sshd[3646]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:55.312799 systemd[1]: sshd@13-10.0.0.104:22-10.0.0.1:45524.service: Deactivated successfully. Jul 6 23:56:55.315008 systemd[1]: session-14.scope: Deactivated successfully. Jul 6 23:56:55.316894 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jul 6 23:56:55.325586 systemd[1]: Started sshd@14-10.0.0.104:22-10.0.0.1:45534.service - OpenSSH per-connection server daemon (10.0.0.1:45534). Jul 6 23:56:55.326585 systemd-logind[1443]: Removed session 14. Jul 6 23:56:55.356759 sshd[3660]: Accepted publickey for core from 10.0.0.1 port 45534 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:55.358721 sshd[3660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:55.362987 systemd-logind[1443]: New session 15 of user core. Jul 6 23:56:55.372427 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 6 23:56:55.570319 sshd[3660]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:55.578874 systemd[1]: sshd@14-10.0.0.104:22-10.0.0.1:45534.service: Deactivated successfully. Jul 6 23:56:55.580872 systemd[1]: session-15.scope: Deactivated successfully. Jul 6 23:56:55.582665 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jul 6 23:56:55.599561 systemd[1]: Started sshd@15-10.0.0.104:22-10.0.0.1:45550.service - OpenSSH per-connection server daemon (10.0.0.1:45550). Jul 6 23:56:55.600966 systemd-logind[1443]: Removed session 15. Jul 6 23:56:55.631442 sshd[3673]: Accepted publickey for core from 10.0.0.1 port 45550 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:55.633324 sshd[3673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:55.637821 systemd-logind[1443]: New session 16 of user core. Jul 6 23:56:55.647446 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 6 23:56:56.784195 sshd[3673]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:56.792518 systemd[1]: sshd@15-10.0.0.104:22-10.0.0.1:45550.service: Deactivated successfully. Jul 6 23:56:56.794490 systemd[1]: session-16.scope: Deactivated successfully. Jul 6 23:56:56.795868 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jul 6 23:56:56.804733 systemd[1]: Started sshd@16-10.0.0.104:22-10.0.0.1:45552.service - OpenSSH per-connection server daemon (10.0.0.1:45552). 
Jul 6 23:56:56.805733 systemd-logind[1443]: Removed session 16. Jul 6 23:56:56.832026 sshd[3713]: Accepted publickey for core from 10.0.0.1 port 45552 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:56.833606 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:56.837731 systemd-logind[1443]: New session 17 of user core. Jul 6 23:56:56.845557 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 6 23:56:57.336096 sshd[3713]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:57.348680 systemd[1]: sshd@16-10.0.0.104:22-10.0.0.1:45552.service: Deactivated successfully. Jul 6 23:56:57.350924 systemd[1]: session-17.scope: Deactivated successfully. Jul 6 23:56:57.351819 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jul 6 23:56:57.360604 systemd[1]: Started sshd@17-10.0.0.104:22-10.0.0.1:45558.service - OpenSSH per-connection server daemon (10.0.0.1:45558). Jul 6 23:56:57.361426 systemd-logind[1443]: Removed session 17. Jul 6 23:56:57.390646 sshd[3726]: Accepted publickey for core from 10.0.0.1 port 45558 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:56:57.392765 sshd[3726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:56:57.397665 systemd-logind[1443]: New session 18 of user core. Jul 6 23:56:57.408476 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 6 23:56:57.519406 sshd[3726]: pam_unix(sshd:session): session closed for user core Jul 6 23:56:57.523347 systemd[1]: sshd@17-10.0.0.104:22-10.0.0.1:45558.service: Deactivated successfully. Jul 6 23:56:57.525505 systemd[1]: session-18.scope: Deactivated successfully. Jul 6 23:56:57.526375 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Jul 6 23:56:57.527379 systemd-logind[1443]: Removed session 18. Jul 6 23:57:02.530954 systemd[1]: Started sshd@18-10.0.0.104:22-10.0.0.1:58100.service - OpenSSH per-connection server daemon (10.0.0.1:58100). Jul 6 23:57:02.569499 sshd[3762]: Accepted publickey for core from 10.0.0.1 port 58100 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:02.571406 sshd[3762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:02.576021 systemd-logind[1443]: New session 19 of user core. Jul 6 23:57:02.585437 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:57:02.693546 sshd[3762]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:02.697737 systemd[1]: sshd@18-10.0.0.104:22-10.0.0.1:58100.service: Deactivated successfully. Jul 6 23:57:02.699970 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:57:02.700626 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:57:02.701549 systemd-logind[1443]: Removed session 19. Jul 6 23:57:07.709719 systemd[1]: Started sshd@19-10.0.0.104:22-10.0.0.1:58112.service - OpenSSH per-connection server daemon (10.0.0.1:58112). Jul 6 23:57:07.743690 sshd[3798]: Accepted publickey for core from 10.0.0.1 port 58112 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:07.745516 sshd[3798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:07.750113 systemd-logind[1443]: New session 20 of user core. Jul 6 23:57:07.759484 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jul 6 23:57:07.947720 sshd[3798]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:07.952053 systemd[1]: sshd@19-10.0.0.104:22-10.0.0.1:58112.service: Deactivated successfully. Jul 6 23:57:07.954259 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:57:07.954961 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jul 6 23:57:07.955987 systemd-logind[1443]: Removed session 20. Jul 6 23:57:12.959798 systemd[1]: Started sshd@20-10.0.0.104:22-10.0.0.1:41942.service - OpenSSH per-connection server daemon (10.0.0.1:41942). Jul 6 23:57:12.992787 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 41942 ssh2: RSA SHA256:9QYV+m92awFBb0AmA0Mv9BfSJ4HlnldfdyOdj1iBPG4 Jul 6 23:57:12.994628 sshd[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:57:12.998935 systemd-logind[1443]: New session 21 of user core. Jul 6 23:57:13.008420 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:57:13.111799 sshd[3835]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:13.116252 systemd[1]: sshd@20-10.0.0.104:22-10.0.0.1:41942.service: Deactivated successfully. Jul 6 23:57:13.118584 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:57:13.119572 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:57:13.120609 systemd-logind[1443]: Removed session 21.